Apr 16 17:40:31.530148 ip-10-0-134-233 systemd[1]: Starting Kubernetes Kubelet... Apr 16 17:40:31.926602 ip-10-0-134-233 kubenswrapper[2560]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 17:40:31.926602 ip-10-0-134-233 kubenswrapper[2560]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Apr 16 17:40:31.926602 ip-10-0-134-233 kubenswrapper[2560]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 17:40:31.926602 ip-10-0-134-233 kubenswrapper[2560]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 16 17:40:31.926602 ip-10-0-134-233 kubenswrapper[2560]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 17:40:31.927800 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.927356 2560 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 16 17:40:31.933192 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933167 2560 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 16 17:40:31.933192 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933187 2560 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 16 17:40:31.933192 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933191 2560 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 16 17:40:31.933192 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933195 2560 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 16 17:40:31.933192 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933197 2560 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 16 17:40:31.933192 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933200 2560 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 16 17:40:31.933421 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933205 2560 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 16 17:40:31.933421 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933210 2560 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Apr 16 17:40:31.933421 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933214 2560 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 16 17:40:31.933421 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933217 2560 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 16 17:40:31.933421 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933220 2560 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 16 17:40:31.933421 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933222 2560 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 16 17:40:31.933421 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933225 2560 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 16 17:40:31.933421 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933227 2560 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 16 17:40:31.933421 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933230 2560 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 16 17:40:31.933421 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933233 2560 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 16 17:40:31.933421 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933236 2560 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 16 17:40:31.933421 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933238 2560 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 16 17:40:31.933421 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933241 2560 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 16 17:40:31.933421 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933243 2560 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 16 17:40:31.933421 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933246 2560 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 16 17:40:31.933421 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933249 2560 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 16 17:40:31.933421 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933251 2560 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 16 17:40:31.933421 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933254 2560 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 16 17:40:31.933421 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933257 2560 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 16 17:40:31.933421 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933260 2560 feature_gate.go:328] unrecognized feature gate: Example2 Apr 16 17:40:31.933928 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933262 2560 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 16 17:40:31.933928 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933265 2560 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 16 17:40:31.933928 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933268 2560 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 16 17:40:31.933928 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933270 2560 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 16 17:40:31.933928 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933273 2560 feature_gate.go:328] unrecognized feature gate: 
NewOLMOwnSingleNamespace Apr 16 17:40:31.933928 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933275 2560 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 16 17:40:31.933928 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933278 2560 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 16 17:40:31.933928 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933280 2560 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 16 17:40:31.933928 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933282 2560 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 16 17:40:31.933928 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933285 2560 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 16 17:40:31.933928 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933288 2560 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 16 17:40:31.933928 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933291 2560 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 16 17:40:31.933928 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933293 2560 feature_gate.go:328] unrecognized feature gate: Example Apr 16 17:40:31.933928 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933297 2560 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 16 17:40:31.933928 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933299 2560 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 16 17:40:31.933928 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933302 2560 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 16 17:40:31.933928 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933305 2560 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 16 17:40:31.933928 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933308 2560 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 16 17:40:31.933928 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933310 2560 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 16 17:40:31.933928 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933313 2560 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 16 17:40:31.933928 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933316 2560 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 16 17:40:31.934447 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933319 2560 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 16 17:40:31.934447 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933321 2560 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 16 17:40:31.934447 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933324 2560 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 16 17:40:31.934447 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933327 2560 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 16 17:40:31.934447 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933330 2560 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 16 17:40:31.934447 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933332 2560 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 16 17:40:31.934447 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933335 
2560 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 16 17:40:31.934447 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933337 2560 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 16 17:40:31.934447 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933340 2560 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 16 17:40:31.934447 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933343 2560 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 16 17:40:31.934447 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933347 2560 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 16 17:40:31.934447 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933350 2560 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 16 17:40:31.934447 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933352 2560 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 16 17:40:31.934447 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933355 2560 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 16 17:40:31.934447 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933357 2560 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 16 17:40:31.934447 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933360 2560 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 16 17:40:31.934447 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933363 2560 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 16 17:40:31.934447 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933365 2560 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 16 17:40:31.934447 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933368 2560 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 16 17:40:31.934922 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933371 2560 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 16 17:40:31.934922 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933373 2560 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 16 17:40:31.934922 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933376 2560 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 16 17:40:31.934922 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933378 2560 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 16 17:40:31.934922 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933380 2560 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 16 17:40:31.934922 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933384 2560 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 16 17:40:31.934922 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933387 2560 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 16 17:40:31.934922 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933390 2560 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 16 17:40:31.934922 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933396 2560 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Apr 16 17:40:31.934922 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933400 2560 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 16 17:40:31.934922 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933403 2560 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 16 17:40:31.934922 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933406 2560 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 16 17:40:31.934922 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933410 2560 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 16 17:40:31.934922 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933413 2560 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 16 17:40:31.934922 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933416 2560 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 16 17:40:31.934922 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933419 2560 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 16 17:40:31.934922 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933422 2560 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 16 17:40:31.934922 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933425 2560 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 16 17:40:31.934922 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933428 2560 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 16 17:40:31.935390 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.933431 2560 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 16 17:40:31.935390 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934443 2560 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 16 17:40:31.935390 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934451 2560 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 16 17:40:31.935390 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934455 2560 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 16 17:40:31.935390 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934458 2560 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 16 17:40:31.935390 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934461 2560 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 16 17:40:31.935390 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934464 2560 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 16 17:40:31.935390 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934467 2560 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 16 17:40:31.935390 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934470 2560 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 16 17:40:31.935390 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934473 2560 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 16 17:40:31.935390 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934476 2560 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 16 17:40:31.935390 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934478 2560 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 16 
17:40:31.935390 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934481 2560 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 16 17:40:31.935390 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934484 2560 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 16 17:40:31.935390 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934487 2560 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 16 17:40:31.935390 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934490 2560 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 16 17:40:31.935390 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934492 2560 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 16 17:40:31.935390 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934495 2560 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 16 17:40:31.935390 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934498 2560 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 16 17:40:31.935858 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934501 2560 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 16 17:40:31.935858 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934504 2560 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 16 17:40:31.935858 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934506 2560 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 16 17:40:31.935858 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934509 2560 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 16 17:40:31.935858 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934511 2560 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 16 17:40:31.935858 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934514 2560 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 16 17:40:31.935858 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934517 2560 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 16 17:40:31.935858 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934520 2560 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 16 17:40:31.935858 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934522 2560 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 16 17:40:31.935858 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934525 2560 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 16 17:40:31.935858 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934527 2560 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 16 17:40:31.935858 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934530 2560 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 16 17:40:31.935858 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934532 2560 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 16 17:40:31.935858 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934535 2560 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 16 17:40:31.935858 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934537 2560 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 16 17:40:31.935858 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934540 2560 
feature_gate.go:328] unrecognized feature gate: Example2 Apr 16 17:40:31.935858 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934542 2560 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 16 17:40:31.935858 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934547 2560 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Apr 16 17:40:31.935858 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934550 2560 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 16 17:40:31.936361 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934552 2560 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 16 17:40:31.936361 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934555 2560 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 16 17:40:31.936361 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934558 2560 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 16 17:40:31.936361 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934561 2560 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 16 17:40:31.936361 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934566 2560 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 16 17:40:31.936361 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934569 2560 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 16 17:40:31.936361 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934571 2560 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 16 17:40:31.936361 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934574 2560 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 16 17:40:31.936361 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934576 2560 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 16 17:40:31.936361 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934579 2560 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 16 17:40:31.936361 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934582 2560 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 16 17:40:31.936361 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934584 2560 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 16 17:40:31.936361 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934588 2560 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 16 17:40:31.936361 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934591 2560 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 16 17:40:31.936361 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934594 2560 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 16 17:40:31.936361 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934596 2560 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 16 17:40:31.936361 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934599 2560 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 16 17:40:31.936361 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934601 2560 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 16 17:40:31.936361 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934604 2560 feature_gate.go:328] unrecognized feature gate: Example Apr 16 
17:40:31.936361 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934607 2560 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 16 17:40:31.936862 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934609 2560 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 16 17:40:31.936862 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934612 2560 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 16 17:40:31.936862 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934615 2560 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 16 17:40:31.936862 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934617 2560 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 16 17:40:31.936862 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934620 2560 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 16 17:40:31.936862 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934622 2560 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 16 17:40:31.936862 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934625 2560 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 16 17:40:31.936862 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934628 2560 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 16 17:40:31.936862 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934630 2560 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 16 17:40:31.936862 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934633 2560 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 16 17:40:31.936862 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934635 2560 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 16 17:40:31.936862 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934638 2560 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 16 17:40:31.936862 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934640 2560 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 16 17:40:31.936862 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934643 2560 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 16 17:40:31.936862 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934645 2560 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 16 17:40:31.936862 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934648 2560 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 16 17:40:31.936862 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934650 2560 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 16 17:40:31.936862 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934653 2560 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 16 17:40:31.936862 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934655 2560 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 16 17:40:31.937320 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934658 2560 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 16 17:40:31.937320 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934660 2560 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 16 17:40:31.937320 ip-10-0-134-233 
kubenswrapper[2560]: W0416 17:40:31.934663 2560 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 16 17:40:31.937320 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934665 2560 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 16 17:40:31.937320 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934668 2560 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 16 17:40:31.937320 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934674 2560 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Apr 16 17:40:31.937320 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934678 2560 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 16 17:40:31.937320 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934681 2560 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 16 17:40:31.937320 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934684 2560 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 16 17:40:31.937320 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.934686 2560 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 16 17:40:31.937320 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934761 2560 flags.go:64] FLAG: --address="0.0.0.0" Apr 16 17:40:31.937320 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934768 2560 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Apr 16 17:40:31.937320 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934775 2560 flags.go:64] FLAG: --anonymous-auth="true" Apr 16 17:40:31.937320 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934779 2560 flags.go:64] FLAG: --application-metrics-count-limit="100" Apr 16 17:40:31.937320 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934784 2560 flags.go:64] FLAG: --authentication-token-webhook="false" Apr 16 17:40:31.937320 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934788 2560 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Apr 16 17:40:31.937320 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934793 2560 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Apr 16 17:40:31.937320 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934797 2560 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Apr 16 17:40:31.937320 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934801 2560 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Apr 16 17:40:31.937320 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934804 2560 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Apr 16 17:40:31.937320 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934808 2560 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Apr 16 17:40:31.937840 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934812 2560 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Apr 16 17:40:31.937840 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934815 2560 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Apr 16 17:40:31.937840 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934818 2560 flags.go:64] FLAG: --cgroup-root="" Apr 16 17:40:31.937840 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934821 2560 flags.go:64] FLAG: --cgroups-per-qos="true" Apr 16 17:40:31.937840 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934823 2560 flags.go:64] FLAG: --client-ca-file="" Apr 16 17:40:31.937840 
ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934826 2560 flags.go:64] FLAG: --cloud-config="" Apr 16 17:40:31.937840 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934829 2560 flags.go:64] FLAG: --cloud-provider="external" Apr 16 17:40:31.937840 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934844 2560 flags.go:64] FLAG: --cluster-dns="[]" Apr 16 17:40:31.937840 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934850 2560 flags.go:64] FLAG: --cluster-domain="" Apr 16 17:40:31.937840 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934866 2560 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Apr 16 17:40:31.937840 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934870 2560 flags.go:64] FLAG: --config-dir="" Apr 16 17:40:31.937840 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934873 2560 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Apr 16 17:40:31.937840 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934877 2560 flags.go:64] FLAG: --container-log-max-files="5" Apr 16 17:40:31.937840 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934881 2560 flags.go:64] FLAG: --container-log-max-size="10Mi" Apr 16 17:40:31.937840 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934884 2560 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Apr 16 17:40:31.937840 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934887 2560 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Apr 16 17:40:31.937840 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934891 2560 flags.go:64] FLAG: --containerd-namespace="k8s.io" Apr 16 17:40:31.937840 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934894 2560 flags.go:64] FLAG: --contention-profiling="false" Apr 16 17:40:31.937840 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934897 2560 flags.go:64] FLAG: --cpu-cfs-quota="true" Apr 16 17:40:31.937840 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934901 2560 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Apr 16 17:40:31.937840 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934904 2560 flags.go:64] FLAG: --cpu-manager-policy="none" Apr 16 17:40:31.937840 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934907 2560 flags.go:64] FLAG: --cpu-manager-policy-options="" Apr 16 17:40:31.937840 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934911 2560 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Apr 16 17:40:31.937840 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934915 2560 flags.go:64] FLAG: --enable-controller-attach-detach="true" Apr 16 17:40:31.937840 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934918 2560 flags.go:64] FLAG: --enable-debugging-handlers="true" Apr 16 17:40:31.938461 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934920 2560 flags.go:64] FLAG: --enable-load-reader="false" Apr 16 17:40:31.938461 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934925 2560 flags.go:64] FLAG: --enable-server="true" Apr 16 17:40:31.938461 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934928 2560 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Apr 16 17:40:31.938461 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934933 2560 flags.go:64] FLAG: --event-burst="100" Apr 16 17:40:31.938461 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934936 2560 flags.go:64] FLAG: --event-qps="50" Apr 16 17:40:31.938461 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934939 2560 flags.go:64] FLAG: --event-storage-age-limit="default=0" Apr 16 
17:40:31.938461 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934943 2560 flags.go:64] FLAG: --event-storage-event-limit="default=0" Apr 16 17:40:31.938461 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934946 2560 flags.go:64] FLAG: --eviction-hard="" Apr 16 17:40:31.938461 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934950 2560 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Apr 16 17:40:31.938461 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934953 2560 flags.go:64] FLAG: --eviction-minimum-reclaim="" Apr 16 17:40:31.938461 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934956 2560 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Apr 16 17:40:31.938461 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934968 2560 flags.go:64] FLAG: --eviction-soft="" Apr 16 17:40:31.938461 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934972 2560 flags.go:64] FLAG: --eviction-soft-grace-period="" Apr 16 17:40:31.938461 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934975 2560 flags.go:64] FLAG: --exit-on-lock-contention="false" Apr 16 17:40:31.938461 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934978 2560 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Apr 16 17:40:31.938461 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934981 2560 flags.go:64] FLAG: --experimental-mounter-path="" Apr 16 17:40:31.938461 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934984 2560 flags.go:64] FLAG: --fail-cgroupv1="false" Apr 16 17:40:31.938461 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934987 2560 flags.go:64] FLAG: --fail-swap-on="true" Apr 16 17:40:31.938461 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934990 2560 flags.go:64] FLAG: --feature-gates="" Apr 16 17:40:31.938461 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934994 2560 flags.go:64] FLAG: --file-check-frequency="20s" Apr 16 17:40:31.938461 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.934997 2560 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Apr 16 17:40:31.938461 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935000 2560 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Apr 16 17:40:31.938461 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935003 2560 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Apr 16 17:40:31.938461 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935007 2560 flags.go:64] FLAG: --healthz-port="10248" Apr 16 17:40:31.938461 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935010 2560 flags.go:64] FLAG: --help="false" Apr 16 17:40:31.939089 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935013 2560 flags.go:64] FLAG: --hostname-override="ip-10-0-134-233.ec2.internal" Apr 16 17:40:31.939089 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935016 2560 flags.go:64] FLAG: --housekeeping-interval="10s" Apr 16 17:40:31.939089 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935019 2560 flags.go:64] FLAG: --http-check-frequency="20s" Apr 16 17:40:31.939089 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935021 2560 flags.go:64] FLAG: --image-credential-provider-bin-dir="/usr/libexec/kubelet-image-credential-provider-plugins" Apr 16 17:40:31.939089 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935025 2560 flags.go:64] FLAG: --image-credential-provider-config="/etc/kubernetes/credential-providers/ecr-credential-provider.yaml" Apr 16 17:40:31.939089 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935028 2560 flags.go:64] FLAG: 
--image-gc-high-threshold="85" Apr 16 17:40:31.939089 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935031 2560 flags.go:64] FLAG: --image-gc-low-threshold="80" Apr 16 17:40:31.939089 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935034 2560 flags.go:64] FLAG: --image-service-endpoint="" Apr 16 17:40:31.939089 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935036 2560 flags.go:64] FLAG: --kernel-memcg-notification="false" Apr 16 17:40:31.939089 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935040 2560 flags.go:64] FLAG: --kube-api-burst="100" Apr 16 17:40:31.939089 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935043 2560 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Apr 16 17:40:31.939089 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935046 2560 flags.go:64] FLAG: --kube-api-qps="50" Apr 16 17:40:31.939089 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935049 2560 flags.go:64] FLAG: --kube-reserved="" Apr 16 17:40:31.939089 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935052 2560 flags.go:64] FLAG: --kube-reserved-cgroup="" Apr 16 17:40:31.939089 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935054 2560 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Apr 16 17:40:31.939089 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935063 2560 flags.go:64] FLAG: --kubelet-cgroups="" Apr 16 17:40:31.939089 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935066 2560 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Apr 16 17:40:31.939089 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935069 2560 flags.go:64] FLAG: --lock-file="" Apr 16 17:40:31.939089 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935072 2560 flags.go:64] FLAG: --log-cadvisor-usage="false" Apr 16 17:40:31.939089 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935075 2560 flags.go:64] FLAG: --log-flush-frequency="5s" Apr 16 17:40:31.939089 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935078 2560 flags.go:64] FLAG: --log-json-info-buffer-size="0" Apr 16 17:40:31.939089 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935084 2560 flags.go:64] FLAG: --log-json-split-stream="false" Apr 16 17:40:31.939089 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935087 2560 flags.go:64] FLAG: --log-text-info-buffer-size="0" Apr 16 17:40:31.939640 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935090 2560 flags.go:64] FLAG: --log-text-split-stream="false" Apr 16 17:40:31.939640 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935093 2560 flags.go:64] FLAG: --logging-format="text" Apr 16 17:40:31.939640 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935096 2560 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Apr 16 17:40:31.939640 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935099 2560 flags.go:64] FLAG: --make-iptables-util-chains="true" Apr 16 17:40:31.939640 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935102 2560 flags.go:64] FLAG: --manifest-url="" Apr 16 17:40:31.939640 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935105 2560 flags.go:64] FLAG: --manifest-url-header="" Apr 16 17:40:31.939640 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935110 2560 flags.go:64] FLAG: --max-housekeeping-interval="15s" Apr 16 17:40:31.939640 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935113 2560 flags.go:64] FLAG: --max-open-files="1000000" Apr 16 17:40:31.939640 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935117 2560 flags.go:64] 
FLAG: --max-pods="110" Apr 16 17:40:31.939640 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935120 2560 flags.go:64] FLAG: --maximum-dead-containers="-1" Apr 16 17:40:31.939640 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935123 2560 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Apr 16 17:40:31.939640 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935126 2560 flags.go:64] FLAG: --memory-manager-policy="None" Apr 16 17:40:31.939640 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935129 2560 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Apr 16 17:40:31.939640 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935132 2560 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Apr 16 17:40:31.939640 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935135 2560 flags.go:64] FLAG: --node-ip="0.0.0.0" Apr 16 17:40:31.939640 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935139 2560 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhel" Apr 16 17:40:31.939640 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935149 2560 flags.go:64] FLAG: --node-status-max-images="50" Apr 16 17:40:31.939640 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935152 2560 flags.go:64] FLAG: --node-status-update-frequency="10s" Apr 16 17:40:31.939640 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935155 2560 flags.go:64] FLAG: --oom-score-adj="-999" Apr 16 17:40:31.939640 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935159 2560 flags.go:64] FLAG: --pod-cidr="" Apr 16 17:40:31.939640 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935162 2560 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc76bab72f320de3d4105c90d73c4fb139c09e20ce0fa8dcbc0cb59920d27dec" Apr 16 17:40:31.939640 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935167 2560 flags.go:64] FLAG: --pod-manifest-path="" Apr 16 17:40:31.939640 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935170 2560 flags.go:64] FLAG: --pod-max-pids="-1" Apr 16 17:40:31.939640 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935173 2560 flags.go:64] FLAG: --pods-per-core="0" Apr 16 17:40:31.940225 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935176 2560 flags.go:64] FLAG: --port="10250" Apr 16 17:40:31.940225 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935180 2560 flags.go:64] FLAG: --protect-kernel-defaults="false" Apr 16 17:40:31.940225 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935183 2560 flags.go:64] FLAG: --provider-id="aws:///us-east-1a/i-0fd8d5ffdac616109" Apr 16 17:40:31.940225 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935187 2560 flags.go:64] FLAG: --qos-reserved="" Apr 16 17:40:31.940225 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935189 2560 flags.go:64] FLAG: --read-only-port="10255" Apr 16 17:40:31.940225 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935192 2560 flags.go:64] FLAG: --register-node="true" Apr 16 17:40:31.940225 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935196 2560 flags.go:64] FLAG: --register-schedulable="true" Apr 16 17:40:31.940225 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935201 2560 flags.go:64] FLAG: --register-with-taints="" Apr 16 17:40:31.940225 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935205 2560 flags.go:64] FLAG: --registry-burst="10" Apr 16 17:40:31.940225 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935208 2560 flags.go:64] FLAG: --registry-qps="5" Apr 16 
17:40:31.940225 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935210 2560 flags.go:64] FLAG: --reserved-cpus="" Apr 16 17:40:31.940225 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935213 2560 flags.go:64] FLAG: --reserved-memory="" Apr 16 17:40:31.940225 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935217 2560 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Apr 16 17:40:31.940225 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935220 2560 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Apr 16 17:40:31.940225 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935224 2560 flags.go:64] FLAG: --rotate-certificates="false" Apr 16 17:40:31.940225 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935226 2560 flags.go:64] FLAG: --rotate-server-certificates="false" Apr 16 17:40:31.940225 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935229 2560 flags.go:64] FLAG: --runonce="false" Apr 16 17:40:31.940225 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935232 2560 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Apr 16 17:40:31.940225 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935235 2560 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Apr 16 17:40:31.940225 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935238 2560 flags.go:64] FLAG: --seccomp-default="false" Apr 16 17:40:31.940225 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935241 2560 flags.go:64] FLAG: --serialize-image-pulls="true" Apr 16 17:40:31.940225 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935244 2560 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Apr 16 17:40:31.940225 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935247 2560 flags.go:64] FLAG: --storage-driver-db="cadvisor" Apr 16 17:40:31.940225 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935251 2560 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Apr 16 17:40:31.940225 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935254 2560 flags.go:64] FLAG: --storage-driver-password="root" Apr 16 17:40:31.940225 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935257 2560 flags.go:64] FLAG: --storage-driver-secure="false" Apr 16 17:40:31.940855 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935259 2560 flags.go:64] FLAG: --storage-driver-table="stats" Apr 16 17:40:31.940855 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935262 2560 flags.go:64] FLAG: --storage-driver-user="root" Apr 16 17:40:31.940855 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935266 2560 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Apr 16 17:40:31.940855 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935269 2560 flags.go:64] FLAG: --sync-frequency="1m0s" Apr 16 17:40:31.940855 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935272 2560 flags.go:64] FLAG: --system-cgroups="" Apr 16 17:40:31.940855 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935274 2560 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Apr 16 17:40:31.940855 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935280 2560 flags.go:64] FLAG: --system-reserved-cgroup="" Apr 16 17:40:31.940855 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935284 2560 flags.go:64] FLAG: --tls-cert-file="" Apr 16 17:40:31.940855 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935287 2560 flags.go:64] FLAG: --tls-cipher-suites="[]" Apr 16 17:40:31.940855 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935291 2560 flags.go:64] FLAG: --tls-min-version="" 
Apr 16 17:40:31.940855 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935294 2560 flags.go:64] FLAG: --tls-private-key-file="" Apr 16 17:40:31.940855 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935297 2560 flags.go:64] FLAG: --topology-manager-policy="none" Apr 16 17:40:31.940855 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935300 2560 flags.go:64] FLAG: --topology-manager-policy-options="" Apr 16 17:40:31.940855 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935307 2560 flags.go:64] FLAG: --topology-manager-scope="container" Apr 16 17:40:31.940855 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935310 2560 flags.go:64] FLAG: --v="2" Apr 16 17:40:31.940855 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935314 2560 flags.go:64] FLAG: --version="false" Apr 16 17:40:31.940855 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935318 2560 flags.go:64] FLAG: --vmodule="" Apr 16 17:40:31.940855 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935322 2560 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Apr 16 17:40:31.940855 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.935325 2560 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Apr 16 17:40:31.940855 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935417 2560 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 16 17:40:31.940855 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935421 2560 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 16 17:40:31.940855 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935424 2560 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 16 17:40:31.940855 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935427 2560 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 16 17:40:31.940855 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935430 2560 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 16 17:40:31.941468 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935433 2560 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 16 17:40:31.941468 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935435 2560 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 16 17:40:31.941468 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935438 2560 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 16 17:40:31.941468 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935441 2560 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 16 17:40:31.941468 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935443 2560 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 16 17:40:31.941468 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935446 2560 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 16 17:40:31.941468 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935448 2560 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 16 17:40:31.941468 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935451 2560 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 16 17:40:31.941468 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935453 2560 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 16 17:40:31.941468 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935456 2560 feature_gate.go:328] unrecognized feature 
gate: UpgradeStatus Apr 16 17:40:31.941468 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935458 2560 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 16 17:40:31.941468 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935463 2560 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 16 17:40:31.941468 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935466 2560 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 16 17:40:31.941468 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935469 2560 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 16 17:40:31.941468 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935471 2560 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 16 17:40:31.941468 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935475 2560 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 16 17:40:31.941468 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935478 2560 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 16 17:40:31.941468 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935480 2560 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 16 17:40:31.941468 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935483 2560 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 16 17:40:31.941468 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935486 2560 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 16 17:40:31.941991 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935488 2560 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 16 17:40:31.941991 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935493 2560 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 16 17:40:31.941991 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935495 2560 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 16 17:40:31.941991 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935498 2560 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 16 17:40:31.941991 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935501 2560 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 16 17:40:31.941991 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935503 2560 feature_gate.go:328] unrecognized feature gate: Example Apr 16 17:40:31.941991 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935508 2560 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Apr 16 17:40:31.941991 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935511 2560 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 16 17:40:31.941991 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935514 2560 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 16 17:40:31.941991 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935517 2560 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 16 17:40:31.941991 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935520 2560 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 16 17:40:31.941991 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935523 2560 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 16 17:40:31.941991 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935526 2560 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 16 17:40:31.941991 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935528 2560 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 16 17:40:31.941991 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935531 2560 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 16 17:40:31.941991 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935534 2560 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 16 17:40:31.941991 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935536 2560 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 16 17:40:31.941991 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935539 2560 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 16 17:40:31.941991 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935542 2560 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 16 17:40:31.942456 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935544 2560 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 16 17:40:31.942456 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935547 2560 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 16 17:40:31.942456 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935549 2560 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 16 17:40:31.942456 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935552 2560 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 16 17:40:31.942456 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935554 2560 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 16 17:40:31.942456 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935557 2560 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 16 17:40:31.942456 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935559 2560 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 16 17:40:31.942456 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935562 2560 feature_gate.go:328] unrecognized feature gate: Example2 Apr 16 17:40:31.942456 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935567 2560 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 16 17:40:31.942456 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935569 2560 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 16 17:40:31.942456 ip-10-0-134-233 
kubenswrapper[2560]: W0416 17:40:31.935572 2560 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 16 17:40:31.942456 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935575 2560 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 16 17:40:31.942456 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935577 2560 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 16 17:40:31.942456 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935580 2560 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 16 17:40:31.942456 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935583 2560 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 16 17:40:31.942456 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935586 2560 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 16 17:40:31.942456 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935589 2560 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 16 17:40:31.942456 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935592 2560 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 16 17:40:31.942456 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935594 2560 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 16 17:40:31.942933 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935597 2560 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 16 17:40:31.942933 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935599 2560 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 16 17:40:31.942933 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935602 2560 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 16 17:40:31.942933 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935604 2560 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 16 17:40:31.942933 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935607 2560 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 16 17:40:31.942933 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935609 2560 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 16 17:40:31.942933 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935612 2560 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 16 17:40:31.942933 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935614 2560 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 16 17:40:31.942933 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935617 2560 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 16 17:40:31.942933 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935619 2560 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 16 17:40:31.942933 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935622 2560 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 16 17:40:31.942933 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935625 2560 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 16 17:40:31.942933 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935627 2560 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 16 17:40:31.942933 ip-10-0-134-233 
kubenswrapper[2560]: W0416 17:40:31.935630 2560 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 16 17:40:31.942933 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935632 2560 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 16 17:40:31.942933 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935635 2560 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 16 17:40:31.942933 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935637 2560 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 16 17:40:31.942933 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935639 2560 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 16 17:40:31.942933 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935642 2560 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 16 17:40:31.942933 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935645 2560 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 16 17:40:31.943433 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935648 2560 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 16 17:40:31.943433 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935652 2560 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Apr 16 17:40:31.943433 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.935656 2560 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 16 17:40:31.943433 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.936309 2560 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 16 17:40:31.943433 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.942969 2560 server.go:530] "Kubelet version" kubeletVersion="v1.33.9" Apr 16 17:40:31.943433 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.942987 2560 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 17:40:31.943433 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943038 2560 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 16 17:40:31.943433 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943043 2560 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 16 17:40:31.943433 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943046 2560 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 16 17:40:31.943433 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943049 2560 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 16 17:40:31.943433 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943052 2560 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 16 17:40:31.943433 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943056 2560 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 16 17:40:31.943433 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943058 2560 feature_gate.go:328] unrecognized feature 
gate: NetworkDiagnosticsConfig Apr 16 17:40:31.943433 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943061 2560 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 16 17:40:31.943433 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943064 2560 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 16 17:40:31.943816 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943066 2560 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 16 17:40:31.943816 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943069 2560 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 16 17:40:31.943816 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943072 2560 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 16 17:40:31.943816 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943074 2560 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 16 17:40:31.943816 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943077 2560 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 16 17:40:31.943816 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943080 2560 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 16 17:40:31.943816 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943083 2560 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 16 17:40:31.943816 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943086 2560 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 16 17:40:31.943816 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943090 2560 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
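
Two warnings in this batch differ from the rest: feature_gate.go:349 and :351 flag KMSv1 (deprecated) and ServiceAccountTokenNodeBinding (GA), because explicitly setting a gate that has finished graduating still works but the knob itself is scheduled for removal. A sketch of that lifecycle check, assuming a stage table like the one the gate registry keeps (the table here is illustrative, not the real one):

    package main

    import "log"

    type stage int

    const (
        alpha stage = iota
        beta
        ga
        deprecated
    )

    // Illustrative lifecycle table; the real one ships with the gate registry.
    var lifecycle = map[string]stage{
        "ServiceAccountTokenNodeBinding": ga,
        "KMSv1":                          deprecated,
        "NodeSwap":                       beta,
    }

    // warnOnSet emits the two "will be removed" shapes seen above: explicitly
    // setting a GA or deprecated gate works, but the flag itself is going away.
    func warnOnSet(name string, value bool) {
        switch lifecycle[name] {
        case ga:
            log.Printf("W Setting GA feature gate %s=%t. It will be removed in a future release.", name, value)
        case deprecated:
            log.Printf("W Setting deprecated feature gate %s=%t. It will be removed in a future release.", name, value)
        }
    }

    func main() {
        warnOnSet("ServiceAccountTokenNodeBinding", true) // -> GA warning
        warnOnSet("KMSv1", true)                          // -> deprecation warning
        warnOnSet("NodeSwap", false)                      // beta: silent
    }
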
Apr 16 17:40:31.943816 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943095 2560 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 16 17:40:31.943816 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943098 2560 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 16 17:40:31.943816 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943101 2560 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 16 17:40:31.943816 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943104 2560 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 16 17:40:31.943816 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943107 2560 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 16 17:40:31.943816 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943110 2560 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 16 17:40:31.943816 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943113 2560 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 16 17:40:31.943816 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943116 2560 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 16 17:40:31.943816 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943120 2560 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 16 17:40:31.943816 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943122 2560 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 16 17:40:31.944290 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943126 2560 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 16 17:40:31.944290 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943129 2560 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 16 17:40:31.944290 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943132 2560 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 16 17:40:31.944290 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943137 2560 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 16 17:40:31.944290 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943140 2560 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 16 17:40:31.944290 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943142 2560 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 16 17:40:31.944290 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943145 2560 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 16 17:40:31.944290 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943148 2560 feature_gate.go:328] unrecognized feature gate: Example2 Apr 16 17:40:31.944290 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943150 2560 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 16 17:40:31.944290 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943153 2560 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 16 17:40:31.944290 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943156 2560 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 16 17:40:31.944290 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943158 2560 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 16 17:40:31.944290 ip-10-0-134-233 kubenswrapper[2560]: 
W0416 17:40:31.943161 2560 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 16 17:40:31.944290 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943163 2560 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 16 17:40:31.944290 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943166 2560 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 16 17:40:31.944290 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943169 2560 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 16 17:40:31.944290 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943171 2560 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 16 17:40:31.944290 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943174 2560 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 16 17:40:31.944290 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943176 2560 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 16 17:40:31.944290 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943179 2560 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 16 17:40:31.944829 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943183 2560 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Apr 16 17:40:31.944829 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943186 2560 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 16 17:40:31.944829 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943189 2560 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 16 17:40:31.944829 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943191 2560 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 16 17:40:31.944829 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943194 2560 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 16 17:40:31.944829 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943197 2560 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 16 17:40:31.944829 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943199 2560 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 16 17:40:31.944829 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943202 2560 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 16 17:40:31.944829 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943205 2560 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 16 17:40:31.944829 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943207 2560 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 16 17:40:31.944829 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943210 2560 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 16 17:40:31.944829 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943213 2560 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 16 17:40:31.944829 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943215 2560 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 16 17:40:31.944829 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943218 2560 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 16 17:40:31.944829 ip-10-0-134-233 kubenswrapper[2560]: 
W0416 17:40:31.943221 2560 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 16 17:40:31.944829 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943224 2560 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 16 17:40:31.944829 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943228 2560 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 16 17:40:31.944829 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943230 2560 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 16 17:40:31.944829 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943233 2560 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 16 17:40:31.945411 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943235 2560 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 16 17:40:31.945411 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943237 2560 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 16 17:40:31.945411 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943240 2560 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 16 17:40:31.945411 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943243 2560 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 16 17:40:31.945411 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943245 2560 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 16 17:40:31.945411 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943248 2560 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 16 17:40:31.945411 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943250 2560 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 16 17:40:31.945411 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943253 2560 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 16 17:40:31.945411 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943255 2560 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 16 17:40:31.945411 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943258 2560 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 16 17:40:31.945411 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943260 2560 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 16 17:40:31.945411 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943263 2560 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 16 17:40:31.945411 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943265 2560 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 16 17:40:31.945411 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943268 2560 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 16 17:40:31.945411 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943270 2560 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 16 17:40:31.945411 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943273 2560 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 16 17:40:31.945411 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943275 2560 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 16 17:40:31.945411 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943278 2560 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 16 
17:40:31.945411 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943280 2560 feature_gate.go:328] unrecognized feature gate: Example Apr 16 17:40:31.945898 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.943286 2560 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 16 17:40:31.945898 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943389 2560 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 16 17:40:31.945898 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943393 2560 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 16 17:40:31.945898 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943396 2560 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 16 17:40:31.945898 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943399 2560 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 16 17:40:31.945898 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943402 2560 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 16 17:40:31.945898 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943407 2560 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 16 17:40:31.945898 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943410 2560 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 16 17:40:31.945898 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943413 2560 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 16 17:40:31.945898 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943416 2560 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 16 17:40:31.945898 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943419 2560 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 16 17:40:31.945898 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943422 2560 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 16 17:40:31.945898 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943425 2560 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 16 17:40:31.945898 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943428 2560 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 16 17:40:31.945898 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943431 2560 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 16 17:40:31.945898 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943433 2560 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 16 17:40:31.946301 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943436 2560 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 16 17:40:31.946301 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943440 2560 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
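
Each parsing pass ends with the same effective map (feature_gate.go:384): compiled-in defaults overlaid with the configured overrides, e.g. KMSv1:true and UserNamespacesSupport:true. On a kubelet these overrides normally arrive through the featureGates stanza of the KubeletConfiguration file; a dependency-free sketch of reading that stanza (JSON is used here purely to stay in the standard library — the real file is usually YAML, and carries many more fields):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Models only the featureGates stanza of a KubeletConfiguration.
    type kubeletConfig struct {
        Kind         string          `json:"kind"`
        FeatureGates map[string]bool `json:"featureGates"`
    }

    func main() {
        doc := []byte(`{
            "kind": "KubeletConfiguration",
            "featureGates": {"KMSv1": true, "UserNamespacesSupport": true, "NodeSwap": false}
        }`)
        var cfg kubeletConfig
        if err := json.Unmarshal(doc, &cfg); err != nil {
            panic(err)
        }
        fmt.Printf("feature gates from config: %v\n", cfg.FeatureGates)
    }
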
Apr 16 17:40:31.946301 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943444 2560 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 16 17:40:31.946301 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943447 2560 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 16 17:40:31.946301 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943450 2560 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 16 17:40:31.946301 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943452 2560 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 16 17:40:31.946301 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943455 2560 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 16 17:40:31.946301 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943458 2560 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 16 17:40:31.946301 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943460 2560 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 16 17:40:31.946301 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943463 2560 feature_gate.go:328] unrecognized feature gate: Example2 Apr 16 17:40:31.946301 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943465 2560 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 16 17:40:31.946301 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943468 2560 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 16 17:40:31.946301 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943470 2560 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 16 17:40:31.946301 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943473 2560 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 16 17:40:31.946301 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943476 2560 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 16 17:40:31.946301 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943478 2560 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 16 17:40:31.946301 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943480 2560 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 16 17:40:31.946301 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943483 2560 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 16 17:40:31.946301 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943485 2560 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 16 17:40:31.946765 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943488 2560 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 16 17:40:31.946765 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943490 2560 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 16 17:40:31.946765 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943493 2560 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 16 17:40:31.946765 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943496 2560 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 16 17:40:31.946765 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943499 2560 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 16 17:40:31.946765 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943501 2560 
feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 16 17:40:31.946765 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943504 2560 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 16 17:40:31.946765 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943507 2560 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 16 17:40:31.946765 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943509 2560 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 16 17:40:31.946765 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943512 2560 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 16 17:40:31.946765 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943515 2560 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 16 17:40:31.946765 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943519 2560 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Apr 16 17:40:31.946765 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943522 2560 feature_gate.go:328] unrecognized feature gate: Example Apr 16 17:40:31.946765 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943524 2560 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 16 17:40:31.946765 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943527 2560 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 16 17:40:31.946765 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943530 2560 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 16 17:40:31.946765 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943532 2560 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 16 17:40:31.946765 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943535 2560 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 16 17:40:31.946765 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943538 2560 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 16 17:40:31.946765 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943540 2560 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 16 17:40:31.947285 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943543 2560 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 16 17:40:31.947285 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943545 2560 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 16 17:40:31.947285 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943548 2560 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 16 17:40:31.947285 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943550 2560 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 16 17:40:31.947285 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943553 2560 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 16 17:40:31.947285 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943555 2560 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 16 17:40:31.947285 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943558 2560 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 16 17:40:31.947285 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943560 2560 
feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 16 17:40:31.947285 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943563 2560 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 16 17:40:31.947285 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943565 2560 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 16 17:40:31.947285 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943568 2560 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 16 17:40:31.947285 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943570 2560 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 16 17:40:31.947285 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943573 2560 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 16 17:40:31.947285 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943575 2560 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 16 17:40:31.947285 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943579 2560 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 16 17:40:31.947285 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943582 2560 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 16 17:40:31.947285 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943584 2560 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 16 17:40:31.947285 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943587 2560 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 16 17:40:31.947285 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943589 2560 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 16 17:40:31.947752 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943592 2560 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 16 17:40:31.947752 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943594 2560 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 16 17:40:31.947752 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943597 2560 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 16 17:40:31.947752 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943600 2560 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 16 17:40:31.947752 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943603 2560 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 16 17:40:31.947752 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943605 2560 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 16 17:40:31.947752 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943608 2560 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 16 17:40:31.947752 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943610 2560 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 16 17:40:31.947752 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943613 2560 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 16 17:40:31.947752 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943615 2560 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 16 17:40:31.947752 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943618 2560 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 16 
17:40:31.947752 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943620 2560 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 16 17:40:31.947752 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:31.943623 2560 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 16 17:40:31.947752 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.943629 2560 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 16 17:40:31.947752 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.943738 2560 server.go:962] "Client rotation is on, will bootstrap in background" Apr 16 17:40:31.948139 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.945909 2560 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Apr 16 17:40:31.948139 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.946952 2560 server.go:1019] "Starting client certificate rotation" Apr 16 17:40:31.948139 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.947061 2560 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Apr 16 17:40:31.948139 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.947428 2560 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Apr 16 17:40:31.969730 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.969707 2560 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Apr 16 17:40:31.973052 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.972050 2560 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Apr 16 17:40:31.989535 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.989513 2560 log.go:25] "Validated CRI v1 runtime API" Apr 16 17:40:31.994942 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.994926 2560 log.go:25] "Validated CRI v1 image API" Apr 16 17:40:31.996177 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:31.996161 2560 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 16 17:40:32.003046 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.003021 2560 fs.go:135] Filesystem UUIDs: map[774ba18a-ff1a-4a10-9582-31bf015d5c20:/dev/nvme0n1p3 7B77-95E7:/dev/nvme0n1p2 c3418709-1c31-4d5b-8c22-4474b2688db4:/dev/nvme0n1p4] Apr 16 17:40:32.003106 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.003045 2560 fs.go:136] Filesystem partitions: map[/dev/nvme0n1p3:{mountpoint:/boot major:259 minor:3 fsType:ext4 blockSize:0} /dev/nvme0n1p4:{mountpoint:/var major:259 minor:4 fsType:xfs blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Apr 16 17:40:32.005586 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.005563 
2560 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Apr 16 17:40:32.009161 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.009040 2560 manager.go:217] Machine: {Timestamp:2026-04-16 17:40:32.007118055 +0000 UTC m=+0.370326919 CPUVendorID:GenuineIntel NumCores:8 NumPhysicalCores:4 NumSockets:1 CpuFrequency:3099857 MemoryCapacity:33164496896 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ec2b165fbbcf567c326a69d6db52efd1 SystemUUID:ec2b165f-bbcf-567c-326a-69d6db52efd1 BootID:b2725738-db5f-49b0-9e2a-4c6a030a3181 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6632902656 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/nvme0n1p4 DeviceMajor:259 DeviceMinor:4 Capacity:128243970048 Type:vfs Inodes:62651840 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6098944 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16582250496 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/nvme0n1p3 DeviceMajor:259 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16582246400 Type:vfs Inodes:4048400 HasInodes:true}] DiskMap:map[259:0:{Name:nvme0n1 Major:259 Minor:0 Size:128849018880 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:02:2c:fa:fc:93:d3 Speed:0 Mtu:9001} {Name:ens5 MacAddress:02:2c:fa:fc:93:d3 Speed:0 Mtu:9001} {Name:ovs-system MacAddress:5a:0b:81:4d:a2:45 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33164496896 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 4] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:1 Threads:[1 5] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:2 Threads:[2 6] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:3 Threads:[3 7] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:}] Caches:[{Id:0 Size:37486592 Type:Unified Level:3}] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Apr 16 17:40:32.009161 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.009159 2560 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
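
A quick reading of the manager.go:217 machine dump above: one socket, four physical cores exposing eight logical CPUs (two hardware threads per core), and roughly 31 GiB of RAM. The same arithmetic, with the values copied from the log:

    package main

    import "fmt"

    // Values copied from the manager.go:217 Machine dump above.
    const (
        numLogicalCPUs  = 8           // NumCores
        numPhysicalCPUs = 4           // NumPhysicalCores
        memoryCapacityB = 33164496896 // MemoryCapacity, bytes
    )

    func main() {
        fmt.Printf("hardware threads per core: %d\n", numLogicalCPUs/numPhysicalCPUs) // 2
        fmt.Printf("memory capacity: %.1f GiB\n", float64(memoryCapacityB)/(1<<30))   // ~30.9
    }
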
Apr 16 17:40:32.009283 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.009252 2560 manager.go:233] Version: {KernelVersion:5.14.0-570.104.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20260401-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Apr 16 17:40:32.010371 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.010345 2560 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 17:40:32.010516 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.010374 2560 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-10-0-134-233.ec2.internal","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 16 17:40:32.010564 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.010522 2560 topology_manager.go:138] "Creating topology manager with none policy" Apr 16 17:40:32.010564 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.010532 2560 container_manager_linux.go:306] "Creating device plugin manager" Apr 16 17:40:32.010564 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.010548 2560 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Apr 16 17:40:32.011370 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.011359 2560 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Apr 16 17:40:32.012764 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.012752 2560 state_mem.go:36] "Initialized new in-memory state store" Apr 16 17:40:32.012946 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.012936 2560 server.go:1267] "Using root directory" path="/var/lib/kubelet" Apr 16 17:40:32.014846 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.014826 2560 kubelet.go:491] "Attempting to sync node with API server" Apr 16 17:40:32.014889 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.014858 2560 kubelet.go:386] "Adding static pod 
path" path="/etc/kubernetes/manifests" Apr 16 17:40:32.014889 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.014873 2560 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Apr 16 17:40:32.014889 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.014883 2560 kubelet.go:397] "Adding apiserver pod source" Apr 16 17:40:32.015022 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.014896 2560 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 17:40:32.015978 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.015967 2560 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Apr 16 17:40:32.016017 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.015985 2560 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Apr 16 17:40:32.019407 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.019390 2560 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.10-2.rhaos4.20.gita4d0894.el9" apiVersion="v1" Apr 16 17:40:32.021202 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.021187 2560 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 17:40:32.022783 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.022760 2560 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Apr 16 17:40:32.022880 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.022790 2560 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Apr 16 17:40:32.022880 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.022800 2560 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Apr 16 17:40:32.022880 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.022817 2560 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Apr 16 17:40:32.022880 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.022826 2560 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Apr 16 17:40:32.022880 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.022852 2560 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Apr 16 17:40:32.022880 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.022862 2560 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Apr 16 17:40:32.022880 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.022870 2560 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Apr 16 17:40:32.022880 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.022881 2560 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc" Apr 16 17:40:32.023223 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.022890 2560 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Apr 16 17:40:32.023223 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.022905 2560 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Apr 16 17:40:32.023223 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.022918 2560 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Apr 16 17:40:32.024645 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.024623 2560 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Apr 16 17:40:32.024730 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.024680 2560 plugins.go:616] "Loaded volume plugin" 
pluginName="kubernetes.io/image" Apr 16 17:40:32.026618 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:32.026588 2560 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-134-233.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 17:40:32.026618 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:32.026606 2560 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 17:40:32.028724 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.028707 2560 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 16 17:40:32.028790 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.028750 2560 server.go:1295] "Started kubelet" Apr 16 17:40:32.028916 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.028887 2560 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 17:40:32.028990 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.028884 2560 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 17:40:32.028990 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.028959 2560 server_v1.go:47] "podresources" method="list" useActivePods=true Apr 16 17:40:32.029316 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.029296 2560 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-134-233.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 16 17:40:32.029503 ip-10-0-134-233 systemd[1]: Started Kubernetes Kubelet. 
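
The NodeConfig entry above pins down what this node will advertise as allocatable: KubeReserved is null, SystemReserved holds back 500m CPU and 1Gi of memory, and the hard eviction floor for memory.available is 100Mi. Under the standard node-allocatable formula (capacity minus reservations minus the hard eviction threshold), that leaves about 29.8 GiB of allocatable memory; a quick check with the logged numbers:

    package main

    import "fmt"

    // Capacity from the Machine dump; reservations from the NodeConfig entry
    // ("KubeReserved":null, SystemReserved memory "1Gi", hard eviction
    // memory.available "100Mi").
    const (
        capacityB       = 33164496896
        kubeReservedB   = 0
        systemReservedB = 1 << 30   // 1Gi
        hardEvictionB   = 100 << 20 // 100Mi
    )

    func main() {
        const allocatableB = capacityB - kubeReservedB - systemReservedB - hardEvictionB
        fmt.Printf("allocatable memory: %d bytes (~%.2f GiB)\n",
            int64(allocatableB), float64(allocatableB)/(1<<30))
        // 31985897472 bytes, ~29.79 GiB
    }
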
Apr 16 17:40:32.029776 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.029757 2560 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-2b89b" Apr 16 17:40:32.030131 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.030122 2560 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 17:40:32.031324 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.031310 2560 server.go:317] "Adding debug handlers to kubelet server" Apr 16 17:40:32.036004 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:32.035111 2560 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-134-233.ec2.internal.18a6e72103d8e91b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-134-233.ec2.internal,UID:ip-10-0-134-233.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-10-0-134-233.ec2.internal,},FirstTimestamp:2026-04-16 17:40:32.028723483 +0000 UTC m=+0.391932348,LastTimestamp:2026-04-16 17:40:32.028723483 +0000 UTC m=+0.391932348,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-134-233.ec2.internal,}" Apr 16 17:40:32.036823 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.036807 2560 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 16 17:40:32.036932 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.036825 2560 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Apr 16 17:40:32.037420 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.037402 2560 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 16 17:40:32.037585 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.037567 2560 volume_manager.go:295] "The desired_state_of_world populator starts" Apr 16 17:40:32.037661 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.037588 2560 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 16 17:40:32.037745 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:32.037727 2560 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-134-233.ec2.internal\" not found" Apr 16 17:40:32.037848 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.037810 2560 reconstruct.go:97] "Volume reconstruction finished" Apr 16 17:40:32.037950 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.037894 2560 reconciler.go:26] "Reconciler: start to sync state" Apr 16 17:40:32.038898 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.038060 2560 factory.go:153] Registering CRI-O factory Apr 16 17:40:32.038898 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.038081 2560 factory.go:223] Registration of the crio container factory successfully Apr 16 17:40:32.038898 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.038167 2560 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Apr 16 17:40:32.038898 ip-10-0-134-233 kubenswrapper[2560]: I0416 
17:40:32.038179 2560 factory.go:55] Registering systemd factory Apr 16 17:40:32.038898 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.038187 2560 factory.go:223] Registration of the systemd container factory successfully Apr 16 17:40:32.038898 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.038211 2560 factory.go:103] Registering Raw factory Apr 16 17:40:32.038898 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.038225 2560 manager.go:1196] Started watching for new ooms in manager Apr 16 17:40:32.039254 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.039003 2560 manager.go:319] Starting recovery of all containers Apr 16 17:40:32.040142 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.040119 2560 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-2b89b" Apr 16 17:40:32.040240 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:32.040194 2560 kubelet.go:1618] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Apr 16 17:40:32.042986 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:32.042950 2560 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 17:40:32.043061 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:32.042988 2560 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-134-233.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Apr 16 17:40:32.048505 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.048488 2560 manager.go:324] Recovery completed Apr 16 17:40:32.054460 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.054349 2560 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 16 17:40:32.057207 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.057192 2560 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-233.ec2.internal" event="NodeHasSufficientMemory" Apr 16 17:40:32.057271 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.057220 2560 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-233.ec2.internal" event="NodeHasNoDiskPressure" Apr 16 17:40:32.057271 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.057232 2560 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-233.ec2.internal" event="NodeHasSufficientPID" Apr 16 17:40:32.057690 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.057675 2560 cpu_manager.go:222] "Starting CPU manager" policy="none" Apr 16 17:40:32.057690 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.057687 2560 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Apr 16 17:40:32.057768 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.057702 2560 state_mem.go:36] "Initialized new in-memory state store" Apr 16 17:40:32.059964 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.059947 2560 policy_none.go:49] "None policy: Start" Apr 16 17:40:32.060035 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.059968 2560 memory_manager.go:186] "Starting memorymanager" 
policy="None" Apr 16 17:40:32.060035 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.059982 2560 state_mem.go:35] "Initializing new in-memory state store" Apr 16 17:40:32.097511 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.095452 2560 manager.go:341] "Starting Device Plugin manager" Apr 16 17:40:32.097511 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:32.095493 2560 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 17:40:32.097511 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.095504 2560 server.go:85] "Starting device plugin registration server" Apr 16 17:40:32.097511 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.095776 2560 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 16 17:40:32.097511 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.095788 2560 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 17:40:32.097511 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.095920 2560 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Apr 16 17:40:32.097511 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.096013 2560 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Apr 16 17:40:32.097511 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.096021 2560 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 16 17:40:32.097511 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:32.096621 2560 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\"" Apr 16 17:40:32.097511 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:32.096655 2560 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-134-233.ec2.internal\" not found" Apr 16 17:40:32.155881 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.155846 2560 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 16 17:40:32.157046 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.157030 2560 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 16 17:40:32.157120 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.157056 2560 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 16 17:40:32.157120 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.157078 2560 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
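
The eviction manager starts its control loop here but cannot evaluate anything yet: the node object is still missing ("failed to get node info"), and it cannot yet tell whether CRI-O keeps a separate image filesystem (the non-existent "crio-containers" label). Once stats flow, each signal is compared against the HardEvictionThresholds from the NodeConfig entry earlier in this boot; a toy evaluator over those five thresholds (not the real eviction manager):

    package main

    import "fmt"

    // Mirrors the HardEvictionThresholds list from the NodeConfig entry:
    // percentages for the filesystem signals, an absolute floor for memory.
    type threshold struct {
        signal   string
        percent  float64 // used when > 0
        quantity int64   // bytes, used otherwise
    }

    var hard = []threshold{
        {"imagefs.available", 0.15, 0},
        {"imagefs.inodesFree", 0.05, 0},
        {"memory.available", 0, 100 << 20},
        {"nodefs.available", 0.10, 0},
        {"nodefs.inodesFree", 0.05, 0},
    }

    // underPressure reports whether an observed available/capacity pair for a
    // signal crosses its hard threshold.
    func underPressure(t threshold, available, capacity int64) bool {
        if t.percent > 0 {
            return float64(available) < t.percent*float64(capacity)
        }
        return available < t.quantity
    }

    func main() {
        // nodefs capacity from the fs dump (/dev/nvme0n1p4 on /var); a
        // hypothetical 20 GB free clears the 10% (~12.8 GB) floor.
        fmt.Println(underPressure(hard[3], 20_000_000_000, 128243970048))
        // false: no nodefs pressure
    }
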
Apr 16 17:40:32.157120 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.157087 2560 kubelet.go:2451] "Starting kubelet main sync loop" Apr 16 17:40:32.157252 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:32.157125 2560 kubelet.go:2475] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Apr 16 17:40:32.161309 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.161286 2560 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Apr 16 17:40:32.196664 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.196613 2560 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 16 17:40:32.197487 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.197472 2560 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-233.ec2.internal" event="NodeHasSufficientMemory" Apr 16 17:40:32.197572 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.197504 2560 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-233.ec2.internal" event="NodeHasNoDiskPressure" Apr 16 17:40:32.197572 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.197517 2560 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-233.ec2.internal" event="NodeHasSufficientPID" Apr 16 17:40:32.197572 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.197540 2560 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-134-233.ec2.internal" Apr 16 17:40:32.206167 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.206147 2560 kubelet_node_status.go:81] "Successfully registered node" node="ip-10-0-134-233.ec2.internal" Apr 16 17:40:32.206253 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:32.206176 2560 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-134-233.ec2.internal\": node \"ip-10-0-134-233.ec2.internal\" not found" Apr 16 17:40:32.222897 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:32.222875 2560 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-134-233.ec2.internal\" not found" Apr 16 17:40:32.257234 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.257201 2560 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-134-233.ec2.internal","kube-system/kube-apiserver-proxy-ip-10-0-134-233.ec2.internal"] Apr 16 17:40:32.257365 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.257280 2560 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 16 17:40:32.259018 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.259000 2560 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-233.ec2.internal" event="NodeHasSufficientMemory" Apr 16 17:40:32.259093 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.259034 2560 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-233.ec2.internal" event="NodeHasNoDiskPressure" Apr 16 17:40:32.259093 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.259045 2560 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-233.ec2.internal" event="NodeHasSufficientPID" Apr 16 17:40:32.260221 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.260209 2560 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 16 17:40:32.260845 ip-10-0-134-233 
kubenswrapper[2560]: I0416 17:40:32.260818 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-134-233.ec2.internal" Apr 16 17:40:32.260936 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.260869 2560 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 16 17:40:32.260936 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.260884 2560 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-233.ec2.internal" event="NodeHasSufficientMemory" Apr 16 17:40:32.260936 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.260906 2560 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-233.ec2.internal" event="NodeHasNoDiskPressure" Apr 16 17:40:32.260936 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.260920 2560 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-233.ec2.internal" event="NodeHasSufficientPID" Apr 16 17:40:32.261496 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.261483 2560 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-233.ec2.internal" event="NodeHasSufficientMemory" Apr 16 17:40:32.261546 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.261508 2560 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-233.ec2.internal" event="NodeHasNoDiskPressure" Apr 16 17:40:32.261546 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.261518 2560 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-233.ec2.internal" event="NodeHasSufficientPID" Apr 16 17:40:32.262466 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.262453 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-134-233.ec2.internal" Apr 16 17:40:32.262514 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.262478 2560 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 16 17:40:32.263624 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.263606 2560 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-233.ec2.internal" event="NodeHasSufficientMemory" Apr 16 17:40:32.263709 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.263637 2560 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-233.ec2.internal" event="NodeHasNoDiskPressure" Apr 16 17:40:32.263709 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.263649 2560 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-233.ec2.internal" event="NodeHasSufficientPID" Apr 16 17:40:32.282391 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:32.282374 2560 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-134-233.ec2.internal\" not found" node="ip-10-0-134-233.ec2.internal" Apr 16 17:40:32.286431 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:32.286416 2560 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-134-233.ec2.internal\" not found" node="ip-10-0-134-233.ec2.internal" Apr 16 17:40:32.323681 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:32.323660 2560 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-134-233.ec2.internal\" not found" Apr 16 17:40:32.339050 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.339027 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/9f86db7b6fd9ec4db0b09003244343a1-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-134-233.ec2.internal\" (UID: \"9f86db7b6fd9ec4db0b09003244343a1\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-134-233.ec2.internal" Apr 16 17:40:32.339138 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.339053 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9f86db7b6fd9ec4db0b09003244343a1-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-134-233.ec2.internal\" (UID: \"9f86db7b6fd9ec4db0b09003244343a1\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-134-233.ec2.internal" Apr 16 17:40:32.339138 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.339077 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/dbf1483115d1b5ae94f569f1ec8a827f-config\") pod \"kube-apiserver-proxy-ip-10-0-134-233.ec2.internal\" (UID: \"dbf1483115d1b5ae94f569f1ec8a827f\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-134-233.ec2.internal" Apr 16 17:40:32.424719 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:32.424691 2560 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-134-233.ec2.internal\" not found" Apr 16 17:40:32.440090 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.440070 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/9f86db7b6fd9ec4db0b09003244343a1-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-134-233.ec2.internal\" (UID: \"9f86db7b6fd9ec4db0b09003244343a1\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-134-233.ec2.internal" Apr 16 17:40:32.440160 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.440098 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9f86db7b6fd9ec4db0b09003244343a1-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-134-233.ec2.internal\" (UID: \"9f86db7b6fd9ec4db0b09003244343a1\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-134-233.ec2.internal" Apr 16 17:40:32.440160 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.440118 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/dbf1483115d1b5ae94f569f1ec8a827f-config\") pod \"kube-apiserver-proxy-ip-10-0-134-233.ec2.internal\" (UID: \"dbf1483115d1b5ae94f569f1ec8a827f\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-134-233.ec2.internal" Apr 16 17:40:32.440248 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.440168 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9f86db7b6fd9ec4db0b09003244343a1-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-134-233.ec2.internal\" (UID: \"9f86db7b6fd9ec4db0b09003244343a1\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-134-233.ec2.internal" Apr 16 17:40:32.440248 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.440187 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/dbf1483115d1b5ae94f569f1ec8a827f-config\") pod \"kube-apiserver-proxy-ip-10-0-134-233.ec2.internal\" (UID: \"dbf1483115d1b5ae94f569f1ec8a827f\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-134-233.ec2.internal" Apr 16 17:40:32.440248 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.440168 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/9f86db7b6fd9ec4db0b09003244343a1-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-134-233.ec2.internal\" (UID: \"9f86db7b6fd9ec4db0b09003244343a1\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-134-233.ec2.internal" Apr 16 17:40:32.525565 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:32.525494 2560 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-134-233.ec2.internal\" not found" Apr 16 17:40:32.586034 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.586000 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-134-233.ec2.internal" Apr 16 17:40:32.589642 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.589320 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-134-233.ec2.internal" Apr 16 17:40:32.626558 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:32.626527 2560 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-134-233.ec2.internal\" not found" Apr 16 17:40:32.727032 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:32.726998 2560 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-134-233.ec2.internal\" not found" Apr 16 17:40:32.827630 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:32.827558 2560 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-134-233.ec2.internal\" not found" Apr 16 17:40:32.864313 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.864282 2560 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Apr 16 17:40:32.928561 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:32.928530 2560 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-134-233.ec2.internal\" not found" Apr 16 17:40:32.947017 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.946989 2560 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Apr 16 17:40:32.947136 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.947115 2560 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Apr 16 17:40:32.947174 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.947150 2560 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Apr 16 17:40:32.982369 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:32.982347 2560 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Apr 16 17:40:33.029567 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:33.029534 2560 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-134-233.ec2.internal\" not found" Apr 16 17:40:33.037348 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:33.037317 2560 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Apr 16 17:40:33.042073 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:33.042040 2560 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2028-04-15 17:35:32 +0000 UTC" deadline="2027-10-31 10:03:27.292553493 +0000 UTC" Apr 16 17:40:33.042073 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:33.042071 2560 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="13504h22m54.250485004s" Apr 16 17:40:33.049213 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:33.049184 2560 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Apr 16 17:40:33.067673 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:33.067651 
2560 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-pq62t" Apr 16 17:40:33.075981 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:33.075962 2560 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-pq62t" Apr 16 17:40:33.125669 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:33.125636 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbf1483115d1b5ae94f569f1ec8a827f.slice/crio-b24100147a574ae0498897741dea274657b415175ce1d67544071a2e9ed4493d WatchSource:0}: Error finding container b24100147a574ae0498897741dea274657b415175ce1d67544071a2e9ed4493d: Status 404 returned error can't find the container with id b24100147a574ae0498897741dea274657b415175ce1d67544071a2e9ed4493d Apr 16 17:40:33.126050 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:33.126030 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f86db7b6fd9ec4db0b09003244343a1.slice/crio-b940f52bbf2345aede8c2c0d5311d10ef55852ec7423e070153186fdcdf02141 WatchSource:0}: Error finding container b940f52bbf2345aede8c2c0d5311d10ef55852ec7423e070153186fdcdf02141: Status 404 returned error can't find the container with id b940f52bbf2345aede8c2c0d5311d10ef55852ec7423e070153186fdcdf02141 Apr 16 17:40:33.130632 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:33.130597 2560 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-134-233.ec2.internal\" not found" Apr 16 17:40:33.131288 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:33.130979 2560 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 17:40:33.160431 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:33.160392 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-134-233.ec2.internal" event={"ID":"dbf1483115d1b5ae94f569f1ec8a827f","Type":"ContainerStarted","Data":"b24100147a574ae0498897741dea274657b415175ce1d67544071a2e9ed4493d"} Apr 16 17:40:33.161323 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:33.161303 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-134-233.ec2.internal" event={"ID":"9f86db7b6fd9ec4db0b09003244343a1","Type":"ContainerStarted","Data":"b940f52bbf2345aede8c2c0d5311d10ef55852ec7423e070153186fdcdf02141"} Apr 16 17:40:33.231510 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:33.231490 2560 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-134-233.ec2.internal\" not found" Apr 16 17:40:33.331969 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:33.331939 2560 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-134-233.ec2.internal\" not found" Apr 16 17:40:33.432483 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:33.432413 2560 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-134-233.ec2.internal\" not found" Apr 16 17:40:33.496736 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:33.496712 2560 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Apr 16 17:40:33.537680 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:33.537656 2560 kubelet.go:3340] "Creating a mirror pod for static pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-134-233.ec2.internal" Apr 16 17:40:33.550177 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:33.550155 2560 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 16 17:40:33.552478 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:33.552460 2560 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-134-233.ec2.internal" Apr 16 17:40:33.559106 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:33.559091 2560 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 16 17:40:34.015970 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.015892 2560 apiserver.go:52] "Watching apiserver" Apr 16 17:40:34.023652 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.023616 2560 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Apr 16 17:40:34.024078 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.024050 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-lx5nt","openshift-ovn-kubernetes/ovnkube-node-94t5h","kube-system/konnectivity-agent-nppj4","openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-885h9","openshift-image-registry/node-ca-m85xh","openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-134-233.ec2.internal","openshift-network-diagnostics/network-check-target-cszgd","openshift-network-operator/iptables-alerter-zmp7s","kube-system/kube-apiserver-proxy-ip-10-0-134-233.ec2.internal","openshift-cluster-node-tuning-operator/tuned-vn696","openshift-dns/node-resolver-lh7ql","openshift-multus/multus-additional-cni-plugins-6fzxn","openshift-multus/multus-trqz7"] Apr 16 17:40:34.027330 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.027306 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:40:34.027456 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:34.027403 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cszgd" podUID="29dc29ef-4848-44b6-bfa3-4a7545e874ce" Apr 16 17:40:34.029590 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.029567 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.031761 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.031739 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/konnectivity-agent-nppj4" Apr 16 17:40:34.032978 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.032281 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Apr 16 17:40:34.032978 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.032314 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Apr 16 17:40:34.032978 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.032490 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Apr 16 17:40:34.032978 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.032523 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Apr 16 17:40:34.032978 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.032677 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Apr 16 17:40:34.032978 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.032714 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-9zc72\"" Apr 16 17:40:34.032978 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.032862 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Apr 16 17:40:34.033951 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.033933 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"default-dockercfg-xm98m\"" Apr 16 17:40:34.033951 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.033933 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kube-system\"/\"konnectivity-ca-bundle\"" Apr 16 17:40:34.034094 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.034054 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"konnectivity-agent\"" Apr 16 17:40:34.036253 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.036234 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-885h9" Apr 16 17:40:34.038795 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.038207 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"openshift-service-ca.crt\"" Apr 16 17:40:34.038795 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.038266 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-sa-dockercfg-4n2dn\"" Apr 16 17:40:34.038795 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.038213 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"kube-root-ca.crt\"" Apr 16 17:40:34.038795 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.038393 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-metrics-serving-cert\"" Apr 16 17:40:34.038795 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.038510 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-m85xh" Apr 16 17:40:34.038795 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.038569 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:40:34.038795 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:34.038628 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lx5nt" podUID="47012ffa-3deb-41b8-b770-fc4db562d87e" Apr 16 17:40:34.040213 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.040194 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Apr 16 17:40:34.040309 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.040255 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Apr 16 17:40:34.040639 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.040442 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-dqnd6\"" Apr 16 17:40:34.040639 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.040448 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Apr 16 17:40:34.043953 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.043933 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-zmp7s" Apr 16 17:40:34.044097 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.044078 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.046980 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.045916 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Apr 16 17:40:34.046980 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.045948 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-dockercfg-k5ft7\"" Apr 16 17:40:34.046980 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.046043 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"openshift-service-ca.crt\"" Apr 16 17:40:34.046980 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.046148 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Apr 16 17:40:34.046980 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.046172 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Apr 16 17:40:34.046980 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.046432 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"kube-root-ca.crt\"" Apr 16 17:40:34.046980 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.046635 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"tuned-dockercfg-lwjrt\"" Apr 16 17:40:34.048929 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.048910 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-lh7ql" Apr 16 17:40:34.049021 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.048937 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/7fba2eed-4c37-4bcf-a44a-229c6669c76f-sys-fs\") pod \"aws-ebs-csi-driver-node-885h9\" (UID: \"7fba2eed-4c37-4bcf-a44a-229c6669c76f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-885h9" Apr 16 17:40:34.049021 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.048965 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-host-run-netns\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.049021 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.048991 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-run-systemd\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.049021 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.049013 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-var-lib-openvswitch\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.049216 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.049038 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7fba2eed-4c37-4bcf-a44a-229c6669c76f-registration-dir\") pod \"aws-ebs-csi-driver-node-885h9\" (UID: \"7fba2eed-4c37-4bcf-a44a-229c6669c76f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-885h9" Apr 16 17:40:34.049216 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.049080 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/7fba2eed-4c37-4bcf-a44a-229c6669c76f-device-dir\") pod \"aws-ebs-csi-driver-node-885h9\" (UID: \"7fba2eed-4c37-4bcf-a44a-229c6669c76f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-885h9" Apr 16 17:40:34.049216 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.049104 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-host-slash\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.049216 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.049129 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/7fba2eed-4c37-4bcf-a44a-229c6669c76f-etc-selinux\") pod \"aws-ebs-csi-driver-node-885h9\" (UID: \"7fba2eed-4c37-4bcf-a44a-229c6669c76f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-885h9" Apr 16 17:40:34.049216 
ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.049156 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hs7r\" (UniqueName: \"kubernetes.io/projected/7fba2eed-4c37-4bcf-a44a-229c6669c76f-kube-api-access-6hs7r\") pod \"aws-ebs-csi-driver-node-885h9\" (UID: \"7fba2eed-4c37-4bcf-a44a-229c6669c76f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-885h9" Apr 16 17:40:34.049216 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.049182 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-node-log\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.049216 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.049207 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-log-socket\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.049584 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.049229 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-run-openvswitch\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.049584 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.049253 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-ovnkube-config\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.049584 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.049277 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/739c3ada-3675-42f5-afa7-51eba65d8c7e-agent-certs\") pod \"konnectivity-agent-nppj4\" (UID: \"739c3ada-3675-42f5-afa7-51eba65d8c7e\") " pod="kube-system/konnectivity-agent-nppj4" Apr 16 17:40:34.049584 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.049300 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7fba2eed-4c37-4bcf-a44a-229c6669c76f-socket-dir\") pod \"aws-ebs-csi-driver-node-885h9\" (UID: \"7fba2eed-4c37-4bcf-a44a-229c6669c76f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-885h9" Apr 16 17:40:34.049584 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.049324 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/455194d4-fde7-420d-8c1d-1e43000eb0a3-serviceca\") pod \"node-ca-m85xh\" (UID: \"455194d4-fde7-420d-8c1d-1e43000eb0a3\") " pod="openshift-image-registry/node-ca-m85xh" Apr 16 17:40:34.049584 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.049371 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-host-cni-netd\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.049584 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.049400 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.049584 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.049426 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-env-overrides\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.049584 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.049484 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-ovn-node-metrics-cert\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.049584 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.049508 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7fba2eed-4c37-4bcf-a44a-229c6669c76f-kubelet-dir\") pod \"aws-ebs-csi-driver-node-885h9\" (UID: \"7fba2eed-4c37-4bcf-a44a-229c6669c76f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-885h9" Apr 16 17:40:34.049584 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.049530 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/455194d4-fde7-420d-8c1d-1e43000eb0a3-host\") pod \"node-ca-m85xh\" (UID: \"455194d4-fde7-420d-8c1d-1e43000eb0a3\") " pod="openshift-image-registry/node-ca-m85xh" Apr 16 17:40:34.049584 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.049559 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6984\" (UniqueName: \"kubernetes.io/projected/455194d4-fde7-420d-8c1d-1e43000eb0a3-kube-api-access-v6984\") pod \"node-ca-m85xh\" (UID: \"455194d4-fde7-420d-8c1d-1e43000eb0a3\") " pod="openshift-image-registry/node-ca-m85xh" Apr 16 17:40:34.050153 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.049625 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-host-run-ovn-kubernetes\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.050153 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.049665 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-ovnkube-script-lib\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.050153 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.049715 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47012ffa-3deb-41b8-b770-fc4db562d87e-metrics-certs\") pod \"network-metrics-daemon-lx5nt\" (UID: \"47012ffa-3deb-41b8-b770-fc4db562d87e\") " pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:40:34.050153 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.049757 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc6cv\" (UniqueName: \"kubernetes.io/projected/47012ffa-3deb-41b8-b770-fc4db562d87e-kube-api-access-lc6cv\") pod \"network-metrics-daemon-lx5nt\" (UID: \"47012ffa-3deb-41b8-b770-fc4db562d87e\") " pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:40:34.050153 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.049793 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sghmd\" (UniqueName: \"kubernetes.io/projected/29dc29ef-4848-44b6-bfa3-4a7545e874ce-kube-api-access-sghmd\") pod \"network-check-target-cszgd\" (UID: \"29dc29ef-4848-44b6-bfa3-4a7545e874ce\") " pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:40:34.050153 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.049823 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-host-kubelet\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.050153 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.049890 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-etc-openvswitch\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.050153 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.049924 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-host-cni-bin\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.050530 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.050325 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/739c3ada-3675-42f5-afa7-51eba65d8c7e-konnectivity-ca\") pod \"konnectivity-agent-nppj4\" (UID: \"739c3ada-3675-42f5-afa7-51eba65d8c7e\") " pod="kube-system/konnectivity-agent-nppj4" Apr 16 17:40:34.050530 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.050385 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-systemd-units\") pod \"ovnkube-node-94t5h\" (UID: 
\"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.050700 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.050665 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-run-ovn\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.050780 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.050756 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5qdz\" (UniqueName: \"kubernetes.io/projected/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-kube-api-access-v5qdz\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.050852 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.050822 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Apr 16 17:40:34.050907 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.050882 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-8m9jf\"" Apr 16 17:40:34.050963 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.050824 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Apr 16 17:40:34.051360 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.051342 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-6fzxn" Apr 16 17:40:34.053473 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.053456 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Apr 16 17:40:34.053592 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.053575 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Apr 16 17:40:34.053685 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.053666 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-l7cw4\"" Apr 16 17:40:34.053926 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.053903 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.054090 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.054074 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Apr 16 17:40:34.054270 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.054228 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Apr 16 17:40:34.054661 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.054440 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Apr 16 17:40:34.055744 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.055716 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-dx7fm\"" Apr 16 17:40:34.055846 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.055769 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Apr 16 17:40:34.076875 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.076849 2560 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-15 17:35:33 +0000 UTC" deadline="2027-12-28 04:49:37.31829161 +0000 UTC" Apr 16 17:40:34.076985 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.076876 2560 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="14891h9m3.241419431s" Apr 16 17:40:34.138343 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.138318 2560 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 16 17:40:34.151041 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151019 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-host-kubelet\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.151178 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151047 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-host-cni-bin\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.151178 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151073 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/739c3ada-3675-42f5-afa7-51eba65d8c7e-konnectivity-ca\") pod \"konnectivity-agent-nppj4\" (UID: \"739c3ada-3675-42f5-afa7-51eba65d8c7e\") " pod="kube-system/konnectivity-agent-nppj4" Apr 16 17:40:34.151178 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151101 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6b045243-0c99-4991-8719-5efd0f27a340-cnibin\") pod \"multus-additional-cni-plugins-6fzxn\" (UID: \"6b045243-0c99-4991-8719-5efd0f27a340\") " pod="openshift-multus/multus-additional-cni-plugins-6fzxn" Apr 16 17:40:34.151178 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151118 2560 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/55b71eac-1a4c-4273-baac-ae7691dbf264-sys\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.151178 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151147 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-run-ovn\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.151178 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151151 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-host-cni-bin\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.151178 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151145 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-host-kubelet\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.151506 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151189 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/7fba2eed-4c37-4bcf-a44a-229c6669c76f-sys-fs\") pod \"aws-ebs-csi-driver-node-885h9\" (UID: \"7fba2eed-4c37-4bcf-a44a-229c6669c76f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-885h9" Apr 16 17:40:34.151506 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151207 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-run-ovn\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.151506 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151235 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d991b7f2-6a61-4e7d-aa21-5a9bfbd3542e-tmp-dir\") pod \"node-resolver-lh7ql\" (UID: \"d991b7f2-6a61-4e7d-aa21-5a9bfbd3542e\") " pod="openshift-dns/node-resolver-lh7ql" Apr 16 17:40:34.151506 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151255 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-multus-cni-dir\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.151506 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151281 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-host-var-lib-cni-bin\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.151506 ip-10-0-134-233 kubenswrapper[2560]: 
I0416 17:40:34.151305 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkl2v\" (UniqueName: \"kubernetes.io/projected/f4e77261-e614-4a80-bbc5-28200547728b-kube-api-access-mkl2v\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.151506 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151403 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/5d3784fe-3481-43df-9a89-dda624c566b8-iptables-alerter-script\") pod \"iptables-alerter-zmp7s\" (UID: \"5d3784fe-3481-43df-9a89-dda624c566b8\") " pod="openshift-network-operator/iptables-alerter-zmp7s" Apr 16 17:40:34.151506 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151413 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/7fba2eed-4c37-4bcf-a44a-229c6669c76f-sys-fs\") pod \"aws-ebs-csi-driver-node-885h9\" (UID: \"7fba2eed-4c37-4bcf-a44a-229c6669c76f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-885h9" Apr 16 17:40:34.151506 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151444 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-run-systemd\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.151506 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151475 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-var-lib-openvswitch\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.151506 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151501 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7fba2eed-4c37-4bcf-a44a-229c6669c76f-registration-dir\") pod \"aws-ebs-csi-driver-node-885h9\" (UID: \"7fba2eed-4c37-4bcf-a44a-229c6669c76f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-885h9" Apr 16 17:40:34.151880 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151527 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/7fba2eed-4c37-4bcf-a44a-229c6669c76f-device-dir\") pod \"aws-ebs-csi-driver-node-885h9\" (UID: \"7fba2eed-4c37-4bcf-a44a-229c6669c76f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-885h9" Apr 16 17:40:34.151880 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151538 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-run-systemd\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.151880 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151554 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-var-lib-openvswitch\") pod 
\"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.151880 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151556 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhz5w\" (UniqueName: \"kubernetes.io/projected/5d3784fe-3481-43df-9a89-dda624c566b8-kube-api-access-bhz5w\") pod \"iptables-alerter-zmp7s\" (UID: \"5d3784fe-3481-43df-9a89-dda624c566b8\") " pod="openshift-network-operator/iptables-alerter-zmp7s" Apr 16 17:40:34.151880 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151569 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7fba2eed-4c37-4bcf-a44a-229c6669c76f-registration-dir\") pod \"aws-ebs-csi-driver-node-885h9\" (UID: \"7fba2eed-4c37-4bcf-a44a-229c6669c76f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-885h9" Apr 16 17:40:34.151880 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151592 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/55b71eac-1a4c-4273-baac-ae7691dbf264-etc-modprobe-d\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.151880 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151583 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/7fba2eed-4c37-4bcf-a44a-229c6669c76f-device-dir\") pod \"aws-ebs-csi-driver-node-885h9\" (UID: \"7fba2eed-4c37-4bcf-a44a-229c6669c76f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-885h9" Apr 16 17:40:34.151880 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151620 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/55b71eac-1a4c-4273-baac-ae7691dbf264-etc-sysctl-conf\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.151880 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151645 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/55b71eac-1a4c-4273-baac-ae7691dbf264-etc-systemd\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.151880 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151673 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6hs7r\" (UniqueName: \"kubernetes.io/projected/7fba2eed-4c37-4bcf-a44a-229c6669c76f-kube-api-access-6hs7r\") pod \"aws-ebs-csi-driver-node-885h9\" (UID: \"7fba2eed-4c37-4bcf-a44a-229c6669c76f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-885h9" Apr 16 17:40:34.151880 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151700 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6b045243-0c99-4991-8719-5efd0f27a340-os-release\") pod \"multus-additional-cni-plugins-6fzxn\" (UID: \"6b045243-0c99-4991-8719-5efd0f27a340\") " pod="openshift-multus/multus-additional-cni-plugins-6fzxn" 
Apr 16 17:40:34.151880 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151726 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6b045243-0c99-4991-8719-5efd0f27a340-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6fzxn\" (UID: \"6b045243-0c99-4991-8719-5efd0f27a340\") " pod="openshift-multus/multus-additional-cni-plugins-6fzxn" Apr 16 17:40:34.151880 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151751 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/6b045243-0c99-4991-8719-5efd0f27a340-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-6fzxn\" (UID: \"6b045243-0c99-4991-8719-5efd0f27a340\") " pod="openshift-multus/multus-additional-cni-plugins-6fzxn" Apr 16 17:40:34.151880 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151777 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-host-run-k8s-cni-cncf-io\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.151880 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151804 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-host-run-netns\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.151880 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151846 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-node-log\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.152587 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151874 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xwzs\" (UniqueName: \"kubernetes.io/projected/6b045243-0c99-4991-8719-5efd0f27a340-kube-api-access-5xwzs\") pod \"multus-additional-cni-plugins-6fzxn\" (UID: \"6b045243-0c99-4991-8719-5efd0f27a340\") " pod="openshift-multus/multus-additional-cni-plugins-6fzxn" Apr 16 17:40:34.152587 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151899 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/55b71eac-1a4c-4273-baac-ae7691dbf264-tmp\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.152587 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151925 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-run-openvswitch\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.152587 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151935 2560 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-node-log\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.152587 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151974 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/739c3ada-3675-42f5-afa7-51eba65d8c7e-agent-certs\") pod \"konnectivity-agent-nppj4\" (UID: \"739c3ada-3675-42f5-afa7-51eba65d8c7e\") " pod="kube-system/konnectivity-agent-nppj4" Apr 16 17:40:34.152587 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.151977 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-run-openvswitch\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.152587 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152012 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7fba2eed-4c37-4bcf-a44a-229c6669c76f-socket-dir\") pod \"aws-ebs-csi-driver-node-885h9\" (UID: \"7fba2eed-4c37-4bcf-a44a-229c6669c76f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-885h9" Apr 16 17:40:34.152587 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152055 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/55b71eac-1a4c-4273-baac-ae7691dbf264-etc-tuned\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.152587 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152089 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-host-cni-netd\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.152587 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152110 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/455194d4-fde7-420d-8c1d-1e43000eb0a3-host\") pod \"node-ca-m85xh\" (UID: \"455194d4-fde7-420d-8c1d-1e43000eb0a3\") " pod="openshift-image-registry/node-ca-m85xh" Apr 16 17:40:34.152587 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152137 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v6984\" (UniqueName: \"kubernetes.io/projected/455194d4-fde7-420d-8c1d-1e43000eb0a3-kube-api-access-v6984\") pod \"node-ca-m85xh\" (UID: \"455194d4-fde7-420d-8c1d-1e43000eb0a3\") " pod="openshift-image-registry/node-ca-m85xh" Apr 16 17:40:34.152587 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152142 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7fba2eed-4c37-4bcf-a44a-229c6669c76f-socket-dir\") pod \"aws-ebs-csi-driver-node-885h9\" (UID: \"7fba2eed-4c37-4bcf-a44a-229c6669c76f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-885h9" Apr 16 17:40:34.152587 ip-10-0-134-233 kubenswrapper[2560]: I0416 
17:40:34.152165 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6b045243-0c99-4991-8719-5efd0f27a340-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6fzxn\" (UID: \"6b045243-0c99-4991-8719-5efd0f27a340\") " pod="openshift-multus/multus-additional-cni-plugins-6fzxn" Apr 16 17:40:34.152587 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152187 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-host-cni-netd\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.152587 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152191 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-host-var-lib-kubelet\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.152587 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152230 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/55b71eac-1a4c-4273-baac-ae7691dbf264-host\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.152587 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152272 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/455194d4-fde7-420d-8c1d-1e43000eb0a3-host\") pod \"node-ca-m85xh\" (UID: \"455194d4-fde7-420d-8c1d-1e43000eb0a3\") " pod="openshift-image-registry/node-ca-m85xh" Apr 16 17:40:34.153369 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152306 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpk7l\" (UniqueName: \"kubernetes.io/projected/d991b7f2-6a61-4e7d-aa21-5a9bfbd3542e-kube-api-access-cpk7l\") pod \"node-resolver-lh7ql\" (UID: \"d991b7f2-6a61-4e7d-aa21-5a9bfbd3542e\") " pod="openshift-dns/node-resolver-lh7ql" Apr 16 17:40:34.153369 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152333 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f4e77261-e614-4a80-bbc5-28200547728b-multus-daemon-config\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.153369 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152359 2560 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Apr 16 17:40:34.153369 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152361 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/55b71eac-1a4c-4273-baac-ae7691dbf264-etc-sysctl-d\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.153369 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152408 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-etc-openvswitch\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.153369 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152447 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6b045243-0c99-4991-8719-5efd0f27a340-cni-binary-copy\") pod \"multus-additional-cni-plugins-6fzxn\" (UID: \"6b045243-0c99-4991-8719-5efd0f27a340\") " pod="openshift-multus/multus-additional-cni-plugins-6fzxn" Apr 16 17:40:34.153369 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152477 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f4e77261-e614-4a80-bbc5-28200547728b-cni-binary-copy\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.153369 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152491 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-etc-openvswitch\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.153369 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152497 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/739c3ada-3675-42f5-afa7-51eba65d8c7e-konnectivity-ca\") pod \"konnectivity-agent-nppj4\" (UID: \"739c3ada-3675-42f5-afa7-51eba65d8c7e\") " pod="kube-system/konnectivity-agent-nppj4" Apr 16 17:40:34.153369 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152503 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-multus-socket-dir-parent\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.153369 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152530 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-host-run-multus-certs\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.153369 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152577 2560 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-etc-kubernetes\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.153369 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152599 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5d3784fe-3481-43df-9a89-dda624c566b8-host-slash\") pod \"iptables-alerter-zmp7s\" (UID: \"5d3784fe-3481-43df-9a89-dda624c566b8\") " pod="openshift-network-operator/iptables-alerter-zmp7s" Apr 16 17:40:34.153369 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152621 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55b71eac-1a4c-4273-baac-ae7691dbf264-lib-modules\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.153369 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152654 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-systemd-units\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.153369 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152670 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v5qdz\" (UniqueName: \"kubernetes.io/projected/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-kube-api-access-v5qdz\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.153369 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152696 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-system-cni-dir\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.154055 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152715 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-systemd-units\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.154055 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152724 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/55b71eac-1a4c-4273-baac-ae7691dbf264-var-lib-kubelet\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.154055 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152763 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-host-run-netns\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.154055 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152792 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-os-release\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.154055 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152818 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-hostroot\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.154055 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152858 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-multus-conf-dir\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.154055 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152858 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-host-run-netns\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.154055 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152887 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-host-slash\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.154055 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152922 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-host-slash\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.154055 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152922 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/7fba2eed-4c37-4bcf-a44a-229c6669c76f-etc-selinux\") pod \"aws-ebs-csi-driver-node-885h9\" (UID: \"7fba2eed-4c37-4bcf-a44a-229c6669c76f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-885h9" Apr 16 17:40:34.154055 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152956 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d991b7f2-6a61-4e7d-aa21-5a9bfbd3542e-hosts-file\") pod \"node-resolver-lh7ql\" (UID: \"d991b7f2-6a61-4e7d-aa21-5a9bfbd3542e\") " pod="openshift-dns/node-resolver-lh7ql" Apr 16 17:40:34.154055 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.152986 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6b045243-0c99-4991-8719-5efd0f27a340-system-cni-dir\") pod 
\"multus-additional-cni-plugins-6fzxn\" (UID: \"6b045243-0c99-4991-8719-5efd0f27a340\") " pod="openshift-multus/multus-additional-cni-plugins-6fzxn" Apr 16 17:40:34.154055 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.153012 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/7fba2eed-4c37-4bcf-a44a-229c6669c76f-etc-selinux\") pod \"aws-ebs-csi-driver-node-885h9\" (UID: \"7fba2eed-4c37-4bcf-a44a-229c6669c76f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-885h9" Apr 16 17:40:34.154055 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.153013 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-host-var-lib-cni-multus\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.154055 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.153049 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/55b71eac-1a4c-4273-baac-ae7691dbf264-run\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.154055 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.153083 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-log-socket\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.154055 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.153129 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-log-socket\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.154726 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.153154 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-ovnkube-config\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.154726 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.153185 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/455194d4-fde7-420d-8c1d-1e43000eb0a3-serviceca\") pod \"node-ca-m85xh\" (UID: \"455194d4-fde7-420d-8c1d-1e43000eb0a3\") " pod="openshift-image-registry/node-ca-m85xh" Apr 16 17:40:34.154726 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.153215 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/55b71eac-1a4c-4273-baac-ae7691dbf264-etc-kubernetes\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.154726 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.153244 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.154726 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.153273 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-env-overrides\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.154726 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.153297 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-ovn-node-metrics-cert\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.154726 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.153320 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7fba2eed-4c37-4bcf-a44a-229c6669c76f-kubelet-dir\") pod \"aws-ebs-csi-driver-node-885h9\" (UID: \"7fba2eed-4c37-4bcf-a44a-229c6669c76f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-885h9" Apr 16 17:40:34.154726 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.153330 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.154726 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.153344 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lc6cv\" (UniqueName: \"kubernetes.io/projected/47012ffa-3deb-41b8-b770-fc4db562d87e-kube-api-access-lc6cv\") pod \"network-metrics-daemon-lx5nt\" (UID: \"47012ffa-3deb-41b8-b770-fc4db562d87e\") " pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:40:34.154726 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.153372 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-cnibin\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.154726 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.153398 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/55b71eac-1a4c-4273-baac-ae7691dbf264-etc-sysconfig\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.154726 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.153422 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-host-run-ovn-kubernetes\") pod \"ovnkube-node-94t5h\" (UID: 
\"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.154726 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.153447 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-ovnkube-script-lib\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.154726 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.153471 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47012ffa-3deb-41b8-b770-fc4db562d87e-metrics-certs\") pod \"network-metrics-daemon-lx5nt\" (UID: \"47012ffa-3deb-41b8-b770-fc4db562d87e\") " pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:40:34.154726 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.153496 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sghmd\" (UniqueName: \"kubernetes.io/projected/29dc29ef-4848-44b6-bfa3-4a7545e874ce-kube-api-access-sghmd\") pod \"network-check-target-cszgd\" (UID: \"29dc29ef-4848-44b6-bfa3-4a7545e874ce\") " pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:40:34.154726 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.153521 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwcj8\" (UniqueName: \"kubernetes.io/projected/55b71eac-1a4c-4273-baac-ae7691dbf264-kube-api-access-mwcj8\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.154726 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.153604 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/455194d4-fde7-420d-8c1d-1e43000eb0a3-serviceca\") pod \"node-ca-m85xh\" (UID: \"455194d4-fde7-420d-8c1d-1e43000eb0a3\") " pod="openshift-image-registry/node-ca-m85xh" Apr 16 17:40:34.155430 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.153627 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-host-run-ovn-kubernetes\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.155430 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:34.153712 2560 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 17:40:34.155430 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.153754 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-ovnkube-config\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.155430 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:34.153812 2560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47012ffa-3deb-41b8-b770-fc4db562d87e-metrics-certs podName:47012ffa-3deb-41b8-b770-fc4db562d87e nodeName:}" failed. 
No retries permitted until 2026-04-16 17:40:34.653763459 +0000 UTC m=+3.016972324 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/47012ffa-3deb-41b8-b770-fc4db562d87e-metrics-certs") pod "network-metrics-daemon-lx5nt" (UID: "47012ffa-3deb-41b8-b770-fc4db562d87e") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 17:40:34.155430 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.153815 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7fba2eed-4c37-4bcf-a44a-229c6669c76f-kubelet-dir\") pod \"aws-ebs-csi-driver-node-885h9\" (UID: \"7fba2eed-4c37-4bcf-a44a-229c6669c76f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-885h9" Apr 16 17:40:34.155430 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.154361 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-env-overrides\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.155430 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.154610 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-ovnkube-script-lib\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.156381 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.156353 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/739c3ada-3675-42f5-afa7-51eba65d8c7e-agent-certs\") pod \"konnectivity-agent-nppj4\" (UID: \"739c3ada-3675-42f5-afa7-51eba65d8c7e\") " pod="kube-system/konnectivity-agent-nppj4" Apr 16 17:40:34.156855 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.156819 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-ovn-node-metrics-cert\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.162438 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.162417 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5qdz\" (UniqueName: \"kubernetes.io/projected/5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638-kube-api-access-v5qdz\") pod \"ovnkube-node-94t5h\" (UID: \"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638\") " pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.162656 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:34.162633 2560 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 17:40:34.162656 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:34.162656 2560 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 17:40:34.162806 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:34.162668 2560 projected.go:194] Error preparing data for projected volume kube-api-access-sghmd for pod 
openshift-network-diagnostics/network-check-target-cszgd: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 17:40:34.162806 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:34.162721 2560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29dc29ef-4848-44b6-bfa3-4a7545e874ce-kube-api-access-sghmd podName:29dc29ef-4848-44b6-bfa3-4a7545e874ce nodeName:}" failed. No retries permitted until 2026-04-16 17:40:34.662704531 +0000 UTC m=+3.025913404 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-sghmd" (UniqueName: "kubernetes.io/projected/29dc29ef-4848-44b6-bfa3-4a7545e874ce-kube-api-access-sghmd") pod "network-check-target-cszgd" (UID: "29dc29ef-4848-44b6-bfa3-4a7545e874ce") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 17:40:34.165267 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.165247 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6984\" (UniqueName: \"kubernetes.io/projected/455194d4-fde7-420d-8c1d-1e43000eb0a3-kube-api-access-v6984\") pod \"node-ca-m85xh\" (UID: \"455194d4-fde7-420d-8c1d-1e43000eb0a3\") " pod="openshift-image-registry/node-ca-m85xh" Apr 16 17:40:34.166549 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.166527 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc6cv\" (UniqueName: \"kubernetes.io/projected/47012ffa-3deb-41b8-b770-fc4db562d87e-kube-api-access-lc6cv\") pod \"network-metrics-daemon-lx5nt\" (UID: \"47012ffa-3deb-41b8-b770-fc4db562d87e\") " pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:40:34.166937 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.166912 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hs7r\" (UniqueName: \"kubernetes.io/projected/7fba2eed-4c37-4bcf-a44a-229c6669c76f-kube-api-access-6hs7r\") pod \"aws-ebs-csi-driver-node-885h9\" (UID: \"7fba2eed-4c37-4bcf-a44a-229c6669c76f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-885h9" Apr 16 17:40:34.253898 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.253867 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/55b71eac-1a4c-4273-baac-ae7691dbf264-etc-kubernetes\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.253898 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.253903 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-cnibin\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.254132 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.253924 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/55b71eac-1a4c-4273-baac-ae7691dbf264-etc-sysconfig\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.254132 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.253990 2560 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/55b71eac-1a4c-4273-baac-ae7691dbf264-etc-kubernetes\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.254132 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254017 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mwcj8\" (UniqueName: \"kubernetes.io/projected/55b71eac-1a4c-4273-baac-ae7691dbf264-kube-api-access-mwcj8\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.254132 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254044 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-cnibin\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.254132 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254050 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6b045243-0c99-4991-8719-5efd0f27a340-cnibin\") pod \"multus-additional-cni-plugins-6fzxn\" (UID: \"6b045243-0c99-4991-8719-5efd0f27a340\") " pod="openshift-multus/multus-additional-cni-plugins-6fzxn" Apr 16 17:40:34.254132 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254070 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/55b71eac-1a4c-4273-baac-ae7691dbf264-sys\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.254132 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254081 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/55b71eac-1a4c-4273-baac-ae7691dbf264-etc-sysconfig\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.254132 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254107 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d991b7f2-6a61-4e7d-aa21-5a9bfbd3542e-tmp-dir\") pod \"node-resolver-lh7ql\" (UID: \"d991b7f2-6a61-4e7d-aa21-5a9bfbd3542e\") " pod="openshift-dns/node-resolver-lh7ql" Apr 16 17:40:34.254132 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254130 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-multus-cni-dir\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.254563 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254138 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/55b71eac-1a4c-4273-baac-ae7691dbf264-sys\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.254563 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254144 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6b045243-0c99-4991-8719-5efd0f27a340-cnibin\") pod \"multus-additional-cni-plugins-6fzxn\" (UID: \"6b045243-0c99-4991-8719-5efd0f27a340\") " pod="openshift-multus/multus-additional-cni-plugins-6fzxn" Apr 16 17:40:34.254563 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254154 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-host-var-lib-cni-bin\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.254563 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254181 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-host-var-lib-cni-bin\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.254563 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254206 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mkl2v\" (UniqueName: \"kubernetes.io/projected/f4e77261-e614-4a80-bbc5-28200547728b-kube-api-access-mkl2v\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.254563 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254237 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/5d3784fe-3481-43df-9a89-dda624c566b8-iptables-alerter-script\") pod \"iptables-alerter-zmp7s\" (UID: \"5d3784fe-3481-43df-9a89-dda624c566b8\") " pod="openshift-network-operator/iptables-alerter-zmp7s" Apr 16 17:40:34.254563 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254267 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bhz5w\" (UniqueName: \"kubernetes.io/projected/5d3784fe-3481-43df-9a89-dda624c566b8-kube-api-access-bhz5w\") pod \"iptables-alerter-zmp7s\" (UID: \"5d3784fe-3481-43df-9a89-dda624c566b8\") " pod="openshift-network-operator/iptables-alerter-zmp7s" Apr 16 17:40:34.254563 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254294 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/55b71eac-1a4c-4273-baac-ae7691dbf264-etc-modprobe-d\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.254563 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254318 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/55b71eac-1a4c-4273-baac-ae7691dbf264-etc-sysctl-conf\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.254563 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254317 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-multus-cni-dir\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.254563 ip-10-0-134-233 kubenswrapper[2560]: I0416 
17:40:34.254340 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/55b71eac-1a4c-4273-baac-ae7691dbf264-etc-systemd\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.254563 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254368 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6b045243-0c99-4991-8719-5efd0f27a340-os-release\") pod \"multus-additional-cni-plugins-6fzxn\" (UID: \"6b045243-0c99-4991-8719-5efd0f27a340\") " pod="openshift-multus/multus-additional-cni-plugins-6fzxn" Apr 16 17:40:34.254563 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254393 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6b045243-0c99-4991-8719-5efd0f27a340-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6fzxn\" (UID: \"6b045243-0c99-4991-8719-5efd0f27a340\") " pod="openshift-multus/multus-additional-cni-plugins-6fzxn" Apr 16 17:40:34.254563 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254422 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/6b045243-0c99-4991-8719-5efd0f27a340-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-6fzxn\" (UID: \"6b045243-0c99-4991-8719-5efd0f27a340\") " pod="openshift-multus/multus-additional-cni-plugins-6fzxn" Apr 16 17:40:34.254563 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254431 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/55b71eac-1a4c-4273-baac-ae7691dbf264-etc-modprobe-d\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.254563 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254436 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d991b7f2-6a61-4e7d-aa21-5a9bfbd3542e-tmp-dir\") pod \"node-resolver-lh7ql\" (UID: \"d991b7f2-6a61-4e7d-aa21-5a9bfbd3542e\") " pod="openshift-dns/node-resolver-lh7ql" Apr 16 17:40:34.254563 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254448 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-host-run-k8s-cni-cncf-io\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.255179 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254474 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-host-run-netns\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.255179 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254500 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5xwzs\" (UniqueName: \"kubernetes.io/projected/6b045243-0c99-4991-8719-5efd0f27a340-kube-api-access-5xwzs\") pod \"multus-additional-cni-plugins-6fzxn\" (UID: 
\"6b045243-0c99-4991-8719-5efd0f27a340\") " pod="openshift-multus/multus-additional-cni-plugins-6fzxn" Apr 16 17:40:34.255179 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254524 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/55b71eac-1a4c-4273-baac-ae7691dbf264-tmp\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.255179 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254552 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/55b71eac-1a4c-4273-baac-ae7691dbf264-etc-tuned\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.255179 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254582 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6b045243-0c99-4991-8719-5efd0f27a340-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6fzxn\" (UID: \"6b045243-0c99-4991-8719-5efd0f27a340\") " pod="openshift-multus/multus-additional-cni-plugins-6fzxn" Apr 16 17:40:34.255179 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254589 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/55b71eac-1a4c-4273-baac-ae7691dbf264-etc-sysctl-conf\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.255179 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254608 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-host-var-lib-kubelet\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.255179 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254501 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6b045243-0c99-4991-8719-5efd0f27a340-os-release\") pod \"multus-additional-cni-plugins-6fzxn\" (UID: \"6b045243-0c99-4991-8719-5efd0f27a340\") " pod="openshift-multus/multus-additional-cni-plugins-6fzxn" Apr 16 17:40:34.255179 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254634 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/55b71eac-1a4c-4273-baac-ae7691dbf264-host\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.255179 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254627 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/55b71eac-1a4c-4273-baac-ae7691dbf264-etc-systemd\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.255179 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254701 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-host-var-lib-kubelet\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.255179 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254699 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cpk7l\" (UniqueName: \"kubernetes.io/projected/d991b7f2-6a61-4e7d-aa21-5a9bfbd3542e-kube-api-access-cpk7l\") pod \"node-resolver-lh7ql\" (UID: \"d991b7f2-6a61-4e7d-aa21-5a9bfbd3542e\") " pod="openshift-dns/node-resolver-lh7ql" Apr 16 17:40:34.255179 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254733 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-host-run-netns\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.255179 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254744 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f4e77261-e614-4a80-bbc5-28200547728b-multus-daemon-config\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.255179 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254762 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-host-run-k8s-cni-cncf-io\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.255179 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254772 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/55b71eac-1a4c-4273-baac-ae7691dbf264-etc-sysctl-d\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.255179 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254803 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6b045243-0c99-4991-8719-5efd0f27a340-cni-binary-copy\") pod \"multus-additional-cni-plugins-6fzxn\" (UID: \"6b045243-0c99-4991-8719-5efd0f27a340\") " pod="openshift-multus/multus-additional-cni-plugins-6fzxn" Apr 16 17:40:34.255179 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254808 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/55b71eac-1a4c-4273-baac-ae7691dbf264-host\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.256003 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.254973 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/6b045243-0c99-4991-8719-5efd0f27a340-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-6fzxn\" (UID: \"6b045243-0c99-4991-8719-5efd0f27a340\") " pod="openshift-multus/multus-additional-cni-plugins-6fzxn" Apr 16 17:40:34.256003 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255011 2560 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/55b71eac-1a4c-4273-baac-ae7691dbf264-etc-sysctl-d\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.256003 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255013 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f4e77261-e614-4a80-bbc5-28200547728b-cni-binary-copy\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.256003 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255059 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-multus-socket-dir-parent\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.256003 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255118 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-host-run-multus-certs\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.256003 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255145 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-etc-kubernetes\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.256003 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255171 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5d3784fe-3481-43df-9a89-dda624c566b8-host-slash\") pod \"iptables-alerter-zmp7s\" (UID: \"5d3784fe-3481-43df-9a89-dda624c566b8\") " pod="openshift-network-operator/iptables-alerter-zmp7s" Apr 16 17:40:34.256003 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255195 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55b71eac-1a4c-4273-baac-ae7691dbf264-lib-modules\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.256003 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255222 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-system-cni-dir\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.256003 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255247 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/55b71eac-1a4c-4273-baac-ae7691dbf264-var-lib-kubelet\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.256003 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255273 2560 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-os-release\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.256003 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255296 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-hostroot\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.256003 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255319 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-multus-conf-dir\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.256003 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255327 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6b045243-0c99-4991-8719-5efd0f27a340-cni-binary-copy\") pod \"multus-additional-cni-plugins-6fzxn\" (UID: \"6b045243-0c99-4991-8719-5efd0f27a340\") " pod="openshift-multus/multus-additional-cni-plugins-6fzxn" Apr 16 17:40:34.256003 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255347 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d991b7f2-6a61-4e7d-aa21-5a9bfbd3542e-hosts-file\") pod \"node-resolver-lh7ql\" (UID: \"d991b7f2-6a61-4e7d-aa21-5a9bfbd3542e\") " pod="openshift-dns/node-resolver-lh7ql" Apr 16 17:40:34.256003 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255351 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f4e77261-e614-4a80-bbc5-28200547728b-multus-daemon-config\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.256003 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255350 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6b045243-0c99-4991-8719-5efd0f27a340-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6fzxn\" (UID: \"6b045243-0c99-4991-8719-5efd0f27a340\") " pod="openshift-multus/multus-additional-cni-plugins-6fzxn" Apr 16 17:40:34.256003 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255376 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6b045243-0c99-4991-8719-5efd0f27a340-system-cni-dir\") pod \"multus-additional-cni-plugins-6fzxn\" (UID: \"6b045243-0c99-4991-8719-5efd0f27a340\") " pod="openshift-multus/multus-additional-cni-plugins-6fzxn" Apr 16 17:40:34.256867 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255403 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-host-var-lib-cni-multus\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.256867 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255412 2560 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-multus-socket-dir-parent\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.256867 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255413 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-host-run-multus-certs\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.256867 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255430 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/55b71eac-1a4c-4273-baac-ae7691dbf264-run\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.256867 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255461 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55b71eac-1a4c-4273-baac-ae7691dbf264-lib-modules\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.256867 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255469 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-hostroot\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.256867 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255471 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f4e77261-e614-4a80-bbc5-28200547728b-cni-binary-copy\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.256867 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255513 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/55b71eac-1a4c-4273-baac-ae7691dbf264-run\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.256867 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255519 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d991b7f2-6a61-4e7d-aa21-5a9bfbd3542e-hosts-file\") pod \"node-resolver-lh7ql\" (UID: \"d991b7f2-6a61-4e7d-aa21-5a9bfbd3542e\") " pod="openshift-dns/node-resolver-lh7ql" Apr 16 17:40:34.256867 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255534 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-system-cni-dir\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.256867 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255558 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/6b045243-0c99-4991-8719-5efd0f27a340-system-cni-dir\") pod \"multus-additional-cni-plugins-6fzxn\" (UID: \"6b045243-0c99-4991-8719-5efd0f27a340\") " pod="openshift-multus/multus-additional-cni-plugins-6fzxn" Apr 16 17:40:34.256867 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255563 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-multus-conf-dir\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.256867 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255538 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-etc-kubernetes\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.256867 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255574 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5d3784fe-3481-43df-9a89-dda624c566b8-host-slash\") pod \"iptables-alerter-zmp7s\" (UID: \"5d3784fe-3481-43df-9a89-dda624c566b8\") " pod="openshift-network-operator/iptables-alerter-zmp7s" Apr 16 17:40:34.256867 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255580 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-host-var-lib-cni-multus\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.256867 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255616 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/55b71eac-1a4c-4273-baac-ae7691dbf264-var-lib-kubelet\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.256867 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255621 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f4e77261-e614-4a80-bbc5-28200547728b-os-release\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.256867 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255653 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/5d3784fe-3481-43df-9a89-dda624c566b8-iptables-alerter-script\") pod \"iptables-alerter-zmp7s\" (UID: \"5d3784fe-3481-43df-9a89-dda624c566b8\") " pod="openshift-network-operator/iptables-alerter-zmp7s" Apr 16 17:40:34.257568 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.255792 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6b045243-0c99-4991-8719-5efd0f27a340-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6fzxn\" (UID: \"6b045243-0c99-4991-8719-5efd0f27a340\") " pod="openshift-multus/multus-additional-cni-plugins-6fzxn" Apr 16 17:40:34.257568 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.257227 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"tmp\" (UniqueName: \"kubernetes.io/empty-dir/55b71eac-1a4c-4273-baac-ae7691dbf264-tmp\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.257568 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.257328 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/55b71eac-1a4c-4273-baac-ae7691dbf264-etc-tuned\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.266727 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.266667 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpk7l\" (UniqueName: \"kubernetes.io/projected/d991b7f2-6a61-4e7d-aa21-5a9bfbd3542e-kube-api-access-cpk7l\") pod \"node-resolver-lh7ql\" (UID: \"d991b7f2-6a61-4e7d-aa21-5a9bfbd3542e\") " pod="openshift-dns/node-resolver-lh7ql" Apr 16 17:40:34.267333 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.267303 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwcj8\" (UniqueName: \"kubernetes.io/projected/55b71eac-1a4c-4273-baac-ae7691dbf264-kube-api-access-mwcj8\") pod \"tuned-vn696\" (UID: \"55b71eac-1a4c-4273-baac-ae7691dbf264\") " pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.270637 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.270614 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xwzs\" (UniqueName: \"kubernetes.io/projected/6b045243-0c99-4991-8719-5efd0f27a340-kube-api-access-5xwzs\") pod \"multus-additional-cni-plugins-6fzxn\" (UID: \"6b045243-0c99-4991-8719-5efd0f27a340\") " pod="openshift-multus/multus-additional-cni-plugins-6fzxn" Apr 16 17:40:34.271187 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.271169 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhz5w\" (UniqueName: \"kubernetes.io/projected/5d3784fe-3481-43df-9a89-dda624c566b8-kube-api-access-bhz5w\") pod \"iptables-alerter-zmp7s\" (UID: \"5d3784fe-3481-43df-9a89-dda624c566b8\") " pod="openshift-network-operator/iptables-alerter-zmp7s" Apr 16 17:40:34.271542 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.271514 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkl2v\" (UniqueName: \"kubernetes.io/projected/f4e77261-e614-4a80-bbc5-28200547728b-kube-api-access-mkl2v\") pod \"multus-trqz7\" (UID: \"f4e77261-e614-4a80-bbc5-28200547728b\") " pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.303260 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.303231 2560 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Apr 16 17:40:34.343141 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.343109 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:34.353898 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.353863 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-nppj4" Apr 16 17:40:34.362645 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.362623 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-885h9" Apr 16 17:40:34.369244 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.369225 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-m85xh" Apr 16 17:40:34.377789 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.377770 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-zmp7s" Apr 16 17:40:34.384366 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.384345 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-vn696" Apr 16 17:40:34.392908 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.392892 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-lh7ql" Apr 16 17:40:34.400423 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.400405 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-6fzxn" Apr 16 17:40:34.407017 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.406994 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-trqz7" Apr 16 17:40:34.658159 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.658077 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47012ffa-3deb-41b8-b770-fc4db562d87e-metrics-certs\") pod \"network-metrics-daemon-lx5nt\" (UID: \"47012ffa-3deb-41b8-b770-fc4db562d87e\") " pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:40:34.658302 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:34.658214 2560 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 17:40:34.658302 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:34.658282 2560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47012ffa-3deb-41b8-b770-fc4db562d87e-metrics-certs podName:47012ffa-3deb-41b8-b770-fc4db562d87e nodeName:}" failed. No retries permitted until 2026-04-16 17:40:35.658262581 +0000 UTC m=+4.021471437 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/47012ffa-3deb-41b8-b770-fc4db562d87e-metrics-certs") pod "network-metrics-daemon-lx5nt" (UID: "47012ffa-3deb-41b8-b770-fc4db562d87e") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 17:40:34.759044 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:34.759007 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sghmd\" (UniqueName: \"kubernetes.io/projected/29dc29ef-4848-44b6-bfa3-4a7545e874ce-kube-api-access-sghmd\") pod \"network-check-target-cszgd\" (UID: \"29dc29ef-4848-44b6-bfa3-4a7545e874ce\") " pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:40:34.759227 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:34.759196 2560 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 17:40:34.759227 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:34.759218 2560 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 17:40:34.759312 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:34.759248 2560 projected.go:194] Error preparing data for projected volume kube-api-access-sghmd for pod openshift-network-diagnostics/network-check-target-cszgd: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 17:40:34.759349 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:34.759312 2560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29dc29ef-4848-44b6-bfa3-4a7545e874ce-kube-api-access-sghmd podName:29dc29ef-4848-44b6-bfa3-4a7545e874ce nodeName:}" failed. No retries permitted until 2026-04-16 17:40:35.759292853 +0000 UTC m=+4.122501710 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-sghmd" (UniqueName: "kubernetes.io/projected/29dc29ef-4848-44b6-bfa3-4a7545e874ce-kube-api-access-sghmd") pod "network-check-target-cszgd" (UID: "29dc29ef-4848-44b6-bfa3-4a7545e874ce") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 17:40:34.867222 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:34.867147 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd991b7f2_6a61_4e7d_aa21_5a9bfbd3542e.slice/crio-d4a4720cdcfb77f6f408d7ca417f690c797aedad2795d630d690032dea42a97a WatchSource:0}: Error finding container d4a4720cdcfb77f6f408d7ca417f690c797aedad2795d630d690032dea42a97a: Status 404 returned error can't find the container with id d4a4720cdcfb77f6f408d7ca417f690c797aedad2795d630d690032dea42a97a Apr 16 17:40:34.868048 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:34.868016 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4e77261_e614_4a80_bbc5_28200547728b.slice/crio-00122421d30c0342bd85bfaa3e1668d4f48477abe41b320b9fed95aa5a5c2973 WatchSource:0}: Error finding container 00122421d30c0342bd85bfaa3e1668d4f48477abe41b320b9fed95aa5a5c2973: Status 404 returned error can't find the container with id 00122421d30c0342bd85bfaa3e1668d4f48477abe41b320b9fed95aa5a5c2973 Apr 16 17:40:34.871074 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:34.871053 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod739c3ada_3675_42f5_afa7_51eba65d8c7e.slice/crio-d6bfd53cfd7e085f016a7e0c5a86dc8007b504faa6bb6858c4da607e75cb321b WatchSource:0}: Error finding container d6bfd53cfd7e085f016a7e0c5a86dc8007b504faa6bb6858c4da607e75cb321b: Status 404 returned error can't find the container with id d6bfd53cfd7e085f016a7e0c5a86dc8007b504faa6bb6858c4da607e75cb321b Apr 16 17:40:34.873202 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:34.873180 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a9c6eb5_bf25_4c2f_aa81_042b6e2bd638.slice/crio-4036409cc115a4a059ed8f0cf3c2fbb0c2509448199fa749e58adaa5383dd047 WatchSource:0}: Error finding container 4036409cc115a4a059ed8f0cf3c2fbb0c2509448199fa749e58adaa5383dd047: Status 404 returned error can't find the container with id 4036409cc115a4a059ed8f0cf3c2fbb0c2509448199fa749e58adaa5383dd047 Apr 16 17:40:34.874731 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:34.874699 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d3784fe_3481_43df_9a89_dda624c566b8.slice/crio-3d855d8b6d88846c5ab9ee973ed6d48b22127b40564dd630bdb5bc8202815986 WatchSource:0}: Error finding container 3d855d8b6d88846c5ab9ee973ed6d48b22127b40564dd630bdb5bc8202815986: Status 404 returned error can't find the container with id 3d855d8b6d88846c5ab9ee973ed6d48b22127b40564dd630bdb5bc8202815986 Apr 16 17:40:34.876195 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:34.875916 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55b71eac_1a4c_4273_baac_ae7691dbf264.slice/crio-69f9969b261906ad3794c2f88dffdb6166e3f3f84b36fa25bceed2445e96e41d WatchSource:0}: Error finding 
container 69f9969b261906ad3794c2f88dffdb6166e3f3f84b36fa25bceed2445e96e41d: Status 404 returned error can't find the container with id 69f9969b261906ad3794c2f88dffdb6166e3f3f84b36fa25bceed2445e96e41d Apr 16 17:40:34.879065 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:34.878976 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b045243_0c99_4991_8719_5efd0f27a340.slice/crio-6a230c8ecb6d62066f349ad4588f9fc9844c20616116727b2e79755e6c9d6415 WatchSource:0}: Error finding container 6a230c8ecb6d62066f349ad4588f9fc9844c20616116727b2e79755e6c9d6415: Status 404 returned error can't find the container with id 6a230c8ecb6d62066f349ad4588f9fc9844c20616116727b2e79755e6c9d6415 Apr 16 17:40:34.879624 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:34.879583 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fba2eed_4c37_4bcf_a44a_229c6669c76f.slice/crio-dfae79d42c78058084f036a71cde3dbc26abbb507224115eeb5a4b124b3242a4 WatchSource:0}: Error finding container dfae79d42c78058084f036a71cde3dbc26abbb507224115eeb5a4b124b3242a4: Status 404 returned error can't find the container with id dfae79d42c78058084f036a71cde3dbc26abbb507224115eeb5a4b124b3242a4 Apr 16 17:40:34.881433 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:40:34.881327 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod455194d4_fde7_420d_8c1d_1e43000eb0a3.slice/crio-08535afecdab136442e118379be8fb060ce542dadefd5374cdef767a654d5c7d WatchSource:0}: Error finding container 08535afecdab136442e118379be8fb060ce542dadefd5374cdef767a654d5c7d: Status 404 returned error can't find the container with id 08535afecdab136442e118379be8fb060ce542dadefd5374cdef767a654d5c7d Apr 16 17:40:35.077521 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:35.077330 2560 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-15 17:35:33 +0000 UTC" deadline="2027-10-06 08:14:53.322429676 +0000 UTC" Apr 16 17:40:35.077521 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:35.077519 2560 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="12902h34m18.24491599s" Apr 16 17:40:35.157936 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:35.157901 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:40:35.158117 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:35.158038 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cszgd" podUID="29dc29ef-4848-44b6-bfa3-4a7545e874ce" Apr 16 17:40:35.158117 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:35.158089 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:40:35.158285 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:35.158207 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lx5nt" podUID="47012ffa-3deb-41b8-b770-fc4db562d87e" Apr 16 17:40:35.167189 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:35.167161 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-m85xh" event={"ID":"455194d4-fde7-420d-8c1d-1e43000eb0a3","Type":"ContainerStarted","Data":"08535afecdab136442e118379be8fb060ce542dadefd5374cdef767a654d5c7d"} Apr 16 17:40:35.168128 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:35.168102 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-vn696" event={"ID":"55b71eac-1a4c-4273-baac-ae7691dbf264","Type":"ContainerStarted","Data":"69f9969b261906ad3794c2f88dffdb6166e3f3f84b36fa25bceed2445e96e41d"} Apr 16 17:40:35.169042 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:35.169012 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-trqz7" event={"ID":"f4e77261-e614-4a80-bbc5-28200547728b","Type":"ContainerStarted","Data":"00122421d30c0342bd85bfaa3e1668d4f48477abe41b320b9fed95aa5a5c2973"} Apr 16 17:40:35.169940 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:35.169918 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-lh7ql" event={"ID":"d991b7f2-6a61-4e7d-aa21-5a9bfbd3542e","Type":"ContainerStarted","Data":"d4a4720cdcfb77f6f408d7ca417f690c797aedad2795d630d690032dea42a97a"} Apr 16 17:40:35.179035 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:35.178929 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-134-233.ec2.internal" event={"ID":"dbf1483115d1b5ae94f569f1ec8a827f","Type":"ContainerStarted","Data":"7e69a0b101ace689deb09fbadcf34ef07c79d00d3adeeb891f8a1e65decbf25c"} Apr 16 17:40:35.180697 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:35.180672 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-885h9" event={"ID":"7fba2eed-4c37-4bcf-a44a-229c6669c76f","Type":"ContainerStarted","Data":"dfae79d42c78058084f036a71cde3dbc26abbb507224115eeb5a4b124b3242a4"} Apr 16 17:40:35.185038 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:35.185017 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6fzxn" event={"ID":"6b045243-0c99-4991-8719-5efd0f27a340","Type":"ContainerStarted","Data":"6a230c8ecb6d62066f349ad4588f9fc9844c20616116727b2e79755e6c9d6415"} Apr 16 17:40:35.188870 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:35.188845 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-zmp7s" event={"ID":"5d3784fe-3481-43df-9a89-dda624c566b8","Type":"ContainerStarted","Data":"3d855d8b6d88846c5ab9ee973ed6d48b22127b40564dd630bdb5bc8202815986"} Apr 16 17:40:35.189996 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:35.189970 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" 
event={"ID":"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638","Type":"ContainerStarted","Data":"4036409cc115a4a059ed8f0cf3c2fbb0c2509448199fa749e58adaa5383dd047"} Apr 16 17:40:35.191512 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:35.191488 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-nppj4" event={"ID":"739c3ada-3675-42f5-afa7-51eba65d8c7e","Type":"ContainerStarted","Data":"d6bfd53cfd7e085f016a7e0c5a86dc8007b504faa6bb6858c4da607e75cb321b"} Apr 16 17:40:35.667080 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:35.666993 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47012ffa-3deb-41b8-b770-fc4db562d87e-metrics-certs\") pod \"network-metrics-daemon-lx5nt\" (UID: \"47012ffa-3deb-41b8-b770-fc4db562d87e\") " pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:40:35.667238 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:35.667166 2560 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 17:40:35.667238 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:35.667236 2560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47012ffa-3deb-41b8-b770-fc4db562d87e-metrics-certs podName:47012ffa-3deb-41b8-b770-fc4db562d87e nodeName:}" failed. No retries permitted until 2026-04-16 17:40:37.66721615 +0000 UTC m=+6.030425027 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/47012ffa-3deb-41b8-b770-fc4db562d87e-metrics-certs") pod "network-metrics-daemon-lx5nt" (UID: "47012ffa-3deb-41b8-b770-fc4db562d87e") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 17:40:35.720113 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:35.719947 2560 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Apr 16 17:40:35.771044 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:35.770395 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sghmd\" (UniqueName: \"kubernetes.io/projected/29dc29ef-4848-44b6-bfa3-4a7545e874ce-kube-api-access-sghmd\") pod \"network-check-target-cszgd\" (UID: \"29dc29ef-4848-44b6-bfa3-4a7545e874ce\") " pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:40:35.771044 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:35.770571 2560 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 17:40:35.771044 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:35.770594 2560 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 17:40:35.771044 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:35.770608 2560 projected.go:194] Error preparing data for projected volume kube-api-access-sghmd for pod openshift-network-diagnostics/network-check-target-cszgd: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 17:40:35.771044 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:35.770666 2560 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/29dc29ef-4848-44b6-bfa3-4a7545e874ce-kube-api-access-sghmd podName:29dc29ef-4848-44b6-bfa3-4a7545e874ce nodeName:}" failed. No retries permitted until 2026-04-16 17:40:37.77064852 +0000 UTC m=+6.133857385 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-sghmd" (UniqueName: "kubernetes.io/projected/29dc29ef-4848-44b6-bfa3-4a7545e874ce-kube-api-access-sghmd") pod "network-check-target-cszgd" (UID: "29dc29ef-4848-44b6-bfa3-4a7545e874ce") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 17:40:36.216911 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:36.216540 2560 generic.go:358] "Generic (PLEG): container finished" podID="9f86db7b6fd9ec4db0b09003244343a1" containerID="eb3be1c0731bc7b8d1856f00fc5b2ef13225619d9e63b1495b59fd6f52728dc0" exitCode=0 Apr 16 17:40:36.216911 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:36.216742 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-134-233.ec2.internal" event={"ID":"9f86db7b6fd9ec4db0b09003244343a1","Type":"ContainerDied","Data":"eb3be1c0731bc7b8d1856f00fc5b2ef13225619d9e63b1495b59fd6f52728dc0"} Apr 16 17:40:36.234875 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:36.233911 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-proxy-ip-10-0-134-233.ec2.internal" podStartSLOduration=3.233889956 podStartE2EDuration="3.233889956s" podCreationTimestamp="2026-04-16 17:40:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:40:35.19430664 +0000 UTC m=+3.557515516" watchObservedRunningTime="2026-04-16 17:40:36.233889956 +0000 UTC m=+4.597098827" Apr 16 17:40:37.157678 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:37.157569 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:40:37.157678 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:37.157601 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:40:37.157909 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:37.157701 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cszgd" podUID="29dc29ef-4848-44b6-bfa3-4a7545e874ce" Apr 16 17:40:37.157909 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:37.157801 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lx5nt" podUID="47012ffa-3deb-41b8-b770-fc4db562d87e" Apr 16 17:40:37.224795 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:37.224757 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-134-233.ec2.internal" event={"ID":"9f86db7b6fd9ec4db0b09003244343a1","Type":"ContainerStarted","Data":"9967c878f4c0099d514a83c3cb086c4a5dc135aff9d6e0ded1f60f17c75cddd5"} Apr 16 17:40:37.689173 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:37.689135 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47012ffa-3deb-41b8-b770-fc4db562d87e-metrics-certs\") pod \"network-metrics-daemon-lx5nt\" (UID: \"47012ffa-3deb-41b8-b770-fc4db562d87e\") " pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:40:37.689355 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:37.689287 2560 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 17:40:37.689355 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:37.689349 2560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47012ffa-3deb-41b8-b770-fc4db562d87e-metrics-certs podName:47012ffa-3deb-41b8-b770-fc4db562d87e nodeName:}" failed. No retries permitted until 2026-04-16 17:40:41.689331545 +0000 UTC m=+10.052540398 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/47012ffa-3deb-41b8-b770-fc4db562d87e-metrics-certs") pod "network-metrics-daemon-lx5nt" (UID: "47012ffa-3deb-41b8-b770-fc4db562d87e") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 17:40:37.789749 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:37.789709 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sghmd\" (UniqueName: \"kubernetes.io/projected/29dc29ef-4848-44b6-bfa3-4a7545e874ce-kube-api-access-sghmd\") pod \"network-check-target-cszgd\" (UID: \"29dc29ef-4848-44b6-bfa3-4a7545e874ce\") " pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:40:37.789963 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:37.789909 2560 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 17:40:37.789963 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:37.789932 2560 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 17:40:37.789963 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:37.789946 2560 projected.go:194] Error preparing data for projected volume kube-api-access-sghmd for pod openshift-network-diagnostics/network-check-target-cszgd: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 17:40:37.790123 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:37.790005 2560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29dc29ef-4848-44b6-bfa3-4a7545e874ce-kube-api-access-sghmd podName:29dc29ef-4848-44b6-bfa3-4a7545e874ce nodeName:}" failed. No retries permitted until 2026-04-16 17:40:41.789986931 +0000 UTC m=+10.153195800 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-sghmd" (UniqueName: "kubernetes.io/projected/29dc29ef-4848-44b6-bfa3-4a7545e874ce-kube-api-access-sghmd") pod "network-check-target-cszgd" (UID: "29dc29ef-4848-44b6-bfa3-4a7545e874ce") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 17:40:39.158342 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:39.158309 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:40:39.158824 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:39.158428 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lx5nt" podUID="47012ffa-3deb-41b8-b770-fc4db562d87e" Apr 16 17:40:39.158824 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:39.158509 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:40:39.158824 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:39.158613 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cszgd" podUID="29dc29ef-4848-44b6-bfa3-4a7545e874ce" Apr 16 17:40:40.788889 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:40.788378 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-134-233.ec2.internal" podStartSLOduration=7.788357138 podStartE2EDuration="7.788357138s" podCreationTimestamp="2026-04-16 17:40:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:40:37.245553918 +0000 UTC m=+5.608762791" watchObservedRunningTime="2026-04-16 17:40:40.788357138 +0000 UTC m=+9.151566012" Apr 16 17:40:40.789391 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:40.789021 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kube-system/global-pull-secret-syncer-hzdjn"] Apr 16 17:40:40.791503 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:40.791415 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:40:40.791629 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:40.791578 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-hzdjn" podUID="0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5" Apr 16 17:40:40.915794 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:40.915744 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5-kubelet-config\") pod \"global-pull-secret-syncer-hzdjn\" (UID: \"0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5\") " pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:40:40.915794 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:40.915799 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5-dbus\") pod \"global-pull-secret-syncer-hzdjn\" (UID: \"0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5\") " pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:40:40.916057 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:40.915897 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5-original-pull-secret\") pod \"global-pull-secret-syncer-hzdjn\" (UID: \"0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5\") " pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:40:41.016329 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:41.016281 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5-kubelet-config\") pod \"global-pull-secret-syncer-hzdjn\" (UID: \"0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5\") " pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:40:41.016495 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:41.016339 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5-dbus\") pod \"global-pull-secret-syncer-hzdjn\" (UID: \"0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5\") " pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:40:41.016495 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:41.016375 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5-original-pull-secret\") pod \"global-pull-secret-syncer-hzdjn\" (UID: \"0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5\") " pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:40:41.016611 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:41.016511 2560 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 16 17:40:41.016611 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:41.016560 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5-dbus\") pod \"global-pull-secret-syncer-hzdjn\" (UID: \"0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5\") " pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:40:41.016611 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:41.016578 2560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5-original-pull-secret podName:0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5 nodeName:}" failed. 
No retries permitted until 2026-04-16 17:40:41.516558024 +0000 UTC m=+9.879766881 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5-original-pull-secret") pod "global-pull-secret-syncer-hzdjn" (UID: "0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5") : object "kube-system"/"original-pull-secret" not registered Apr 16 17:40:41.016774 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:41.016633 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5-kubelet-config\") pod \"global-pull-secret-syncer-hzdjn\" (UID: \"0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5\") " pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:40:41.158160 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:41.158077 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:40:41.158318 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:41.158077 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:40:41.158318 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:41.158232 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lx5nt" podUID="47012ffa-3deb-41b8-b770-fc4db562d87e" Apr 16 17:40:41.158318 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:41.158249 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cszgd" podUID="29dc29ef-4848-44b6-bfa3-4a7545e874ce" Apr 16 17:40:41.520702 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:41.520599 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5-original-pull-secret\") pod \"global-pull-secret-syncer-hzdjn\" (UID: \"0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5\") " pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:40:41.520895 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:41.520742 2560 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 16 17:40:41.520895 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:41.520818 2560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5-original-pull-secret podName:0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5 nodeName:}" failed. No retries permitted until 2026-04-16 17:40:42.520795853 +0000 UTC m=+10.884004716 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5-original-pull-secret") pod "global-pull-secret-syncer-hzdjn" (UID: "0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5") : object "kube-system"/"original-pull-secret" not registered Apr 16 17:40:41.721778 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:41.721738 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47012ffa-3deb-41b8-b770-fc4db562d87e-metrics-certs\") pod \"network-metrics-daemon-lx5nt\" (UID: \"47012ffa-3deb-41b8-b770-fc4db562d87e\") " pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:40:41.721993 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:41.721942 2560 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 17:40:41.722055 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:41.722012 2560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47012ffa-3deb-41b8-b770-fc4db562d87e-metrics-certs podName:47012ffa-3deb-41b8-b770-fc4db562d87e nodeName:}" failed. No retries permitted until 2026-04-16 17:40:49.72199314 +0000 UTC m=+18.085201997 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/47012ffa-3deb-41b8-b770-fc4db562d87e-metrics-certs") pod "network-metrics-daemon-lx5nt" (UID: "47012ffa-3deb-41b8-b770-fc4db562d87e") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 17:40:41.822500 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:41.822402 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sghmd\" (UniqueName: \"kubernetes.io/projected/29dc29ef-4848-44b6-bfa3-4a7545e874ce-kube-api-access-sghmd\") pod \"network-check-target-cszgd\" (UID: \"29dc29ef-4848-44b6-bfa3-4a7545e874ce\") " pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:40:41.822944 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:41.822576 2560 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 17:40:41.822944 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:41.822598 2560 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 17:40:41.822944 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:41.822609 2560 projected.go:194] Error preparing data for projected volume kube-api-access-sghmd for pod openshift-network-diagnostics/network-check-target-cszgd: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 17:40:41.822944 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:41.822657 2560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29dc29ef-4848-44b6-bfa3-4a7545e874ce-kube-api-access-sghmd podName:29dc29ef-4848-44b6-bfa3-4a7545e874ce nodeName:}" failed. No retries permitted until 2026-04-16 17:40:49.82264359 +0000 UTC m=+18.185852442 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-sghmd" (UniqueName: "kubernetes.io/projected/29dc29ef-4848-44b6-bfa3-4a7545e874ce-kube-api-access-sghmd") pod "network-check-target-cszgd" (UID: "29dc29ef-4848-44b6-bfa3-4a7545e874ce") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 17:40:42.158633 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:42.158603 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:40:42.158763 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:42.158726 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-hzdjn" podUID="0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5" Apr 16 17:40:42.528752 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:42.528190 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5-original-pull-secret\") pod \"global-pull-secret-syncer-hzdjn\" (UID: \"0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5\") " pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:40:42.528752 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:42.528327 2560 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 16 17:40:42.528752 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:42.528410 2560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5-original-pull-secret podName:0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5 nodeName:}" failed. No retries permitted until 2026-04-16 17:40:44.528394584 +0000 UTC m=+12.891603440 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5-original-pull-secret") pod "global-pull-secret-syncer-hzdjn" (UID: "0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5") : object "kube-system"/"original-pull-secret" not registered Apr 16 17:40:43.157781 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:43.157750 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:40:43.158241 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:43.157916 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lx5nt" podUID="47012ffa-3deb-41b8-b770-fc4db562d87e" Apr 16 17:40:43.158241 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:43.158051 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:40:43.158241 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:43.158146 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cszgd" podUID="29dc29ef-4848-44b6-bfa3-4a7545e874ce" Apr 16 17:40:44.158050 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:44.157965 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:40:44.158526 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:44.158097 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-hzdjn" podUID="0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5" Apr 16 17:40:44.542962 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:44.542807 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5-original-pull-secret\") pod \"global-pull-secret-syncer-hzdjn\" (UID: \"0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5\") " pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:40:44.542962 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:44.542941 2560 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 16 17:40:44.543192 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:44.543015 2560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5-original-pull-secret podName:0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5 nodeName:}" failed. No retries permitted until 2026-04-16 17:40:48.542995629 +0000 UTC m=+16.906204482 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5-original-pull-secret") pod "global-pull-secret-syncer-hzdjn" (UID: "0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5") : object "kube-system"/"original-pull-secret" not registered Apr 16 17:40:45.157892 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:45.157856 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:40:45.157892 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:45.157895 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:40:45.158080 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:45.157971 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-cszgd" podUID="29dc29ef-4848-44b6-bfa3-4a7545e874ce" Apr 16 17:40:45.158406 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:45.158094 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lx5nt" podUID="47012ffa-3deb-41b8-b770-fc4db562d87e" Apr 16 17:40:46.157520 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:46.157478 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:40:46.157730 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:46.157618 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-hzdjn" podUID="0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5" Apr 16 17:40:47.158235 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:47.158194 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:40:47.158235 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:47.158223 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:40:47.158746 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:47.158341 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lx5nt" podUID="47012ffa-3deb-41b8-b770-fc4db562d87e" Apr 16 17:40:47.158746 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:47.158458 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cszgd" podUID="29dc29ef-4848-44b6-bfa3-4a7545e874ce" Apr 16 17:40:48.158021 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:48.157987 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:40:48.158224 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:48.158119 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-hzdjn" podUID="0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5" Apr 16 17:40:48.574245 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:48.574129 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5-original-pull-secret\") pod \"global-pull-secret-syncer-hzdjn\" (UID: \"0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5\") " pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:40:48.574803 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:48.574262 2560 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 16 17:40:48.574803 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:48.574327 2560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5-original-pull-secret podName:0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5 nodeName:}" failed. No retries permitted until 2026-04-16 17:40:56.574308759 +0000 UTC m=+24.937517615 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5-original-pull-secret") pod "global-pull-secret-syncer-hzdjn" (UID: "0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5") : object "kube-system"/"original-pull-secret" not registered Apr 16 17:40:49.157916 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:49.157875 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:40:49.158093 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:49.157885 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:40:49.158093 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:49.158017 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cszgd" podUID="29dc29ef-4848-44b6-bfa3-4a7545e874ce" Apr 16 17:40:49.158093 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:49.158072 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lx5nt" podUID="47012ffa-3deb-41b8-b770-fc4db562d87e" Apr 16 17:40:49.782235 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:49.782200 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47012ffa-3deb-41b8-b770-fc4db562d87e-metrics-certs\") pod \"network-metrics-daemon-lx5nt\" (UID: \"47012ffa-3deb-41b8-b770-fc4db562d87e\") " pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:40:49.782712 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:49.782338 2560 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 17:40:49.782712 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:49.782394 2560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47012ffa-3deb-41b8-b770-fc4db562d87e-metrics-certs podName:47012ffa-3deb-41b8-b770-fc4db562d87e nodeName:}" failed. No retries permitted until 2026-04-16 17:41:05.782380074 +0000 UTC m=+34.145588926 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/47012ffa-3deb-41b8-b770-fc4db562d87e-metrics-certs") pod "network-metrics-daemon-lx5nt" (UID: "47012ffa-3deb-41b8-b770-fc4db562d87e") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 17:40:49.883117 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:49.883078 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sghmd\" (UniqueName: \"kubernetes.io/projected/29dc29ef-4848-44b6-bfa3-4a7545e874ce-kube-api-access-sghmd\") pod \"network-check-target-cszgd\" (UID: \"29dc29ef-4848-44b6-bfa3-4a7545e874ce\") " pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:40:49.883306 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:49.883255 2560 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 17:40:49.883306 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:49.883276 2560 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 17:40:49.883306 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:49.883286 2560 projected.go:194] Error preparing data for projected volume kube-api-access-sghmd for pod openshift-network-diagnostics/network-check-target-cszgd: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 17:40:49.883465 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:49.883341 2560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29dc29ef-4848-44b6-bfa3-4a7545e874ce-kube-api-access-sghmd podName:29dc29ef-4848-44b6-bfa3-4a7545e874ce nodeName:}" failed. No retries permitted until 2026-04-16 17:41:05.883320925 +0000 UTC m=+34.246529785 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-sghmd" (UniqueName: "kubernetes.io/projected/29dc29ef-4848-44b6-bfa3-4a7545e874ce-kube-api-access-sghmd") pod "network-check-target-cszgd" (UID: "29dc29ef-4848-44b6-bfa3-4a7545e874ce") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 17:40:50.157539 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:50.157453 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:40:50.157702 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:50.157580 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-hzdjn" podUID="0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5" Apr 16 17:40:51.157524 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:51.157482 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:40:51.157992 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:51.157482 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:40:51.157992 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:51.157614 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cszgd" podUID="29dc29ef-4848-44b6-bfa3-4a7545e874ce" Apr 16 17:40:51.157992 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:51.157674 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lx5nt" podUID="47012ffa-3deb-41b8-b770-fc4db562d87e" Apr 16 17:40:52.159466 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:52.159122 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:40:52.159466 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:52.159428 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-hzdjn" podUID="0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5" Apr 16 17:40:52.251439 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:52.251391 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-trqz7" event={"ID":"f4e77261-e614-4a80-bbc5-28200547728b","Type":"ContainerStarted","Data":"06b1037b3a448d40025e8bdd92b2b7066eed0cf0dc03245254768e23f10fcb87"} Apr 16 17:40:52.253020 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:52.252859 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6fzxn" event={"ID":"6b045243-0c99-4991-8719-5efd0f27a340","Type":"ContainerStarted","Data":"981bd1f930ab4d0773e9741f14b3afd0b829539c79b111787b2774b4285df2b8"} Apr 16 17:40:52.257094 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:52.257068 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-m85xh" event={"ID":"455194d4-fde7-420d-8c1d-1e43000eb0a3","Type":"ContainerStarted","Data":"96a5258bb263ff3eff2fb9c19e1c86c9df2022a12e836f236cbbd731413ed9fb"} Apr 16 17:40:52.259060 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:52.259023 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-vn696" event={"ID":"55b71eac-1a4c-4273-baac-ae7691dbf264","Type":"ContainerStarted","Data":"407898d813343f581c4dbb7013fbb0f4a014076ee98a9a012b387558ee78cfdf"} Apr 16 17:40:52.287800 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:52.287715 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-trqz7" podStartSLOduration=3.181387872 podStartE2EDuration="20.287700531s" podCreationTimestamp="2026-04-16 17:40:32 +0000 UTC" firstStartedPulling="2026-04-16 17:40:34.870163571 +0000 UTC m=+3.233372422" lastFinishedPulling="2026-04-16 17:40:51.976476213 +0000 UTC m=+20.339685081" observedRunningTime="2026-04-16 17:40:52.272051284 +0000 UTC m=+20.635260162" watchObservedRunningTime="2026-04-16 17:40:52.287700531 +0000 UTC m=+20.650909411" Apr 16 17:40:52.307977 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:52.307933 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-m85xh" podStartSLOduration=11.331078235 podStartE2EDuration="20.30791555s" podCreationTimestamp="2026-04-16 17:40:32 +0000 UTC" firstStartedPulling="2026-04-16 17:40:34.883058792 +0000 UTC m=+3.246267644" lastFinishedPulling="2026-04-16 17:40:43.859896104 +0000 UTC m=+12.223104959" observedRunningTime="2026-04-16 17:40:52.287671056 +0000 UTC m=+20.650879929" watchObservedRunningTime="2026-04-16 17:40:52.30791555 +0000 UTC m=+20.671124424" Apr 16 17:40:52.308279 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:52.308173 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-vn696" podStartSLOduration=3.212657735 podStartE2EDuration="20.308162345s" podCreationTimestamp="2026-04-16 17:40:32 +0000 UTC" firstStartedPulling="2026-04-16 17:40:34.881109746 +0000 UTC m=+3.244318598" lastFinishedPulling="2026-04-16 17:40:51.976614339 +0000 UTC m=+20.339823208" observedRunningTime="2026-04-16 17:40:52.30728331 +0000 UTC m=+20.670492186" watchObservedRunningTime="2026-04-16 17:40:52.308162345 +0000 UTC m=+20.671371218" Apr 16 17:40:53.158100 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:53.157901 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:40:53.158258 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:53.157902 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:40:53.158258 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:53.158192 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lx5nt" podUID="47012ffa-3deb-41b8-b770-fc4db562d87e" Apr 16 17:40:53.158362 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:53.158254 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cszgd" podUID="29dc29ef-4848-44b6-bfa3-4a7545e874ce" Apr 16 17:40:53.262195 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:53.262098 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-lh7ql" event={"ID":"d991b7f2-6a61-4e7d-aa21-5a9bfbd3542e","Type":"ContainerStarted","Data":"c1395aafc1141b192981db48f49e680054254622cac2539f4292cfbf05b0f82c"} Apr 16 17:40:53.263467 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:53.263435 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-885h9" event={"ID":"7fba2eed-4c37-4bcf-a44a-229c6669c76f","Type":"ContainerStarted","Data":"8444ace4348ec15b14adbff01ce85381a044b38b5bb7ffdeee7e23b0a89a2ad3"} Apr 16 17:40:53.264799 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:53.264737 2560 generic.go:358] "Generic (PLEG): container finished" podID="6b045243-0c99-4991-8719-5efd0f27a340" containerID="981bd1f930ab4d0773e9741f14b3afd0b829539c79b111787b2774b4285df2b8" exitCode=0 Apr 16 17:40:53.264964 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:53.264809 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6fzxn" event={"ID":"6b045243-0c99-4991-8719-5efd0f27a340","Type":"ContainerDied","Data":"981bd1f930ab4d0773e9741f14b3afd0b829539c79b111787b2774b4285df2b8"} Apr 16 17:40:53.266384 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:53.266272 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-zmp7s" event={"ID":"5d3784fe-3481-43df-9a89-dda624c566b8","Type":"ContainerStarted","Data":"82ba022d8c8d44f173bd7c92aef55ef6870ba71e6e79488c7f6b4c6dda022e29"} Apr 16 17:40:53.269037 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:53.269016 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" event={"ID":"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638","Type":"ContainerStarted","Data":"e4972fb8f7fa564a6e21c5a9be4dbfc3c03b9be13ef00f97bc823e449eead6f3"} Apr 16 17:40:53.269126 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:53.269045 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" 
event={"ID":"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638","Type":"ContainerStarted","Data":"bc69a2e633d9d87706e882773535cf38db97b11617a381565c4346ec8904b3a9"} Apr 16 17:40:53.269126 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:53.269058 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" event={"ID":"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638","Type":"ContainerStarted","Data":"1e7ed8ad4410bb9301e1debbbe1764aeaab8beaea966f932abd2fa6f868b0ea8"} Apr 16 17:40:53.269126 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:53.269067 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" event={"ID":"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638","Type":"ContainerStarted","Data":"3d9d47526090bb35033156b3bf1a4fb9c31cc9d0469e4fdec22ac70165ff44c5"} Apr 16 17:40:53.269126 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:53.269075 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" event={"ID":"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638","Type":"ContainerStarted","Data":"d9c668e148ba842f14f42d4f1309c9bf6913e4a1d856daeee3e1356d6df542fd"} Apr 16 17:40:53.269126 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:53.269083 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" event={"ID":"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638","Type":"ContainerStarted","Data":"ec94c45495b77908d26b0152644e19cbbada6c586b5b7a24c6827477b32366c5"} Apr 16 17:40:53.270260 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:53.270235 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-nppj4" event={"ID":"739c3ada-3675-42f5-afa7-51eba65d8c7e","Type":"ContainerStarted","Data":"f30f9ba058b9300ec63242374c38846e2d70ffd7ff35ab25d7bfbf72d70f3caf"} Apr 16 17:40:53.277610 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:53.277575 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-lh7ql" podStartSLOduration=4.362337045 podStartE2EDuration="21.277564792s" podCreationTimestamp="2026-04-16 17:40:32 +0000 UTC" firstStartedPulling="2026-04-16 17:40:34.869872617 +0000 UTC m=+3.233081483" lastFinishedPulling="2026-04-16 17:40:51.785100366 +0000 UTC m=+20.148309230" observedRunningTime="2026-04-16 17:40:53.277231549 +0000 UTC m=+21.640440422" watchObservedRunningTime="2026-04-16 17:40:53.277564792 +0000 UTC m=+21.640773665" Apr 16 17:40:53.307798 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:53.307773 2560 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock" Apr 16 17:40:53.311952 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:53.311917 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/iptables-alerter-zmp7s" podStartSLOduration=4.235532864 podStartE2EDuration="21.311904668s" podCreationTimestamp="2026-04-16 17:40:32 +0000 UTC" firstStartedPulling="2026-04-16 17:40:34.8792785 +0000 UTC m=+3.242487354" lastFinishedPulling="2026-04-16 17:40:51.955650304 +0000 UTC m=+20.318859158" observedRunningTime="2026-04-16 17:40:53.290976532 +0000 UTC m=+21.654185406" watchObservedRunningTime="2026-04-16 17:40:53.311904668 +0000 UTC m=+21.675113541" Apr 16 17:40:53.325912 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:53.325874 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/konnectivity-agent-nppj4" podStartSLOduration=4.316315493 podStartE2EDuration="21.32586183s" podCreationTimestamp="2026-04-16 17:40:32 +0000 UTC" firstStartedPulling="2026-04-16 17:40:34.872877715 +0000 UTC m=+3.236086565" lastFinishedPulling="2026-04-16 17:40:51.882424033 +0000 UTC m=+20.245632902" observedRunningTime="2026-04-16 17:40:53.325501092 +0000 UTC m=+21.688709965" watchObservedRunningTime="2026-04-16 17:40:53.32586183 +0000 UTC m=+21.689070704" Apr 16 17:40:53.990278 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:53.990243 2560 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/konnectivity-agent-nppj4" Apr 16 17:40:53.990987 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:53.990966 2560 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/konnectivity-agent-nppj4" Apr 16 17:40:54.108573 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:54.108477 2560 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock","Timestamp":"2026-04-16T17:40:53.307793516Z","UUID":"2dfdc02f-4767-4104-898a-a036604f70cd","Handler":null,"Name":"","Endpoint":""} Apr 16 17:40:54.111454 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:54.111426 2560 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: ebs.csi.aws.com endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock versions: 1.0.0 Apr 16 17:40:54.111562 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:54.111491 2560 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: ebs.csi.aws.com at endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock Apr 16 17:40:54.158243 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:54.158197 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:40:54.158411 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:54.158342 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-hzdjn" podUID="0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5" Apr 16 17:40:54.248653 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:54.248567 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-lh7ql_d991b7f2-6a61-4e7d-aa21-5a9bfbd3542e/dns-node-resolver/0.log" Apr 16 17:40:54.274352 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:54.274321 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-885h9" event={"ID":"7fba2eed-4c37-4bcf-a44a-229c6669c76f","Type":"ContainerStarted","Data":"393815e19b5de194868e77461d94bc01946d1a36645e1f006bc6cf42ba6bd761"} Apr 16 17:40:54.274352 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:54.274354 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-885h9" event={"ID":"7fba2eed-4c37-4bcf-a44a-229c6669c76f","Type":"ContainerStarted","Data":"be6e01299dd25680bc56c4a42271a0d4b5c967cab774005af1ef07d4bb38535d"} Apr 16 17:40:54.275049 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:54.275032 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kube-system/konnectivity-agent-nppj4" Apr 16 17:40:54.275572 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:54.275550 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/konnectivity-agent-nppj4" Apr 16 17:40:54.295162 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:54.295110 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-885h9" podStartSLOduration=3.083174405 podStartE2EDuration="22.295091775s" podCreationTimestamp="2026-04-16 17:40:32 +0000 UTC" firstStartedPulling="2026-04-16 17:40:34.881968567 +0000 UTC m=+3.245177422" lastFinishedPulling="2026-04-16 17:40:54.093885924 +0000 UTC m=+22.457094792" observedRunningTime="2026-04-16 17:40:54.294166788 +0000 UTC m=+22.657375674" watchObservedRunningTime="2026-04-16 17:40:54.295091775 +0000 UTC m=+22.658300652" Apr 16 17:40:55.037108 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:55.036881 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-m85xh_455194d4-fde7-420d-8c1d-1e43000eb0a3/node-ca/0.log" Apr 16 17:40:55.158170 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:55.158138 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:40:55.158353 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:55.158139 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:40:55.158353 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:55.158279 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lx5nt" podUID="47012ffa-3deb-41b8-b770-fc4db562d87e" Apr 16 17:40:55.158353 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:55.158337 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cszgd" podUID="29dc29ef-4848-44b6-bfa3-4a7545e874ce" Apr 16 17:40:55.280038 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:55.280002 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" event={"ID":"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638","Type":"ContainerStarted","Data":"b4f2ff35a04b265e64d436597cead177404357a75d29e9bcaa6acb3bfbf5cf44"} Apr 16 17:40:56.157495 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:56.157451 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:40:56.157701 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:56.157596 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-hzdjn" podUID="0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5" Apr 16 17:40:56.637122 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:56.637036 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5-original-pull-secret\") pod \"global-pull-secret-syncer-hzdjn\" (UID: \"0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5\") " pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:40:56.637724 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:56.637185 2560 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 16 17:40:56.637724 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:56.637248 2560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5-original-pull-secret podName:0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5 nodeName:}" failed. No retries permitted until 2026-04-16 17:41:12.637234099 +0000 UTC m=+41.000442953 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5-original-pull-secret") pod "global-pull-secret-syncer-hzdjn" (UID: "0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5") : object "kube-system"/"original-pull-secret" not registered Apr 16 17:40:57.157941 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:57.157909 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:40:57.158098 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:57.158016 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lx5nt" podUID="47012ffa-3deb-41b8-b770-fc4db562d87e" Apr 16 17:40:57.158098 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:57.158069 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:40:57.158186 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:57.158142 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cszgd" podUID="29dc29ef-4848-44b6-bfa3-4a7545e874ce" Apr 16 17:40:57.288349 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:57.287860 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" event={"ID":"5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638","Type":"ContainerStarted","Data":"45b2dd923da5308431399845c8be774eaffe53cbb04bd019bccbf473d6c01b53"} Apr 16 17:40:57.288349 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:57.288167 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:57.288349 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:57.288275 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:57.288349 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:57.288344 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:57.309342 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:57.307949 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:57.311922 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:57.311647 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:40:57.324001 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:57.322682 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" podStartSLOduration=7.928934818 podStartE2EDuration="25.322663105s" podCreationTimestamp="2026-04-16 17:40:32 +0000 UTC" firstStartedPulling="2026-04-16 17:40:34.876169544 +0000 UTC m=+3.239378394" lastFinishedPulling="2026-04-16 17:40:52.269897816 +0000 UTC m=+20.633106681" observedRunningTime="2026-04-16 17:40:57.321379159 +0000 UTC m=+25.684588032" watchObservedRunningTime="2026-04-16 17:40:57.322663105 +0000 UTC m=+25.685871995" Apr 16 17:40:58.158006 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:58.157968 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:40:58.158658 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:58.158103 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-hzdjn" podUID="0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5" Apr 16 17:40:58.290527 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:58.290489 2560 generic.go:358] "Generic (PLEG): container finished" podID="6b045243-0c99-4991-8719-5efd0f27a340" containerID="a13e17af5a18d8a1018d67023b6c1ab9e762ad4c7716ca5688df6f8d7deaa432" exitCode=0 Apr 16 17:40:58.290690 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:58.290529 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6fzxn" event={"ID":"6b045243-0c99-4991-8719-5efd0f27a340","Type":"ContainerDied","Data":"a13e17af5a18d8a1018d67023b6c1ab9e762ad4c7716ca5688df6f8d7deaa432"} Apr 16 17:40:59.157622 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:59.157579 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:40:59.157754 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:59.157579 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:40:59.157754 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:59.157700 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cszgd" podUID="29dc29ef-4848-44b6-bfa3-4a7545e874ce" Apr 16 17:40:59.157816 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:40:59.157774 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lx5nt" podUID="47012ffa-3deb-41b8-b770-fc4db562d87e" Apr 16 17:40:59.293736 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:59.293701 2560 generic.go:358] "Generic (PLEG): container finished" podID="6b045243-0c99-4991-8719-5efd0f27a340" containerID="d38e16fe675645b2bc386a46fc7bbc1b61e3cef4afe0e6cc0bc5c03f562aa342" exitCode=0 Apr 16 17:40:59.294157 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:40:59.293791 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6fzxn" event={"ID":"6b045243-0c99-4991-8719-5efd0f27a340","Type":"ContainerDied","Data":"d38e16fe675645b2bc386a46fc7bbc1b61e3cef4afe0e6cc0bc5c03f562aa342"} Apr 16 17:41:00.158275 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:00.158191 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:41:00.158406 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:00.158297 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-hzdjn" podUID="0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5" Apr 16 17:41:00.297412 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:00.297379 2560 generic.go:358] "Generic (PLEG): container finished" podID="6b045243-0c99-4991-8719-5efd0f27a340" containerID="f1190a2cd3a1cd57b13d298644f092ac8bc6c6fb87ca42fe3d9208ff80a4133f" exitCode=0 Apr 16 17:41:00.297803 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:00.297433 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6fzxn" event={"ID":"6b045243-0c99-4991-8719-5efd0f27a340","Type":"ContainerDied","Data":"f1190a2cd3a1cd57b13d298644f092ac8bc6c6fb87ca42fe3d9208ff80a4133f"} Apr 16 17:41:01.157792 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:01.157759 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:41:01.157972 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:01.157903 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cszgd" podUID="29dc29ef-4848-44b6-bfa3-4a7545e874ce" Apr 16 17:41:01.158017 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:01.157962 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:41:01.158110 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:01.158086 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lx5nt" podUID="47012ffa-3deb-41b8-b770-fc4db562d87e" Apr 16 17:41:02.158853 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:02.158779 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:41:02.159343 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:02.158954 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-hzdjn" podUID="0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5" Apr 16 17:41:03.158094 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:03.158059 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:41:03.158254 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:03.158164 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-cszgd" podUID="29dc29ef-4848-44b6-bfa3-4a7545e874ce" Apr 16 17:41:03.158254 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:03.158214 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:41:03.158336 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:03.158317 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lx5nt" podUID="47012ffa-3deb-41b8-b770-fc4db562d87e" Apr 16 17:41:04.158318 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:04.158281 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:41:04.158733 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:04.158418 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-hzdjn" podUID="0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5" Apr 16 17:41:05.157777 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:05.157747 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:41:05.157777 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:05.157786 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:41:05.158042 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:05.157909 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lx5nt" podUID="47012ffa-3deb-41b8-b770-fc4db562d87e" Apr 16 17:41:05.158102 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:05.158034 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-cszgd" podUID="29dc29ef-4848-44b6-bfa3-4a7545e874ce" Apr 16 17:41:05.807447 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:05.807416 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47012ffa-3deb-41b8-b770-fc4db562d87e-metrics-certs\") pod \"network-metrics-daemon-lx5nt\" (UID: \"47012ffa-3deb-41b8-b770-fc4db562d87e\") " pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:41:05.807892 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:05.807544 2560 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 17:41:05.807892 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:05.807597 2560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47012ffa-3deb-41b8-b770-fc4db562d87e-metrics-certs podName:47012ffa-3deb-41b8-b770-fc4db562d87e nodeName:}" failed. No retries permitted until 2026-04-16 17:41:37.807581673 +0000 UTC m=+66.170790528 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/47012ffa-3deb-41b8-b770-fc4db562d87e-metrics-certs") pod "network-metrics-daemon-lx5nt" (UID: "47012ffa-3deb-41b8-b770-fc4db562d87e") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 17:41:05.908609 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:05.908579 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sghmd\" (UniqueName: \"kubernetes.io/projected/29dc29ef-4848-44b6-bfa3-4a7545e874ce-kube-api-access-sghmd\") pod \"network-check-target-cszgd\" (UID: \"29dc29ef-4848-44b6-bfa3-4a7545e874ce\") " pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:41:05.908767 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:05.908711 2560 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 17:41:05.908767 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:05.908725 2560 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 17:41:05.908767 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:05.908733 2560 projected.go:194] Error preparing data for projected volume kube-api-access-sghmd for pod openshift-network-diagnostics/network-check-target-cszgd: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 17:41:05.908893 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:05.908781 2560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29dc29ef-4848-44b6-bfa3-4a7545e874ce-kube-api-access-sghmd podName:29dc29ef-4848-44b6-bfa3-4a7545e874ce nodeName:}" failed. No retries permitted until 2026-04-16 17:41:37.90876712 +0000 UTC m=+66.271975971 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-sghmd" (UniqueName: "kubernetes.io/projected/29dc29ef-4848-44b6-bfa3-4a7545e874ce-kube-api-access-sghmd") pod "network-check-target-cszgd" (UID: "29dc29ef-4848-44b6-bfa3-4a7545e874ce") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 17:41:06.157709 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:06.157686 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:41:06.157805 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:06.157782 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-hzdjn" podUID="0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5" Apr 16 17:41:06.310683 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:06.310650 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6fzxn" event={"ID":"6b045243-0c99-4991-8719-5efd0f27a340","Type":"ContainerStarted","Data":"364acab41f693bd6a7a77be7d304895f5ccbaaff715d04c85b02b6b2dc23feec"} Apr 16 17:41:07.157950 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:07.157912 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:41:07.157950 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:07.157945 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:41:07.158487 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:07.158050 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lx5nt" podUID="47012ffa-3deb-41b8-b770-fc4db562d87e" Apr 16 17:41:07.158487 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:07.158155 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-cszgd" podUID="29dc29ef-4848-44b6-bfa3-4a7545e874ce" Apr 16 17:41:07.314591 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:07.314548 2560 generic.go:358] "Generic (PLEG): container finished" podID="6b045243-0c99-4991-8719-5efd0f27a340" containerID="364acab41f693bd6a7a77be7d304895f5ccbaaff715d04c85b02b6b2dc23feec" exitCode=0 Apr 16 17:41:07.314747 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:07.314618 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6fzxn" event={"ID":"6b045243-0c99-4991-8719-5efd0f27a340","Type":"ContainerDied","Data":"364acab41f693bd6a7a77be7d304895f5ccbaaff715d04c85b02b6b2dc23feec"} Apr 16 17:41:08.157678 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:08.157503 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:41:08.157868 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:08.157757 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-hzdjn" podUID="0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5" Apr 16 17:41:08.319556 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:08.319520 2560 generic.go:358] "Generic (PLEG): container finished" podID="6b045243-0c99-4991-8719-5efd0f27a340" containerID="a3fc528e35a696a1c37c44121a5d42a9dda1c9361b49d811ba1d7bb9b7815f84" exitCode=0 Apr 16 17:41:08.319934 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:08.319581 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6fzxn" event={"ID":"6b045243-0c99-4991-8719-5efd0f27a340","Type":"ContainerDied","Data":"a3fc528e35a696a1c37c44121a5d42a9dda1c9361b49d811ba1d7bb9b7815f84"} Apr 16 17:41:09.157390 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:09.157359 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:41:09.157590 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:09.157359 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:41:09.157590 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:09.157496 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cszgd" podUID="29dc29ef-4848-44b6-bfa3-4a7545e874ce" Apr 16 17:41:09.157590 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:09.157532 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lx5nt" podUID="47012ffa-3deb-41b8-b770-fc4db562d87e" Apr 16 17:41:09.324895 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:09.324849 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6fzxn" event={"ID":"6b045243-0c99-4991-8719-5efd0f27a340","Type":"ContainerStarted","Data":"991662b7b3036533d6fa0ebd96b6b51f91a55c92937515352160474c6fe2fda6"} Apr 16 17:41:09.346772 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:09.346734 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-6fzxn" podStartSLOduration=6.185117808 podStartE2EDuration="37.346722967s" podCreationTimestamp="2026-04-16 17:40:32 +0000 UTC" firstStartedPulling="2026-04-16 17:40:34.882085977 +0000 UTC m=+3.245294844" lastFinishedPulling="2026-04-16 17:41:06.043691138 +0000 UTC m=+34.406900003" observedRunningTime="2026-04-16 17:41:09.345942971 +0000 UTC m=+37.709151844" watchObservedRunningTime="2026-04-16 17:41:09.346722967 +0000 UTC m=+37.709931840" Apr 16 17:41:10.158057 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:10.158027 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:41:10.158227 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:10.158120 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-hzdjn" podUID="0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5" Apr 16 17:41:11.158073 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:11.158037 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:41:11.158521 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:11.158036 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:41:11.158521 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:11.158163 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cszgd" podUID="29dc29ef-4848-44b6-bfa3-4a7545e874ce" Apr 16 17:41:11.158521 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:11.158210 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lx5nt" podUID="47012ffa-3deb-41b8-b770-fc4db562d87e" Apr 16 17:41:12.157993 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:12.157962 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:41:12.158183 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:12.158062 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-hzdjn" podUID="0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5" Apr 16 17:41:12.659342 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:12.659296 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5-original-pull-secret\") pod \"global-pull-secret-syncer-hzdjn\" (UID: \"0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5\") " pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:41:12.659514 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:12.659408 2560 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 16 17:41:12.659514 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:12.659463 2560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5-original-pull-secret podName:0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5 nodeName:}" failed. No retries permitted until 2026-04-16 17:41:44.659447 +0000 UTC m=+73.022655854 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5-original-pull-secret") pod "global-pull-secret-syncer-hzdjn" (UID: "0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5") : object "kube-system"/"original-pull-secret" not registered Apr 16 17:41:13.157704 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:13.157676 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:41:13.157894 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:13.157675 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:41:13.157894 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:13.157775 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lx5nt" podUID="47012ffa-3deb-41b8-b770-fc4db562d87e" Apr 16 17:41:13.157894 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:13.157862 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cszgd" podUID="29dc29ef-4848-44b6-bfa3-4a7545e874ce" Apr 16 17:41:14.158349 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:14.158316 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:41:14.159480 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:14.159450 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-hzdjn" podUID="0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5" Apr 16 17:41:14.167185 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:14.167158 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-hzdjn"] Apr 16 17:41:14.169670 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:14.169647 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-cszgd"] Apr 16 17:41:14.169911 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:14.169897 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:41:14.170155 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:14.170132 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cszgd" podUID="29dc29ef-4848-44b6-bfa3-4a7545e874ce" Apr 16 17:41:14.170393 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:14.170377 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lx5nt"] Apr 16 17:41:14.170481 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:14.170458 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:41:14.170581 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:14.170561 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lx5nt" podUID="47012ffa-3deb-41b8-b770-fc4db562d87e" Apr 16 17:41:14.333313 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:14.333285 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:41:14.333492 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:14.333379 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-hzdjn" podUID="0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5" Apr 16 17:41:16.157253 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:16.157220 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:41:16.157698 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:16.157220 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:41:16.157698 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:16.157324 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lx5nt" podUID="47012ffa-3deb-41b8-b770-fc4db562d87e" Apr 16 17:41:16.157698 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:16.157221 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:41:16.157698 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:16.157407 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cszgd" podUID="29dc29ef-4848-44b6-bfa3-4a7545e874ce" Apr 16 17:41:16.157698 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:16.157495 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-hzdjn" podUID="0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5" Apr 16 17:41:17.974166 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:17.973933 2560 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-134-233.ec2.internal" event="NodeReady" Apr 16 17:41:17.974511 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:17.974206 2560 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Apr 16 17:41:18.012204 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.012174 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-c8cf4fc8d-nd8l8"] Apr 16 17:41:18.061290 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.061260 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-runtime-extractor-v5h9l"] Apr 16 17:41:18.061435 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.061398 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-c8cf4fc8d-nd8l8" Apr 16 17:41:18.063452 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.063427 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-private-configuration\"" Apr 16 17:41:18.063634 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.063617 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Apr 16 17:41:18.064021 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.064005 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Apr 16 17:41:18.064074 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.064047 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-f5d25\"" Apr 16 17:41:18.068007 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.067984 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Apr 16 17:41:18.085735 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.085616 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-sfb6x"] Apr 16 17:41:18.085908 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.085892 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-runtime-extractor-v5h9l" Apr 16 17:41:18.088845 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.088792 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-root-ca.crt\"" Apr 16 17:41:18.089656 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.089636 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"openshift-service-ca.crt\"" Apr 16 17:41:18.089656 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.089642 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-sa-dockercfg-bfzld\"" Apr 16 17:41:18.089805 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.089639 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-rbac-proxy\"" Apr 16 17:41:18.089894 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.089882 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-tls\"" Apr 16 17:41:18.098251 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.098231 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2bmr\" (UniqueName: \"kubernetes.io/projected/50e70fc1-7fd4-4977-949f-deb61937aea3-kube-api-access-x2bmr\") pod \"image-registry-c8cf4fc8d-nd8l8\" (UID: \"50e70fc1-7fd4-4977-949f-deb61937aea3\") " pod="openshift-image-registry/image-registry-c8cf4fc8d-nd8l8" Apr 16 17:41:18.098330 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.098258 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/50e70fc1-7fd4-4977-949f-deb61937aea3-registry-tls\") pod \"image-registry-c8cf4fc8d-nd8l8\" (UID: \"50e70fc1-7fd4-4977-949f-deb61937aea3\") " pod="openshift-image-registry/image-registry-c8cf4fc8d-nd8l8" 
Apr 16 17:41:18.098330 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.098315 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/0398aee0-b61d-4565-b06a-f1ee93b347aa-data-volume\") pod \"insights-runtime-extractor-v5h9l\" (UID: \"0398aee0-b61d-4565-b06a-f1ee93b347aa\") " pod="openshift-insights/insights-runtime-extractor-v5h9l" Apr 16 17:41:18.098408 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.098338 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/50e70fc1-7fd4-4977-949f-deb61937aea3-image-registry-private-configuration\") pod \"image-registry-c8cf4fc8d-nd8l8\" (UID: \"50e70fc1-7fd4-4977-949f-deb61937aea3\") " pod="openshift-image-registry/image-registry-c8cf4fc8d-nd8l8" Apr 16 17:41:18.098408 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.098354 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/50e70fc1-7fd4-4977-949f-deb61937aea3-ca-trust-extracted\") pod \"image-registry-c8cf4fc8d-nd8l8\" (UID: \"50e70fc1-7fd4-4977-949f-deb61937aea3\") " pod="openshift-image-registry/image-registry-c8cf4fc8d-nd8l8" Apr 16 17:41:18.098408 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.098389 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/50e70fc1-7fd4-4977-949f-deb61937aea3-trusted-ca\") pod \"image-registry-c8cf4fc8d-nd8l8\" (UID: \"50e70fc1-7fd4-4977-949f-deb61937aea3\") " pod="openshift-image-registry/image-registry-c8cf4fc8d-nd8l8" Apr 16 17:41:18.098497 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.098414 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/0398aee0-b61d-4565-b06a-f1ee93b347aa-crio-socket\") pod \"insights-runtime-extractor-v5h9l\" (UID: \"0398aee0-b61d-4565-b06a-f1ee93b347aa\") " pod="openshift-insights/insights-runtime-extractor-v5h9l" Apr 16 17:41:18.098497 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.098433 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/50e70fc1-7fd4-4977-949f-deb61937aea3-bound-sa-token\") pod \"image-registry-c8cf4fc8d-nd8l8\" (UID: \"50e70fc1-7fd4-4977-949f-deb61937aea3\") " pod="openshift-image-registry/image-registry-c8cf4fc8d-nd8l8" Apr 16 17:41:18.098797 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.098450 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/0398aee0-b61d-4565-b06a-f1ee93b347aa-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-v5h9l\" (UID: \"0398aee0-b61d-4565-b06a-f1ee93b347aa\") " pod="openshift-insights/insights-runtime-extractor-v5h9l" Apr 16 17:41:18.098797 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.098755 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/50e70fc1-7fd4-4977-949f-deb61937aea3-registry-certificates\") pod \"image-registry-c8cf4fc8d-nd8l8\" (UID: 
\"50e70fc1-7fd4-4977-949f-deb61937aea3\") " pod="openshift-image-registry/image-registry-c8cf4fc8d-nd8l8" Apr 16 17:41:18.098978 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.098799 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/50e70fc1-7fd4-4977-949f-deb61937aea3-installation-pull-secrets\") pod \"image-registry-c8cf4fc8d-nd8l8\" (UID: \"50e70fc1-7fd4-4977-949f-deb61937aea3\") " pod="openshift-image-registry/image-registry-c8cf4fc8d-nd8l8" Apr 16 17:41:18.098978 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.098845 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwj2r\" (UniqueName: \"kubernetes.io/projected/0398aee0-b61d-4565-b06a-f1ee93b347aa-kube-api-access-zwj2r\") pod \"insights-runtime-extractor-v5h9l\" (UID: \"0398aee0-b61d-4565-b06a-f1ee93b347aa\") " pod="openshift-insights/insights-runtime-extractor-v5h9l" Apr 16 17:41:18.098978 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.098890 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/0398aee0-b61d-4565-b06a-f1ee93b347aa-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-v5h9l\" (UID: \"0398aee0-b61d-4565-b06a-f1ee93b347aa\") " pod="openshift-insights/insights-runtime-extractor-v5h9l" Apr 16 17:41:18.105646 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.105625 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-c8cf4fc8d-nd8l8"] Apr 16 17:41:18.105732 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.105651 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-v5h9l"] Apr 16 17:41:18.105732 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.105666 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-sfb6x"] Apr 16 17:41:18.105799 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.105763 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-sfb6x" Apr 16 17:41:18.109395 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.109375 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-tjhv5\"" Apr 16 17:41:18.109496 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.109456 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Apr 16 17:41:18.109496 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.109467 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Apr 16 17:41:18.131575 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.131550 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-x6fv7"] Apr 16 17:41:18.163100 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.163067 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:41:18.163224 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.163117 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-x6fv7" Apr 16 17:41:18.163224 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.163159 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:41:18.163321 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.163260 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:41:18.165304 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.165277 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Apr 16 17:41:18.165438 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.165323 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-wmwn6\"" Apr 16 17:41:18.165438 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.165402 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-j7r56\"" Apr 16 17:41:18.167616 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.167586 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Apr 16 17:41:18.167616 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.167611 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Apr 16 17:41:18.167777 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.167622 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Apr 16 17:41:18.167860 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.167776 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Apr 16 17:41:18.167914 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.167879 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-jmq5l\"" Apr 16 17:41:18.167972 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.167875 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-x6fv7"] Apr 16 17:41:18.168165 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.168147 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"original-pull-secret\"" Apr 16 17:41:18.168269 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.168192 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Apr 16 17:41:18.199604 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.199584 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxhgs\" (UniqueName: \"kubernetes.io/projected/22e883fa-c3e0-4a77-ab30-0e3840eab93d-kube-api-access-cxhgs\") pod \"dns-default-sfb6x\" (UID: \"22e883fa-c3e0-4a77-ab30-0e3840eab93d\") " pod="openshift-dns/dns-default-sfb6x" Apr 16 17:41:18.199710 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.199615 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: 
\"kubernetes.io/configmap/0398aee0-b61d-4565-b06a-f1ee93b347aa-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-v5h9l\" (UID: \"0398aee0-b61d-4565-b06a-f1ee93b347aa\") " pod="openshift-insights/insights-runtime-extractor-v5h9l" Apr 16 17:41:18.199710 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.199646 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x2bmr\" (UniqueName: \"kubernetes.io/projected/50e70fc1-7fd4-4977-949f-deb61937aea3-kube-api-access-x2bmr\") pod \"image-registry-c8cf4fc8d-nd8l8\" (UID: \"50e70fc1-7fd4-4977-949f-deb61937aea3\") " pod="openshift-image-registry/image-registry-c8cf4fc8d-nd8l8" Apr 16 17:41:18.199710 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.199672 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/22e883fa-c3e0-4a77-ab30-0e3840eab93d-tmp-dir\") pod \"dns-default-sfb6x\" (UID: \"22e883fa-c3e0-4a77-ab30-0e3840eab93d\") " pod="openshift-dns/dns-default-sfb6x" Apr 16 17:41:18.199710 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.199701 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/50e70fc1-7fd4-4977-949f-deb61937aea3-registry-tls\") pod \"image-registry-c8cf4fc8d-nd8l8\" (UID: \"50e70fc1-7fd4-4977-949f-deb61937aea3\") " pod="openshift-image-registry/image-registry-c8cf4fc8d-nd8l8" Apr 16 17:41:18.199911 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.199730 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22e883fa-c3e0-4a77-ab30-0e3840eab93d-metrics-tls\") pod \"dns-default-sfb6x\" (UID: \"22e883fa-c3e0-4a77-ab30-0e3840eab93d\") " pod="openshift-dns/dns-default-sfb6x" Apr 16 17:41:18.199911 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.199874 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf5hr\" (UniqueName: \"kubernetes.io/projected/26138095-7f4c-401b-ac1c-5d1d5e047af0-kube-api-access-lf5hr\") pod \"ingress-canary-x6fv7\" (UID: \"26138095-7f4c-401b-ac1c-5d1d5e047af0\") " pod="openshift-ingress-canary/ingress-canary-x6fv7" Apr 16 17:41:18.199985 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.199918 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22e883fa-c3e0-4a77-ab30-0e3840eab93d-config-volume\") pod \"dns-default-sfb6x\" (UID: \"22e883fa-c3e0-4a77-ab30-0e3840eab93d\") " pod="openshift-dns/dns-default-sfb6x" Apr 16 17:41:18.200035 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.199987 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/0398aee0-b61d-4565-b06a-f1ee93b347aa-data-volume\") pod \"insights-runtime-extractor-v5h9l\" (UID: \"0398aee0-b61d-4565-b06a-f1ee93b347aa\") " pod="openshift-insights/insights-runtime-extractor-v5h9l" Apr 16 17:41:18.200035 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.200021 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/50e70fc1-7fd4-4977-949f-deb61937aea3-image-registry-private-configuration\") pod \"image-registry-c8cf4fc8d-nd8l8\" (UID: 
\"50e70fc1-7fd4-4977-949f-deb61937aea3\") " pod="openshift-image-registry/image-registry-c8cf4fc8d-nd8l8" Apr 16 17:41:18.200100 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.200040 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/50e70fc1-7fd4-4977-949f-deb61937aea3-ca-trust-extracted\") pod \"image-registry-c8cf4fc8d-nd8l8\" (UID: \"50e70fc1-7fd4-4977-949f-deb61937aea3\") " pod="openshift-image-registry/image-registry-c8cf4fc8d-nd8l8" Apr 16 17:41:18.200100 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.200060 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/50e70fc1-7fd4-4977-949f-deb61937aea3-trusted-ca\") pod \"image-registry-c8cf4fc8d-nd8l8\" (UID: \"50e70fc1-7fd4-4977-949f-deb61937aea3\") " pod="openshift-image-registry/image-registry-c8cf4fc8d-nd8l8" Apr 16 17:41:18.200100 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.200077 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/0398aee0-b61d-4565-b06a-f1ee93b347aa-crio-socket\") pod \"insights-runtime-extractor-v5h9l\" (UID: \"0398aee0-b61d-4565-b06a-f1ee93b347aa\") " pod="openshift-insights/insights-runtime-extractor-v5h9l" Apr 16 17:41:18.200100 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.200095 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/26138095-7f4c-401b-ac1c-5d1d5e047af0-cert\") pod \"ingress-canary-x6fv7\" (UID: \"26138095-7f4c-401b-ac1c-5d1d5e047af0\") " pod="openshift-ingress-canary/ingress-canary-x6fv7" Apr 16 17:41:18.200277 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.200125 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/50e70fc1-7fd4-4977-949f-deb61937aea3-bound-sa-token\") pod \"image-registry-c8cf4fc8d-nd8l8\" (UID: \"50e70fc1-7fd4-4977-949f-deb61937aea3\") " pod="openshift-image-registry/image-registry-c8cf4fc8d-nd8l8" Apr 16 17:41:18.200277 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.200150 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/0398aee0-b61d-4565-b06a-f1ee93b347aa-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-v5h9l\" (UID: \"0398aee0-b61d-4565-b06a-f1ee93b347aa\") " pod="openshift-insights/insights-runtime-extractor-v5h9l" Apr 16 17:41:18.200277 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.200188 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/50e70fc1-7fd4-4977-949f-deb61937aea3-registry-certificates\") pod \"image-registry-c8cf4fc8d-nd8l8\" (UID: \"50e70fc1-7fd4-4977-949f-deb61937aea3\") " pod="openshift-image-registry/image-registry-c8cf4fc8d-nd8l8" Apr 16 17:41:18.200277 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.200223 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/50e70fc1-7fd4-4977-949f-deb61937aea3-installation-pull-secrets\") pod \"image-registry-c8cf4fc8d-nd8l8\" (UID: \"50e70fc1-7fd4-4977-949f-deb61937aea3\") " pod="openshift-image-registry/image-registry-c8cf4fc8d-nd8l8" Apr 
16 17:41:18.200277 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.200248 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zwj2r\" (UniqueName: \"kubernetes.io/projected/0398aee0-b61d-4565-b06a-f1ee93b347aa-kube-api-access-zwj2r\") pod \"insights-runtime-extractor-v5h9l\" (UID: \"0398aee0-b61d-4565-b06a-f1ee93b347aa\") " pod="openshift-insights/insights-runtime-extractor-v5h9l" Apr 16 17:41:18.200505 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.200349 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/0398aee0-b61d-4565-b06a-f1ee93b347aa-crio-socket\") pod \"insights-runtime-extractor-v5h9l\" (UID: \"0398aee0-b61d-4565-b06a-f1ee93b347aa\") " pod="openshift-insights/insights-runtime-extractor-v5h9l" Apr 16 17:41:18.200505 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.200456 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/50e70fc1-7fd4-4977-949f-deb61937aea3-ca-trust-extracted\") pod \"image-registry-c8cf4fc8d-nd8l8\" (UID: \"50e70fc1-7fd4-4977-949f-deb61937aea3\") " pod="openshift-image-registry/image-registry-c8cf4fc8d-nd8l8" Apr 16 17:41:18.201192 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.201111 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/50e70fc1-7fd4-4977-949f-deb61937aea3-trusted-ca\") pod \"image-registry-c8cf4fc8d-nd8l8\" (UID: \"50e70fc1-7fd4-4977-949f-deb61937aea3\") " pod="openshift-image-registry/image-registry-c8cf4fc8d-nd8l8" Apr 16 17:41:18.201293 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.201230 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/50e70fc1-7fd4-4977-949f-deb61937aea3-registry-certificates\") pod \"image-registry-c8cf4fc8d-nd8l8\" (UID: \"50e70fc1-7fd4-4977-949f-deb61937aea3\") " pod="openshift-image-registry/image-registry-c8cf4fc8d-nd8l8" Apr 16 17:41:18.204009 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.203989 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/50e70fc1-7fd4-4977-949f-deb61937aea3-installation-pull-secrets\") pod \"image-registry-c8cf4fc8d-nd8l8\" (UID: \"50e70fc1-7fd4-4977-949f-deb61937aea3\") " pod="openshift-image-registry/image-registry-c8cf4fc8d-nd8l8" Apr 16 17:41:18.204110 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.204014 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/50e70fc1-7fd4-4977-949f-deb61937aea3-image-registry-private-configuration\") pod \"image-registry-c8cf4fc8d-nd8l8\" (UID: \"50e70fc1-7fd4-4977-949f-deb61937aea3\") " pod="openshift-image-registry/image-registry-c8cf4fc8d-nd8l8" Apr 16 17:41:18.204150 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.204131 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/50e70fc1-7fd4-4977-949f-deb61937aea3-registry-tls\") pod \"image-registry-c8cf4fc8d-nd8l8\" (UID: \"50e70fc1-7fd4-4977-949f-deb61937aea3\") " pod="openshift-image-registry/image-registry-c8cf4fc8d-nd8l8" Apr 16 17:41:18.209043 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.209020 2560 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-x2bmr\" (UniqueName: \"kubernetes.io/projected/50e70fc1-7fd4-4977-949f-deb61937aea3-kube-api-access-x2bmr\") pod \"image-registry-c8cf4fc8d-nd8l8\" (UID: \"50e70fc1-7fd4-4977-949f-deb61937aea3\") " pod="openshift-image-registry/image-registry-c8cf4fc8d-nd8l8" Apr 16 17:41:18.210212 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.210186 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/50e70fc1-7fd4-4977-949f-deb61937aea3-bound-sa-token\") pod \"image-registry-c8cf4fc8d-nd8l8\" (UID: \"50e70fc1-7fd4-4977-949f-deb61937aea3\") " pod="openshift-image-registry/image-registry-c8cf4fc8d-nd8l8" Apr 16 17:41:18.210318 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.210258 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/0398aee0-b61d-4565-b06a-f1ee93b347aa-data-volume\") pod \"insights-runtime-extractor-v5h9l\" (UID: \"0398aee0-b61d-4565-b06a-f1ee93b347aa\") " pod="openshift-insights/insights-runtime-extractor-v5h9l" Apr 16 17:41:18.210318 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.210308 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/0398aee0-b61d-4565-b06a-f1ee93b347aa-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-v5h9l\" (UID: \"0398aee0-b61d-4565-b06a-f1ee93b347aa\") " pod="openshift-insights/insights-runtime-extractor-v5h9l" Apr 16 17:41:18.211811 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.211792 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/0398aee0-b61d-4565-b06a-f1ee93b347aa-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-v5h9l\" (UID: \"0398aee0-b61d-4565-b06a-f1ee93b347aa\") " pod="openshift-insights/insights-runtime-extractor-v5h9l" Apr 16 17:41:18.212129 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.212110 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwj2r\" (UniqueName: \"kubernetes.io/projected/0398aee0-b61d-4565-b06a-f1ee93b347aa-kube-api-access-zwj2r\") pod \"insights-runtime-extractor-v5h9l\" (UID: \"0398aee0-b61d-4565-b06a-f1ee93b347aa\") " pod="openshift-insights/insights-runtime-extractor-v5h9l" Apr 16 17:41:18.301241 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.301207 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/22e883fa-c3e0-4a77-ab30-0e3840eab93d-tmp-dir\") pod \"dns-default-sfb6x\" (UID: \"22e883fa-c3e0-4a77-ab30-0e3840eab93d\") " pod="openshift-dns/dns-default-sfb6x" Apr 16 17:41:18.301386 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.301262 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22e883fa-c3e0-4a77-ab30-0e3840eab93d-metrics-tls\") pod \"dns-default-sfb6x\" (UID: \"22e883fa-c3e0-4a77-ab30-0e3840eab93d\") " pod="openshift-dns/dns-default-sfb6x" Apr 16 17:41:18.301426 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.301377 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lf5hr\" (UniqueName: \"kubernetes.io/projected/26138095-7f4c-401b-ac1c-5d1d5e047af0-kube-api-access-lf5hr\") pod \"ingress-canary-x6fv7\" (UID: 
\"26138095-7f4c-401b-ac1c-5d1d5e047af0\") " pod="openshift-ingress-canary/ingress-canary-x6fv7" Apr 16 17:41:18.301460 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.301420 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22e883fa-c3e0-4a77-ab30-0e3840eab93d-config-volume\") pod \"dns-default-sfb6x\" (UID: \"22e883fa-c3e0-4a77-ab30-0e3840eab93d\") " pod="openshift-dns/dns-default-sfb6x" Apr 16 17:41:18.301490 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.301478 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/26138095-7f4c-401b-ac1c-5d1d5e047af0-cert\") pod \"ingress-canary-x6fv7\" (UID: \"26138095-7f4c-401b-ac1c-5d1d5e047af0\") " pod="openshift-ingress-canary/ingress-canary-x6fv7" Apr 16 17:41:18.301548 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.301533 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cxhgs\" (UniqueName: \"kubernetes.io/projected/22e883fa-c3e0-4a77-ab30-0e3840eab93d-kube-api-access-cxhgs\") pod \"dns-default-sfb6x\" (UID: \"22e883fa-c3e0-4a77-ab30-0e3840eab93d\") " pod="openshift-dns/dns-default-sfb6x" Apr 16 17:41:18.301592 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.301578 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/22e883fa-c3e0-4a77-ab30-0e3840eab93d-tmp-dir\") pod \"dns-default-sfb6x\" (UID: \"22e883fa-c3e0-4a77-ab30-0e3840eab93d\") " pod="openshift-dns/dns-default-sfb6x" Apr 16 17:41:18.302028 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.302004 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22e883fa-c3e0-4a77-ab30-0e3840eab93d-config-volume\") pod \"dns-default-sfb6x\" (UID: \"22e883fa-c3e0-4a77-ab30-0e3840eab93d\") " pod="openshift-dns/dns-default-sfb6x" Apr 16 17:41:18.303581 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.303555 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22e883fa-c3e0-4a77-ab30-0e3840eab93d-metrics-tls\") pod \"dns-default-sfb6x\" (UID: \"22e883fa-c3e0-4a77-ab30-0e3840eab93d\") " pod="openshift-dns/dns-default-sfb6x" Apr 16 17:41:18.303793 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.303776 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/26138095-7f4c-401b-ac1c-5d1d5e047af0-cert\") pod \"ingress-canary-x6fv7\" (UID: \"26138095-7f4c-401b-ac1c-5d1d5e047af0\") " pod="openshift-ingress-canary/ingress-canary-x6fv7" Apr 16 17:41:18.310769 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.310744 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxhgs\" (UniqueName: \"kubernetes.io/projected/22e883fa-c3e0-4a77-ab30-0e3840eab93d-kube-api-access-cxhgs\") pod \"dns-default-sfb6x\" (UID: \"22e883fa-c3e0-4a77-ab30-0e3840eab93d\") " pod="openshift-dns/dns-default-sfb6x" Apr 16 17:41:18.310871 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.310786 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf5hr\" (UniqueName: \"kubernetes.io/projected/26138095-7f4c-401b-ac1c-5d1d5e047af0-kube-api-access-lf5hr\") pod \"ingress-canary-x6fv7\" (UID: \"26138095-7f4c-401b-ac1c-5d1d5e047af0\") " 
pod="openshift-ingress-canary/ingress-canary-x6fv7" Apr 16 17:41:18.371025 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.370998 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-c8cf4fc8d-nd8l8" Apr 16 17:41:18.393758 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.393725 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-runtime-extractor-v5h9l" Apr 16 17:41:18.413679 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.413650 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-sfb6x" Apr 16 17:41:18.475143 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.474694 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-x6fv7" Apr 16 17:41:18.570343 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.570291 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-c8cf4fc8d-nd8l8"] Apr 16 17:41:18.587180 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.576230 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-v5h9l"] Apr 16 17:41:18.587180 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.576276 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-sfb6x"] Apr 16 17:41:18.641695 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:18.641669 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-x6fv7"] Apr 16 17:41:18.645497 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:41:18.645469 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26138095_7f4c_401b_ac1c_5d1d5e047af0.slice/crio-da5fe8f7d9d6bb8aaed2b97fab5d1a11084e16510183e8e2bcf54113c3a83b71 WatchSource:0}: Error finding container da5fe8f7d9d6bb8aaed2b97fab5d1a11084e16510183e8e2bcf54113c3a83b71: Status 404 returned error can't find the container with id da5fe8f7d9d6bb8aaed2b97fab5d1a11084e16510183e8e2bcf54113c3a83b71 Apr 16 17:41:19.345530 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:19.345229 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-v5h9l" event={"ID":"0398aee0-b61d-4565-b06a-f1ee93b347aa","Type":"ContainerStarted","Data":"c28452ca1357e9b99dc6f118d766fe655f852a44454f22f487c66586cd7bf62c"} Apr 16 17:41:19.345530 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:19.345478 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-v5h9l" event={"ID":"0398aee0-b61d-4565-b06a-f1ee93b347aa","Type":"ContainerStarted","Data":"23097e4d18941092642cf980231a64e2da4347cd43763c47d28cd922770f73f0"} Apr 16 17:41:19.348359 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:19.348325 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-x6fv7" event={"ID":"26138095-7f4c-401b-ac1c-5d1d5e047af0","Type":"ContainerStarted","Data":"da5fe8f7d9d6bb8aaed2b97fab5d1a11084e16510183e8e2bcf54113c3a83b71"} Apr 16 17:41:19.354288 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:19.351315 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-c8cf4fc8d-nd8l8" 
event={"ID":"50e70fc1-7fd4-4977-949f-deb61937aea3","Type":"ContainerStarted","Data":"4355b9ae7f3ce4ba8f95caa849eca839ea6fd21d9047538cead3129572e65f7a"} Apr 16 17:41:19.354288 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:19.351352 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-c8cf4fc8d-nd8l8" event={"ID":"50e70fc1-7fd4-4977-949f-deb61937aea3","Type":"ContainerStarted","Data":"cb5ef083d99b93cb4333571cb308da95c8c7c6f366ff2e8ae7a6145fcbaae6bb"} Apr 16 17:41:19.354288 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:19.352218 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-c8cf4fc8d-nd8l8" Apr 16 17:41:19.354288 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:19.353441 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sfb6x" event={"ID":"22e883fa-c3e0-4a77-ab30-0e3840eab93d","Type":"ContainerStarted","Data":"f660d5380a589d3d8a3321ba82b4983a0fccf5fca2d026ffd2dfa6c33187ab83"} Apr 16 17:41:19.372771 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:19.372696 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-c8cf4fc8d-nd8l8" podStartSLOduration=7.372676782 podStartE2EDuration="7.372676782s" podCreationTimestamp="2026-04-16 17:41:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:41:19.372326305 +0000 UTC m=+47.735535179" watchObservedRunningTime="2026-04-16 17:41:19.372676782 +0000 UTC m=+47.735885656" Apr 16 17:41:21.360624 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:21.360597 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sfb6x" event={"ID":"22e883fa-c3e0-4a77-ab30-0e3840eab93d","Type":"ContainerStarted","Data":"37721e2348e41a76eadb87572f953153f72f29390eac1bf9e1996ff830cf4f0c"} Apr 16 17:41:21.362090 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:21.362071 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-v5h9l" event={"ID":"0398aee0-b61d-4565-b06a-f1ee93b347aa","Type":"ContainerStarted","Data":"3bbaa9650279cdf25b26d5fb8682af6174622857f5fc2a569d1002dc666f2deb"} Apr 16 17:41:21.363281 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:21.363255 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-x6fv7" event={"ID":"26138095-7f4c-401b-ac1c-5d1d5e047af0","Type":"ContainerStarted","Data":"1c9beca27c653daba1cf8c8efd4973aac266861c552063fc0d68b915f6d7e12b"} Apr 16 17:41:21.387861 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:21.387753 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-x6fv7" podStartSLOduration=1.000101586 podStartE2EDuration="3.387731934s" podCreationTimestamp="2026-04-16 17:41:18 +0000 UTC" firstStartedPulling="2026-04-16 17:41:18.648063084 +0000 UTC m=+47.011271939" lastFinishedPulling="2026-04-16 17:41:21.035693432 +0000 UTC m=+49.398902287" observedRunningTime="2026-04-16 17:41:21.386744751 +0000 UTC m=+49.749953648" watchObservedRunningTime="2026-04-16 17:41:21.387731934 +0000 UTC m=+49.750940809" Apr 16 17:41:22.368716 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:22.368669 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sfb6x" 
event={"ID":"22e883fa-c3e0-4a77-ab30-0e3840eab93d","Type":"ContainerStarted","Data":"1c47c5a7101c9231958e447853df7b50d964769286c38f8e4a5f367a3bf4b479"} Apr 16 17:41:22.390094 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:22.389873 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-sfb6x" podStartSLOduration=1.95804133 podStartE2EDuration="4.389856455s" podCreationTimestamp="2026-04-16 17:41:18 +0000 UTC" firstStartedPulling="2026-04-16 17:41:18.59905481 +0000 UTC m=+46.962263661" lastFinishedPulling="2026-04-16 17:41:21.030869922 +0000 UTC m=+49.394078786" observedRunningTime="2026-04-16 17:41:22.38869271 +0000 UTC m=+50.751901606" watchObservedRunningTime="2026-04-16 17:41:22.389856455 +0000 UTC m=+50.753065333" Apr 16 17:41:23.372658 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:23.372620 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-v5h9l" event={"ID":"0398aee0-b61d-4565-b06a-f1ee93b347aa","Type":"ContainerStarted","Data":"7da6c8e5b11bb5b31b3867ff689b15a2f0fecf1a4da281d1a657605dcd35b27b"} Apr 16 17:41:23.373064 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:23.372714 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-sfb6x" Apr 16 17:41:23.390964 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:23.390914 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-runtime-extractor-v5h9l" podStartSLOduration=1.451933858 podStartE2EDuration="5.390899314s" podCreationTimestamp="2026-04-16 17:41:18 +0000 UTC" firstStartedPulling="2026-04-16 17:41:18.715230749 +0000 UTC m=+47.078439603" lastFinishedPulling="2026-04-16 17:41:22.654196189 +0000 UTC m=+51.017405059" observedRunningTime="2026-04-16 17:41:23.390086525 +0000 UTC m=+51.753295398" watchObservedRunningTime="2026-04-16 17:41:23.390899314 +0000 UTC m=+51.754108186" Apr 16 17:41:25.326571 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.326541 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-5669946b84-n8w5r"] Apr 16 17:41:25.331324 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.331303 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-5669946b84-n8w5r" Apr 16 17:41:25.333361 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.333339 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"openshift-service-ca.crt\"" Apr 16 17:41:25.333717 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.333695 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"openshift-state-metrics-dockercfg-tfbjr\"" Apr 16 17:41:25.333960 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.333945 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kube-root-ca.crt\"" Apr 16 17:41:25.334168 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.334147 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"openshift-state-metrics-tls\"" Apr 16 17:41:25.334487 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.334469 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"openshift-state-metrics-kube-rbac-proxy-config\"" Apr 16 17:41:25.334561 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.334478 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"metrics-client-ca\"" Apr 16 17:41:25.338370 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.338348 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-7479c89684-fh86k"] Apr 16 17:41:25.341752 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.341734 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7479c89684-fh86k" Apr 16 17:41:25.342035 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.342013 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-5669946b84-n8w5r"] Apr 16 17:41:25.343865 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.343823 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kube-state-metrics-custom-resource-state-configmap\"" Apr 16 17:41:25.343952 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.343927 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"kube-state-metrics-kube-rbac-proxy-config\"" Apr 16 17:41:25.344504 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.344487 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"kube-state-metrics-tls\"" Apr 16 17:41:25.344584 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.344488 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"kube-state-metrics-dockercfg-7p2kr\"" Apr 16 17:41:25.352280 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.352261 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-7479c89684-fh86k"] Apr 16 17:41:25.355861 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.355828 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-vglhr"] Apr 16 17:41:25.358978 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.358957 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" 
(UniqueName: \"kubernetes.io/configmap/93b79d64-b19d-4b29-820a-e2c33293ae57-metrics-client-ca\") pod \"openshift-state-metrics-5669946b84-n8w5r\" (UID: \"93b79d64-b19d-4b29-820a-e2c33293ae57\") " pod="openshift-monitoring/openshift-state-metrics-5669946b84-n8w5r" Apr 16 17:41:25.359063 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.358992 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408-kube-state-metrics-tls\") pod \"kube-state-metrics-7479c89684-fh86k\" (UID: \"8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-fh86k" Apr 16 17:41:25.359063 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.359024 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7479c89684-fh86k\" (UID: \"8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-fh86k" Apr 16 17:41:25.359063 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.359050 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-vglhr" Apr 16 17:41:25.359235 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.359150 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/93b79d64-b19d-4b29-820a-e2c33293ae57-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5669946b84-n8w5r\" (UID: \"93b79d64-b19d-4b29-820a-e2c33293ae57\") " pod="openshift-monitoring/openshift-state-metrics-5669946b84-n8w5r" Apr 16 17:41:25.359235 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.359182 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5r6d\" (UniqueName: \"kubernetes.io/projected/8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408-kube-api-access-m5r6d\") pod \"kube-state-metrics-7479c89684-fh86k\" (UID: \"8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-fh86k" Apr 16 17:41:25.359235 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.359212 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408-volume-directive-shadow\") pod \"kube-state-metrics-7479c89684-fh86k\" (UID: \"8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-fh86k" Apr 16 17:41:25.359360 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.359235 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408-metrics-client-ca\") pod \"kube-state-metrics-7479c89684-fh86k\" (UID: \"8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-fh86k" Apr 16 17:41:25.359360 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.359262 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-vczhd\" (UniqueName: \"kubernetes.io/projected/93b79d64-b19d-4b29-820a-e2c33293ae57-kube-api-access-vczhd\") pod \"openshift-state-metrics-5669946b84-n8w5r\" (UID: \"93b79d64-b19d-4b29-820a-e2c33293ae57\") " pod="openshift-monitoring/openshift-state-metrics-5669946b84-n8w5r" Apr 16 17:41:25.359360 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.359317 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7479c89684-fh86k\" (UID: \"8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-fh86k" Apr 16 17:41:25.359472 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.359364 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/93b79d64-b19d-4b29-820a-e2c33293ae57-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5669946b84-n8w5r\" (UID: \"93b79d64-b19d-4b29-820a-e2c33293ae57\") " pod="openshift-monitoring/openshift-state-metrics-5669946b84-n8w5r" Apr 16 17:41:25.361237 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.361220 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"node-exporter-accelerators-collector-config\"" Apr 16 17:41:25.361314 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.361280 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-tls\"" Apr 16 17:41:25.362847 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.362820 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-dockercfg-w7bpg\"" Apr 16 17:41:25.362942 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.362870 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-kube-rbac-proxy-config\"" Apr 16 17:41:25.460053 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.460024 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/93b79d64-b19d-4b29-820a-e2c33293ae57-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5669946b84-n8w5r\" (UID: \"93b79d64-b19d-4b29-820a-e2c33293ae57\") " pod="openshift-monitoring/openshift-state-metrics-5669946b84-n8w5r" Apr 16 17:41:25.460053 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.460056 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m5r6d\" (UniqueName: \"kubernetes.io/projected/8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408-kube-api-access-m5r6d\") pod \"kube-state-metrics-7479c89684-fh86k\" (UID: \"8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-fh86k" Apr 16 17:41:25.460306 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.460074 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408-volume-directive-shadow\") pod \"kube-state-metrics-7479c89684-fh86k\" (UID: \"8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408\") " 
pod="openshift-monitoring/kube-state-metrics-7479c89684-fh86k" Apr 16 17:41:25.460306 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.460101 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-vglhr\" (UID: \"c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5\") " pod="openshift-monitoring/node-exporter-vglhr" Apr 16 17:41:25.460306 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.460130 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408-metrics-client-ca\") pod \"kube-state-metrics-7479c89684-fh86k\" (UID: \"8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-fh86k" Apr 16 17:41:25.460306 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.460159 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vczhd\" (UniqueName: \"kubernetes.io/projected/93b79d64-b19d-4b29-820a-e2c33293ae57-kube-api-access-vczhd\") pod \"openshift-state-metrics-5669946b84-n8w5r\" (UID: \"93b79d64-b19d-4b29-820a-e2c33293ae57\") " pod="openshift-monitoring/openshift-state-metrics-5669946b84-n8w5r" Apr 16 17:41:25.460306 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.460184 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5-node-exporter-wtmp\") pod \"node-exporter-vglhr\" (UID: \"c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5\") " pod="openshift-monitoring/node-exporter-vglhr" Apr 16 17:41:25.460306 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.460217 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7479c89684-fh86k\" (UID: \"8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-fh86k" Apr 16 17:41:25.460306 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.460243 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5-node-exporter-textfile\") pod \"node-exporter-vglhr\" (UID: \"c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5\") " pod="openshift-monitoring/node-exporter-vglhr" Apr 16 17:41:25.460306 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.460277 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/93b79d64-b19d-4b29-820a-e2c33293ae57-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5669946b84-n8w5r\" (UID: \"93b79d64-b19d-4b29-820a-e2c33293ae57\") " pod="openshift-monitoring/openshift-state-metrics-5669946b84-n8w5r" Apr 16 17:41:25.460306 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.460305 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4mtj\" (UniqueName: \"kubernetes.io/projected/c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5-kube-api-access-w4mtj\") pod 
\"node-exporter-vglhr\" (UID: \"c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5\") " pod="openshift-monitoring/node-exporter-vglhr" Apr 16 17:41:25.460742 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.460368 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5-metrics-client-ca\") pod \"node-exporter-vglhr\" (UID: \"c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5\") " pod="openshift-monitoring/node-exporter-vglhr" Apr 16 17:41:25.460742 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.460402 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5-node-exporter-accelerators-collector-config\") pod \"node-exporter-vglhr\" (UID: \"c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5\") " pod="openshift-monitoring/node-exporter-vglhr" Apr 16 17:41:25.460742 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.460434 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/93b79d64-b19d-4b29-820a-e2c33293ae57-metrics-client-ca\") pod \"openshift-state-metrics-5669946b84-n8w5r\" (UID: \"93b79d64-b19d-4b29-820a-e2c33293ae57\") " pod="openshift-monitoring/openshift-state-metrics-5669946b84-n8w5r" Apr 16 17:41:25.460742 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.460460 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408-kube-state-metrics-tls\") pod \"kube-state-metrics-7479c89684-fh86k\" (UID: \"8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-fh86k" Apr 16 17:41:25.460742 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.460490 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7479c89684-fh86k\" (UID: \"8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-fh86k" Apr 16 17:41:25.460742 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.460519 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5-sys\") pod \"node-exporter-vglhr\" (UID: \"c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5\") " pod="openshift-monitoring/node-exporter-vglhr" Apr 16 17:41:25.460742 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.460529 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408-volume-directive-shadow\") pod \"kube-state-metrics-7479c89684-fh86k\" (UID: \"8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-fh86k" Apr 16 17:41:25.460742 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.460560 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5-root\") pod 
\"node-exporter-vglhr\" (UID: \"c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5\") " pod="openshift-monitoring/node-exporter-vglhr" Apr 16 17:41:25.460742 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.460584 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5-node-exporter-tls\") pod \"node-exporter-vglhr\" (UID: \"c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5\") " pod="openshift-monitoring/node-exporter-vglhr" Apr 16 17:41:25.461260 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.460981 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408-metrics-client-ca\") pod \"kube-state-metrics-7479c89684-fh86k\" (UID: \"8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-fh86k" Apr 16 17:41:25.461260 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.461169 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7479c89684-fh86k\" (UID: \"8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-fh86k" Apr 16 17:41:25.461260 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.461217 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/93b79d64-b19d-4b29-820a-e2c33293ae57-metrics-client-ca\") pod \"openshift-state-metrics-5669946b84-n8w5r\" (UID: \"93b79d64-b19d-4b29-820a-e2c33293ae57\") " pod="openshift-monitoring/openshift-state-metrics-5669946b84-n8w5r" Apr 16 17:41:25.464442 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.464418 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/93b79d64-b19d-4b29-820a-e2c33293ae57-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5669946b84-n8w5r\" (UID: \"93b79d64-b19d-4b29-820a-e2c33293ae57\") " pod="openshift-monitoring/openshift-state-metrics-5669946b84-n8w5r" Apr 16 17:41:25.464442 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.464432 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408-kube-state-metrics-tls\") pod \"kube-state-metrics-7479c89684-fh86k\" (UID: \"8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-fh86k" Apr 16 17:41:25.464591 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.464426 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/93b79d64-b19d-4b29-820a-e2c33293ae57-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5669946b84-n8w5r\" (UID: \"93b79d64-b19d-4b29-820a-e2c33293ae57\") " pod="openshift-monitoring/openshift-state-metrics-5669946b84-n8w5r" Apr 16 17:41:25.464591 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.464525 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7479c89684-fh86k\" (UID: \"8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-fh86k" Apr 16 17:41:25.468608 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.468582 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vczhd\" (UniqueName: \"kubernetes.io/projected/93b79d64-b19d-4b29-820a-e2c33293ae57-kube-api-access-vczhd\") pod \"openshift-state-metrics-5669946b84-n8w5r\" (UID: \"93b79d64-b19d-4b29-820a-e2c33293ae57\") " pod="openshift-monitoring/openshift-state-metrics-5669946b84-n8w5r" Apr 16 17:41:25.468976 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.468957 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5r6d\" (UniqueName: \"kubernetes.io/projected/8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408-kube-api-access-m5r6d\") pod \"kube-state-metrics-7479c89684-fh86k\" (UID: \"8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408\") " pod="openshift-monitoring/kube-state-metrics-7479c89684-fh86k" Apr 16 17:41:25.561800 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.561762 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5-metrics-client-ca\") pod \"node-exporter-vglhr\" (UID: \"c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5\") " pod="openshift-monitoring/node-exporter-vglhr" Apr 16 17:41:25.561800 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.561804 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5-node-exporter-accelerators-collector-config\") pod \"node-exporter-vglhr\" (UID: \"c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5\") " pod="openshift-monitoring/node-exporter-vglhr" Apr 16 17:41:25.562053 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.561853 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5-sys\") pod \"node-exporter-vglhr\" (UID: \"c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5\") " pod="openshift-monitoring/node-exporter-vglhr" Apr 16 17:41:25.562053 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.561889 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5-root\") pod \"node-exporter-vglhr\" (UID: \"c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5\") " pod="openshift-monitoring/node-exporter-vglhr" Apr 16 17:41:25.562053 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.561912 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5-node-exporter-tls\") pod \"node-exporter-vglhr\" (UID: \"c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5\") " pod="openshift-monitoring/node-exporter-vglhr" Apr 16 17:41:25.562053 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.561963 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5-sys\") pod \"node-exporter-vglhr\" (UID: \"c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5\") " pod="openshift-monitoring/node-exporter-vglhr" Apr 
16 17:41:25.562053 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.561974 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-vglhr\" (UID: \"c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5\") " pod="openshift-monitoring/node-exporter-vglhr" Apr 16 17:41:25.562053 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.561987 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5-root\") pod \"node-exporter-vglhr\" (UID: \"c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5\") " pod="openshift-monitoring/node-exporter-vglhr" Apr 16 17:41:25.562339 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:25.562062 2560 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found Apr 16 17:41:25.562339 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.562100 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5-node-exporter-wtmp\") pod \"node-exporter-vglhr\" (UID: \"c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5\") " pod="openshift-monitoring/node-exporter-vglhr" Apr 16 17:41:25.562339 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:41:25.562116 2560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5-node-exporter-tls podName:c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5 nodeName:}" failed. No retries permitted until 2026-04-16 17:41:26.062102364 +0000 UTC m=+54.425311218 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5-node-exporter-tls") pod "node-exporter-vglhr" (UID: "c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5") : secret "node-exporter-tls" not found Apr 16 17:41:25.562339 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.562142 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5-node-exporter-textfile\") pod \"node-exporter-vglhr\" (UID: \"c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5\") " pod="openshift-monitoring/node-exporter-vglhr" Apr 16 17:41:25.562339 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.562169 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w4mtj\" (UniqueName: \"kubernetes.io/projected/c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5-kube-api-access-w4mtj\") pod \"node-exporter-vglhr\" (UID: \"c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5\") " pod="openshift-monitoring/node-exporter-vglhr" Apr 16 17:41:25.562339 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.562215 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5-node-exporter-wtmp\") pod \"node-exporter-vglhr\" (UID: \"c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5\") " pod="openshift-monitoring/node-exporter-vglhr" Apr 16 17:41:25.562581 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.562400 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5-metrics-client-ca\") pod \"node-exporter-vglhr\" (UID: \"c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5\") " pod="openshift-monitoring/node-exporter-vglhr" Apr 16 17:41:25.562581 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.562430 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5-node-exporter-textfile\") pod \"node-exporter-vglhr\" (UID: \"c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5\") " pod="openshift-monitoring/node-exporter-vglhr" Apr 16 17:41:25.562581 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.562461 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5-node-exporter-accelerators-collector-config\") pod \"node-exporter-vglhr\" (UID: \"c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5\") " pod="openshift-monitoring/node-exporter-vglhr" Apr 16 17:41:25.564205 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.564183 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-vglhr\" (UID: \"c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5\") " pod="openshift-monitoring/node-exporter-vglhr" Apr 16 17:41:25.575133 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.575106 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4mtj\" (UniqueName: \"kubernetes.io/projected/c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5-kube-api-access-w4mtj\") pod \"node-exporter-vglhr\" (UID: \"c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5\") " 
pod="openshift-monitoring/node-exporter-vglhr" Apr 16 17:41:25.641763 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.641695 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-5669946b84-n8w5r" Apr 16 17:41:25.650615 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.650599 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7479c89684-fh86k" Apr 16 17:41:25.767204 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.767178 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-5669946b84-n8w5r"] Apr 16 17:41:25.770391 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:41:25.770369 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93b79d64_b19d_4b29_820a_e2c33293ae57.slice/crio-732f6b7bdb542e17364c4b7556bb8513245414ab9f644b865e298ff88c2c8fc3 WatchSource:0}: Error finding container 732f6b7bdb542e17364c4b7556bb8513245414ab9f644b865e298ff88c2c8fc3: Status 404 returned error can't find the container with id 732f6b7bdb542e17364c4b7556bb8513245414ab9f644b865e298ff88c2c8fc3 Apr 16 17:41:25.790134 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:25.790112 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-7479c89684-fh86k"] Apr 16 17:41:25.792049 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:41:25.792025 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e78c1ad_aeca_46d0_8ac7_d3fb5c6b4408.slice/crio-2385bd20ce468fccc5bad6ce972340a8c45996a48e4ef33994140f3b9473dffe WatchSource:0}: Error finding container 2385bd20ce468fccc5bad6ce972340a8c45996a48e4ef33994140f3b9473dffe: Status 404 returned error can't find the container with id 2385bd20ce468fccc5bad6ce972340a8c45996a48e4ef33994140f3b9473dffe Apr 16 17:41:26.066019 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.065988 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5-node-exporter-tls\") pod \"node-exporter-vglhr\" (UID: \"c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5\") " pod="openshift-monitoring/node-exporter-vglhr" Apr 16 17:41:26.068271 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.068253 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5-node-exporter-tls\") pod \"node-exporter-vglhr\" (UID: \"c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5\") " pod="openshift-monitoring/node-exporter-vglhr" Apr 16 17:41:26.266774 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.266695 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-vglhr" Apr 16 17:41:26.276102 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:41:26.276071 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0b5d7a1_ed7b_42bb_9eaf_53320eb9c1c5.slice/crio-33e0111c6621d459bafc7859a832bca9de951c068acc0eb1ff60461cb3228e7c WatchSource:0}: Error finding container 33e0111c6621d459bafc7859a832bca9de951c068acc0eb1ff60461cb3228e7c: Status 404 returned error can't find the container with id 33e0111c6621d459bafc7859a832bca9de951c068acc0eb1ff60461cb3228e7c Apr 16 17:41:26.375661 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.375630 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-5d457dc5fd-vdzvv"] Apr 16 17:41:26.379788 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.379768 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d457dc5fd-vdzvv" Apr 16 17:41:26.384053 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.382112 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Apr 16 17:41:26.384053 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.382181 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Apr 16 17:41:26.384053 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.382859 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Apr 16 17:41:26.384053 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.383106 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Apr 16 17:41:26.384053 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.383671 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Apr 16 17:41:26.384053 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.383680 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-65zlz\"" Apr 16 17:41:26.384053 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.383822 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Apr 16 17:41:26.384439 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.384284 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Apr 16 17:41:26.387902 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.387866 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-vglhr" event={"ID":"c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5","Type":"ContainerStarted","Data":"33e0111c6621d459bafc7859a832bca9de951c068acc0eb1ff60461cb3228e7c"} Apr 16 17:41:26.389117 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.389089 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7479c89684-fh86k" event={"ID":"8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408","Type":"ContainerStarted","Data":"2385bd20ce468fccc5bad6ce972340a8c45996a48e4ef33994140f3b9473dffe"} Apr 16 17:41:26.391365 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.391342 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-console/console-5d457dc5fd-vdzvv"] Apr 16 17:41:26.392670 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.392649 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5669946b84-n8w5r" event={"ID":"93b79d64-b19d-4b29-820a-e2c33293ae57","Type":"ContainerStarted","Data":"7bf2409e757f8af5b5fdf9e300afee167d0e130c54a46472a2d40968e6d5b3b2"} Apr 16 17:41:26.392761 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.392679 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5669946b84-n8w5r" event={"ID":"93b79d64-b19d-4b29-820a-e2c33293ae57","Type":"ContainerStarted","Data":"496b8f93edc83fb9e294e3f890cf4d4ac417753ed98d97f0f0d1670b4230e0c0"} Apr 16 17:41:26.392761 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.392694 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5669946b84-n8w5r" event={"ID":"93b79d64-b19d-4b29-820a-e2c33293ae57","Type":"ContainerStarted","Data":"732f6b7bdb542e17364c4b7556bb8513245414ab9f644b865e298ff88c2c8fc3"} Apr 16 17:41:26.422505 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.422476 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 16 17:41:26.426544 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.426529 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.429107 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.429071 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-cluster-tls-config\"" Apr 16 17:41:26.429281 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.429264 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy-metric\"" Apr 16 17:41:26.429363 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.429324 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy\"" Apr 16 17:41:26.429363 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.429356 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy-web\"" Apr 16 17:41:26.429686 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.429671 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-dockercfg-z696p\"" Apr 16 17:41:26.429735 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.429686 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-web-config\"" Apr 16 17:41:26.429796 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.429780 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-generated\"" Apr 16 17:41:26.429945 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.429930 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-tls\"" Apr 16 17:41:26.430044 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.430024 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"alertmanager-trusted-ca-bundle\"" Apr 16 17:41:26.430179 ip-10-0-134-233 
kubenswrapper[2560]: I0416 17:41:26.430086 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-tls-assets-0\"" Apr 16 17:41:26.438916 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.438898 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 16 17:41:26.468359 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.468326 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a8248209-1c6a-4310-975e-5d3dd74f7b55-console-config\") pod \"console-5d457dc5fd-vdzvv\" (UID: \"a8248209-1c6a-4310-975e-5d3dd74f7b55\") " pod="openshift-console/console-5d457dc5fd-vdzvv" Apr 16 17:41:26.468467 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.468362 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.468467 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.468422 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a8248209-1c6a-4310-975e-5d3dd74f7b55-console-serving-cert\") pod \"console-5d457dc5fd-vdzvv\" (UID: \"a8248209-1c6a-4310-975e-5d3dd74f7b55\") " pod="openshift-console/console-5d457dc5fd-vdzvv" Apr 16 17:41:26.468589 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.468465 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/30da5db6-df8d-4158-9c24-37383fad0034-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.468589 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.468496 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-web-config\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.468589 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.468524 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a8248209-1c6a-4310-975e-5d3dd74f7b55-console-oauth-config\") pod \"console-5d457dc5fd-vdzvv\" (UID: \"a8248209-1c6a-4310-975e-5d3dd74f7b55\") " pod="openshift-console/console-5d457dc5fd-vdzvv" Apr 16 17:41:26.468589 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.468557 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.468780 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.468615 2560 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2czmp\" (UniqueName: \"kubernetes.io/projected/30da5db6-df8d-4158-9c24-37383fad0034-kube-api-access-2czmp\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.468780 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.468655 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.468780 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.468726 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a8248209-1c6a-4310-975e-5d3dd74f7b55-service-ca\") pod \"console-5d457dc5fd-vdzvv\" (UID: \"a8248209-1c6a-4310-975e-5d3dd74f7b55\") " pod="openshift-console/console-5d457dc5fd-vdzvv" Apr 16 17:41:26.468780 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.468756 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a8248209-1c6a-4310-975e-5d3dd74f7b55-oauth-serving-cert\") pod \"console-5d457dc5fd-vdzvv\" (UID: \"a8248209-1c6a-4310-975e-5d3dd74f7b55\") " pod="openshift-console/console-5d457dc5fd-vdzvv" Apr 16 17:41:26.468996 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.468785 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/30da5db6-df8d-4158-9c24-37383fad0034-config-out\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.468996 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.468811 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw4vs\" (UniqueName: \"kubernetes.io/projected/a8248209-1c6a-4310-975e-5d3dd74f7b55-kube-api-access-bw4vs\") pod \"console-5d457dc5fd-vdzvv\" (UID: \"a8248209-1c6a-4310-975e-5d3dd74f7b55\") " pod="openshift-console/console-5d457dc5fd-vdzvv" Apr 16 17:41:26.468996 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.468891 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30da5db6-df8d-4158-9c24-37383fad0034-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.468996 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.468926 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-config-volume\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.468996 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.468968 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" 
(UniqueName: \"kubernetes.io/projected/30da5db6-df8d-4158-9c24-37383fad0034-tls-assets\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.469239 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.469037 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.469239 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.469100 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.469239 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.469134 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/30da5db6-df8d-4158-9c24-37383fad0034-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.570354 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.570320 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.570354 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.570358 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/30da5db6-df8d-4158-9c24-37383fad0034-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.570604 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.570405 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a8248209-1c6a-4310-975e-5d3dd74f7b55-console-config\") pod \"console-5d457dc5fd-vdzvv\" (UID: \"a8248209-1c6a-4310-975e-5d3dd74f7b55\") " pod="openshift-console/console-5d457dc5fd-vdzvv" Apr 16 17:41:26.570604 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.570432 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.570604 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.570453 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a8248209-1c6a-4310-975e-5d3dd74f7b55-console-serving-cert\") pod \"console-5d457dc5fd-vdzvv\" (UID: 
\"a8248209-1c6a-4310-975e-5d3dd74f7b55\") " pod="openshift-console/console-5d457dc5fd-vdzvv" Apr 16 17:41:26.570604 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.570478 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/30da5db6-df8d-4158-9c24-37383fad0034-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.570604 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.570500 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-web-config\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.570604 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.570522 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a8248209-1c6a-4310-975e-5d3dd74f7b55-console-oauth-config\") pod \"console-5d457dc5fd-vdzvv\" (UID: \"a8248209-1c6a-4310-975e-5d3dd74f7b55\") " pod="openshift-console/console-5d457dc5fd-vdzvv" Apr 16 17:41:26.570604 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.570551 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.570604 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.570583 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2czmp\" (UniqueName: \"kubernetes.io/projected/30da5db6-df8d-4158-9c24-37383fad0034-kube-api-access-2czmp\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.571500 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.571472 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/30da5db6-df8d-4158-9c24-37383fad0034-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.571625 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.571520 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a8248209-1c6a-4310-975e-5d3dd74f7b55-console-config\") pod \"console-5d457dc5fd-vdzvv\" (UID: \"a8248209-1c6a-4310-975e-5d3dd74f7b55\") " pod="openshift-console/console-5d457dc5fd-vdzvv" Apr 16 17:41:26.571625 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.571538 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.571625 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.571599 2560 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a8248209-1c6a-4310-975e-5d3dd74f7b55-service-ca\") pod \"console-5d457dc5fd-vdzvv\" (UID: \"a8248209-1c6a-4310-975e-5d3dd74f7b55\") " pod="openshift-console/console-5d457dc5fd-vdzvv" Apr 16 17:41:26.571784 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.571626 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a8248209-1c6a-4310-975e-5d3dd74f7b55-oauth-serving-cert\") pod \"console-5d457dc5fd-vdzvv\" (UID: \"a8248209-1c6a-4310-975e-5d3dd74f7b55\") " pod="openshift-console/console-5d457dc5fd-vdzvv" Apr 16 17:41:26.571784 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.571655 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/30da5db6-df8d-4158-9c24-37383fad0034-config-out\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.571784 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.571685 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bw4vs\" (UniqueName: \"kubernetes.io/projected/a8248209-1c6a-4310-975e-5d3dd74f7b55-kube-api-access-bw4vs\") pod \"console-5d457dc5fd-vdzvv\" (UID: \"a8248209-1c6a-4310-975e-5d3dd74f7b55\") " pod="openshift-console/console-5d457dc5fd-vdzvv" Apr 16 17:41:26.571784 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.571720 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30da5db6-df8d-4158-9c24-37383fad0034-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.571784 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.571755 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-config-volume\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.572051 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.571792 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/30da5db6-df8d-4158-9c24-37383fad0034-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.572051 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.571800 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/30da5db6-df8d-4158-9c24-37383fad0034-tls-assets\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.572051 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.571887 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " 
pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.573299 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.573264 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a8248209-1c6a-4310-975e-5d3dd74f7b55-oauth-serving-cert\") pod \"console-5d457dc5fd-vdzvv\" (UID: \"a8248209-1c6a-4310-975e-5d3dd74f7b55\") " pod="openshift-console/console-5d457dc5fd-vdzvv" Apr 16 17:41:26.574027 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.573877 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a8248209-1c6a-4310-975e-5d3dd74f7b55-service-ca\") pod \"console-5d457dc5fd-vdzvv\" (UID: \"a8248209-1c6a-4310-975e-5d3dd74f7b55\") " pod="openshift-console/console-5d457dc5fd-vdzvv" Apr 16 17:41:26.574027 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.573981 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a8248209-1c6a-4310-975e-5d3dd74f7b55-console-serving-cert\") pod \"console-5d457dc5fd-vdzvv\" (UID: \"a8248209-1c6a-4310-975e-5d3dd74f7b55\") " pod="openshift-console/console-5d457dc5fd-vdzvv" Apr 16 17:41:26.574448 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.574424 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30da5db6-df8d-4158-9c24-37383fad0034-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.574750 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.574727 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.575574 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.575552 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-web-config\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.577748 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.577718 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.578351 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.577958 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/30da5db6-df8d-4158-9c24-37383fad0034-config-out\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.578351 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.578202 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-config-volume\") pod 
\"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.578351 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.578311 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a8248209-1c6a-4310-975e-5d3dd74f7b55-console-oauth-config\") pod \"console-5d457dc5fd-vdzvv\" (UID: \"a8248209-1c6a-4310-975e-5d3dd74f7b55\") " pod="openshift-console/console-5d457dc5fd-vdzvv" Apr 16 17:41:26.578586 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.578545 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.579666 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.579516 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.580796 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.580771 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.581214 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.581154 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/30da5db6-df8d-4158-9c24-37383fad0034-tls-assets\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.583824 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.583783 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw4vs\" (UniqueName: \"kubernetes.io/projected/a8248209-1c6a-4310-975e-5d3dd74f7b55-kube-api-access-bw4vs\") pod \"console-5d457dc5fd-vdzvv\" (UID: \"a8248209-1c6a-4310-975e-5d3dd74f7b55\") " pod="openshift-console/console-5d457dc5fd-vdzvv" Apr 16 17:41:26.584107 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.584088 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2czmp\" (UniqueName: \"kubernetes.io/projected/30da5db6-df8d-4158-9c24-37383fad0034-kube-api-access-2czmp\") pod \"alertmanager-main-0\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:26.696247 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.696219 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d457dc5fd-vdzvv" Apr 16 17:41:26.735998 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:26.735971 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:41:27.372707 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:27.372670 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d457dc5fd-vdzvv"] Apr 16 17:41:27.399387 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:27.399353 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 16 17:41:27.425827 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:41:27.425795 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30da5db6_df8d_4158_9c24_37383fad0034.slice/crio-e2c8cd0e7fdd5f7a4a92750274f4bc0eee6faa191a21d4a7ada5ac7345f054f9 WatchSource:0}: Error finding container e2c8cd0e7fdd5f7a4a92750274f4bc0eee6faa191a21d4a7ada5ac7345f054f9: Status 404 returned error can't find the container with id e2c8cd0e7fdd5f7a4a92750274f4bc0eee6faa191a21d4a7ada5ac7345f054f9 Apr 16 17:41:27.426433 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:41:27.426402 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8248209_1c6a_4310_975e_5d3dd74f7b55.slice/crio-508a8c97d7f8891fb6815b57fd6b65b23d4a760ee638489bcc73f9ab74a57277 WatchSource:0}: Error finding container 508a8c97d7f8891fb6815b57fd6b65b23d4a760ee638489bcc73f9ab74a57277: Status 404 returned error can't find the container with id 508a8c97d7f8891fb6815b57fd6b65b23d4a760ee638489bcc73f9ab74a57277 Apr 16 17:41:28.401351 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:28.401314 2560 generic.go:358] "Generic (PLEG): container finished" podID="c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5" containerID="9e55328d4cad7a12c0e64f7ae193bce5f24fd0c3d2b90e4ee591cddb88460029" exitCode=0 Apr 16 17:41:28.401809 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:28.401405 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-vglhr" event={"ID":"c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5","Type":"ContainerDied","Data":"9e55328d4cad7a12c0e64f7ae193bce5f24fd0c3d2b90e4ee591cddb88460029"} Apr 16 17:41:28.404484 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:28.403862 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7479c89684-fh86k" event={"ID":"8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408","Type":"ContainerStarted","Data":"e94b7c032abbae4e012010c65caaf19c9b0a65e8ac687ce41a510ba8a043fbf6"} Apr 16 17:41:28.404484 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:28.403897 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7479c89684-fh86k" event={"ID":"8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408","Type":"ContainerStarted","Data":"3409a6184d9daa40a89f3f1471b8e4f7ac990f09370525ba371c23b569db5835"} Apr 16 17:41:28.404484 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:28.403912 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7479c89684-fh86k" event={"ID":"8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408","Type":"ContainerStarted","Data":"95f6650beebeafd9132df9f66b31839ebe805298cacf899c8488657e8c9909a9"} Apr 16 17:41:28.405341 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:28.405309 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d457dc5fd-vdzvv" 
event={"ID":"a8248209-1c6a-4310-975e-5d3dd74f7b55","Type":"ContainerStarted","Data":"508a8c97d7f8891fb6815b57fd6b65b23d4a760ee638489bcc73f9ab74a57277"} Apr 16 17:41:28.406680 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:28.406642 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"30da5db6-df8d-4158-9c24-37383fad0034","Type":"ContainerStarted","Data":"e2c8cd0e7fdd5f7a4a92750274f4bc0eee6faa191a21d4a7ada5ac7345f054f9"} Apr 16 17:41:28.409023 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:28.408996 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5669946b84-n8w5r" event={"ID":"93b79d64-b19d-4b29-820a-e2c33293ae57","Type":"ContainerStarted","Data":"d1e039da6af3dee9e33dfa029ec3a1dd5e98217c87c02cbd022cd3cfc4a16663"} Apr 16 17:41:28.447236 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:28.447134 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-5669946b84-n8w5r" podStartSLOduration=1.928255245 podStartE2EDuration="3.447115106s" podCreationTimestamp="2026-04-16 17:41:25 +0000 UTC" firstStartedPulling="2026-04-16 17:41:25.905176903 +0000 UTC m=+54.268385754" lastFinishedPulling="2026-04-16 17:41:27.424036764 +0000 UTC m=+55.787245615" observedRunningTime="2026-04-16 17:41:28.446342998 +0000 UTC m=+56.809551883" watchObservedRunningTime="2026-04-16 17:41:28.447115106 +0000 UTC m=+56.810323980" Apr 16 17:41:28.469758 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:28.469701 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-7479c89684-fh86k" podStartSLOduration=1.836294694 podStartE2EDuration="3.469684738s" podCreationTimestamp="2026-04-16 17:41:25 +0000 UTC" firstStartedPulling="2026-04-16 17:41:25.793789735 +0000 UTC m=+54.156998587" lastFinishedPulling="2026-04-16 17:41:27.427179769 +0000 UTC m=+55.790388631" observedRunningTime="2026-04-16 17:41:28.468577599 +0000 UTC m=+56.831786573" watchObservedRunningTime="2026-04-16 17:41:28.469684738 +0000 UTC m=+56.832893612" Apr 16 17:41:29.311140 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:29.311108 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-94t5h" Apr 16 17:41:29.413585 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:29.413553 2560 generic.go:358] "Generic (PLEG): container finished" podID="30da5db6-df8d-4158-9c24-37383fad0034" containerID="6162f378901c985c605a363944d0b30a07d54ccf0a94cfba23ad037c4176ac17" exitCode=0 Apr 16 17:41:29.414030 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:29.413644 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"30da5db6-df8d-4158-9c24-37383fad0034","Type":"ContainerDied","Data":"6162f378901c985c605a363944d0b30a07d54ccf0a94cfba23ad037c4176ac17"} Apr 16 17:41:29.416014 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:29.415992 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-vglhr" event={"ID":"c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5","Type":"ContainerStarted","Data":"d19481773850ff871fd83eb69364a2696983c573b6bc06d570f46eabeb7010f7"} Apr 16 17:41:29.416119 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:29.416023 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-vglhr" 
event={"ID":"c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5","Type":"ContainerStarted","Data":"6fdec600ed702ff783a166dfd654d18616695ffa96c95c1047477d7cba924ef0"} Apr 16 17:41:31.422723 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:31.422698 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d457dc5fd-vdzvv" event={"ID":"a8248209-1c6a-4310-975e-5d3dd74f7b55","Type":"ContainerStarted","Data":"70147125cbc085ff0cfba15dcf25d246a4dde206eb6398547bba44e7937e8a16"} Apr 16 17:41:31.442399 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:31.442341 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-vglhr" podStartSLOduration=5.243413674 podStartE2EDuration="6.442322516s" podCreationTimestamp="2026-04-16 17:41:25 +0000 UTC" firstStartedPulling="2026-04-16 17:41:26.277854887 +0000 UTC m=+54.641063738" lastFinishedPulling="2026-04-16 17:41:27.476763729 +0000 UTC m=+55.839972580" observedRunningTime="2026-04-16 17:41:29.459346606 +0000 UTC m=+57.822555463" watchObservedRunningTime="2026-04-16 17:41:31.442322516 +0000 UTC m=+59.805531390" Apr 16 17:41:32.428195 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:32.428152 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"30da5db6-df8d-4158-9c24-37383fad0034","Type":"ContainerStarted","Data":"20b615ef0c4546160ce8566a0118f25524a9c3dd2eeb91873eb4a9f02f020696"} Apr 16 17:41:32.428543 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:32.428201 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"30da5db6-df8d-4158-9c24-37383fad0034","Type":"ContainerStarted","Data":"24e522c94f53a9c9c1d32c66d4028bbcc3a03f2570d6d4665b8dd1d275c5727a"} Apr 16 17:41:32.428543 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:32.428215 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"30da5db6-df8d-4158-9c24-37383fad0034","Type":"ContainerStarted","Data":"d58744086dd88ffdd81437335c7427f94abe59808dbaf6644909183f91fc77dc"} Apr 16 17:41:32.428543 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:32.428227 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"30da5db6-df8d-4158-9c24-37383fad0034","Type":"ContainerStarted","Data":"d9aeea1274456dc428705fe027bde83cefabe1bdb46de11d77b6f590bb18f1e3"} Apr 16 17:41:32.428543 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:32.428240 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"30da5db6-df8d-4158-9c24-37383fad0034","Type":"ContainerStarted","Data":"d0fee3ae8bd2c67317cf84e96b94f1d8909715218efff01a4fb8a3fc89fe7bd5"} Apr 16 17:41:32.428543 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:32.428251 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"30da5db6-df8d-4158-9c24-37383fad0034","Type":"ContainerStarted","Data":"2e01b993c564460d1c882e3567b53ee216af9722a255806d5d428b366d960ec8"} Apr 16 17:41:32.456945 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:32.456892 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5d457dc5fd-vdzvv" podStartSLOduration=3.421604077 podStartE2EDuration="6.456876329s" podCreationTimestamp="2026-04-16 17:41:26 +0000 UTC" firstStartedPulling="2026-04-16 17:41:27.428468241 +0000 UTC m=+55.791677097" 
lastFinishedPulling="2026-04-16 17:41:30.463740498 +0000 UTC m=+58.826949349" observedRunningTime="2026-04-16 17:41:31.441559469 +0000 UTC m=+59.804768370" watchObservedRunningTime="2026-04-16 17:41:32.456876329 +0000 UTC m=+60.820085203" Apr 16 17:41:32.457113 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:32.456996 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=1.571689268 podStartE2EDuration="6.456992257s" podCreationTimestamp="2026-04-16 17:41:26 +0000 UTC" firstStartedPulling="2026-04-16 17:41:27.427933104 +0000 UTC m=+55.791141956" lastFinishedPulling="2026-04-16 17:41:32.313236091 +0000 UTC m=+60.676444945" observedRunningTime="2026-04-16 17:41:32.455998069 +0000 UTC m=+60.819206942" watchObservedRunningTime="2026-04-16 17:41:32.456992257 +0000 UTC m=+60.820201129" Apr 16 17:41:33.377190 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:33.377159 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-sfb6x" Apr 16 17:41:35.359740 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:35.359709 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-668c9bd5c9-6gcrb"] Apr 16 17:41:35.363053 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:35.363036 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-668c9bd5c9-6gcrb" Apr 16 17:41:35.371053 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:35.371032 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Apr 16 17:41:35.376676 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:35.376636 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-668c9bd5c9-6gcrb"] Apr 16 17:41:35.455026 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:35.454993 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/310fc5b9-2115-425a-8ff6-114dd19aeeef-console-config\") pod \"console-668c9bd5c9-6gcrb\" (UID: \"310fc5b9-2115-425a-8ff6-114dd19aeeef\") " pod="openshift-console/console-668c9bd5c9-6gcrb" Apr 16 17:41:35.455198 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:35.455053 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/310fc5b9-2115-425a-8ff6-114dd19aeeef-trusted-ca-bundle\") pod \"console-668c9bd5c9-6gcrb\" (UID: \"310fc5b9-2115-425a-8ff6-114dd19aeeef\") " pod="openshift-console/console-668c9bd5c9-6gcrb" Apr 16 17:41:35.455198 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:35.455096 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/310fc5b9-2115-425a-8ff6-114dd19aeeef-console-serving-cert\") pod \"console-668c9bd5c9-6gcrb\" (UID: \"310fc5b9-2115-425a-8ff6-114dd19aeeef\") " pod="openshift-console/console-668c9bd5c9-6gcrb" Apr 16 17:41:35.455198 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:35.455112 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/310fc5b9-2115-425a-8ff6-114dd19aeeef-oauth-serving-cert\") pod \"console-668c9bd5c9-6gcrb\" (UID: \"310fc5b9-2115-425a-8ff6-114dd19aeeef\") " 
pod="openshift-console/console-668c9bd5c9-6gcrb" Apr 16 17:41:35.455198 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:35.455136 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxd8p\" (UniqueName: \"kubernetes.io/projected/310fc5b9-2115-425a-8ff6-114dd19aeeef-kube-api-access-sxd8p\") pod \"console-668c9bd5c9-6gcrb\" (UID: \"310fc5b9-2115-425a-8ff6-114dd19aeeef\") " pod="openshift-console/console-668c9bd5c9-6gcrb" Apr 16 17:41:35.455198 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:35.455160 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/310fc5b9-2115-425a-8ff6-114dd19aeeef-console-oauth-config\") pod \"console-668c9bd5c9-6gcrb\" (UID: \"310fc5b9-2115-425a-8ff6-114dd19aeeef\") " pod="openshift-console/console-668c9bd5c9-6gcrb" Apr 16 17:41:35.455198 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:35.455185 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/310fc5b9-2115-425a-8ff6-114dd19aeeef-service-ca\") pod \"console-668c9bd5c9-6gcrb\" (UID: \"310fc5b9-2115-425a-8ff6-114dd19aeeef\") " pod="openshift-console/console-668c9bd5c9-6gcrb" Apr 16 17:41:35.556391 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:35.556357 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/310fc5b9-2115-425a-8ff6-114dd19aeeef-service-ca\") pod \"console-668c9bd5c9-6gcrb\" (UID: \"310fc5b9-2115-425a-8ff6-114dd19aeeef\") " pod="openshift-console/console-668c9bd5c9-6gcrb" Apr 16 17:41:35.556511 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:35.556415 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/310fc5b9-2115-425a-8ff6-114dd19aeeef-console-config\") pod \"console-668c9bd5c9-6gcrb\" (UID: \"310fc5b9-2115-425a-8ff6-114dd19aeeef\") " pod="openshift-console/console-668c9bd5c9-6gcrb" Apr 16 17:41:35.556511 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:35.556456 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/310fc5b9-2115-425a-8ff6-114dd19aeeef-trusted-ca-bundle\") pod \"console-668c9bd5c9-6gcrb\" (UID: \"310fc5b9-2115-425a-8ff6-114dd19aeeef\") " pod="openshift-console/console-668c9bd5c9-6gcrb" Apr 16 17:41:35.556511 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:35.556481 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/310fc5b9-2115-425a-8ff6-114dd19aeeef-console-serving-cert\") pod \"console-668c9bd5c9-6gcrb\" (UID: \"310fc5b9-2115-425a-8ff6-114dd19aeeef\") " pod="openshift-console/console-668c9bd5c9-6gcrb" Apr 16 17:41:35.556511 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:35.556496 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/310fc5b9-2115-425a-8ff6-114dd19aeeef-oauth-serving-cert\") pod \"console-668c9bd5c9-6gcrb\" (UID: \"310fc5b9-2115-425a-8ff6-114dd19aeeef\") " pod="openshift-console/console-668c9bd5c9-6gcrb" Apr 16 17:41:35.556646 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:35.556518 2560 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"kube-api-access-sxd8p\" (UniqueName: \"kubernetes.io/projected/310fc5b9-2115-425a-8ff6-114dd19aeeef-kube-api-access-sxd8p\") pod \"console-668c9bd5c9-6gcrb\" (UID: \"310fc5b9-2115-425a-8ff6-114dd19aeeef\") " pod="openshift-console/console-668c9bd5c9-6gcrb" Apr 16 17:41:35.556646 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:35.556557 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/310fc5b9-2115-425a-8ff6-114dd19aeeef-console-oauth-config\") pod \"console-668c9bd5c9-6gcrb\" (UID: \"310fc5b9-2115-425a-8ff6-114dd19aeeef\") " pod="openshift-console/console-668c9bd5c9-6gcrb" Apr 16 17:41:35.557221 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:35.557191 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/310fc5b9-2115-425a-8ff6-114dd19aeeef-service-ca\") pod \"console-668c9bd5c9-6gcrb\" (UID: \"310fc5b9-2115-425a-8ff6-114dd19aeeef\") " pod="openshift-console/console-668c9bd5c9-6gcrb" Apr 16 17:41:35.557358 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:35.557246 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/310fc5b9-2115-425a-8ff6-114dd19aeeef-oauth-serving-cert\") pod \"console-668c9bd5c9-6gcrb\" (UID: \"310fc5b9-2115-425a-8ff6-114dd19aeeef\") " pod="openshift-console/console-668c9bd5c9-6gcrb" Apr 16 17:41:35.557406 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:35.557371 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/310fc5b9-2115-425a-8ff6-114dd19aeeef-console-config\") pod \"console-668c9bd5c9-6gcrb\" (UID: \"310fc5b9-2115-425a-8ff6-114dd19aeeef\") " pod="openshift-console/console-668c9bd5c9-6gcrb" Apr 16 17:41:35.557442 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:35.557414 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/310fc5b9-2115-425a-8ff6-114dd19aeeef-trusted-ca-bundle\") pod \"console-668c9bd5c9-6gcrb\" (UID: \"310fc5b9-2115-425a-8ff6-114dd19aeeef\") " pod="openshift-console/console-668c9bd5c9-6gcrb" Apr 16 17:41:35.559026 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:35.559005 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/310fc5b9-2115-425a-8ff6-114dd19aeeef-console-oauth-config\") pod \"console-668c9bd5c9-6gcrb\" (UID: \"310fc5b9-2115-425a-8ff6-114dd19aeeef\") " pod="openshift-console/console-668c9bd5c9-6gcrb" Apr 16 17:41:35.559130 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:35.559108 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/310fc5b9-2115-425a-8ff6-114dd19aeeef-console-serving-cert\") pod \"console-668c9bd5c9-6gcrb\" (UID: \"310fc5b9-2115-425a-8ff6-114dd19aeeef\") " pod="openshift-console/console-668c9bd5c9-6gcrb" Apr 16 17:41:35.565506 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:35.565477 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxd8p\" (UniqueName: \"kubernetes.io/projected/310fc5b9-2115-425a-8ff6-114dd19aeeef-kube-api-access-sxd8p\") pod \"console-668c9bd5c9-6gcrb\" (UID: \"310fc5b9-2115-425a-8ff6-114dd19aeeef\") " pod="openshift-console/console-668c9bd5c9-6gcrb" Apr 16 17:41:35.672623 
ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:35.672552 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-668c9bd5c9-6gcrb" Apr 16 17:41:35.789135 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:35.789106 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-668c9bd5c9-6gcrb"] Apr 16 17:41:35.792004 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:41:35.791968 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod310fc5b9_2115_425a_8ff6_114dd19aeeef.slice/crio-41fede8902c4d1d64f6b6ae29feef8f01b2f819f4ac2ee7e598e4c8f1180c7bc WatchSource:0}: Error finding container 41fede8902c4d1d64f6b6ae29feef8f01b2f819f4ac2ee7e598e4c8f1180c7bc: Status 404 returned error can't find the container with id 41fede8902c4d1d64f6b6ae29feef8f01b2f819f4ac2ee7e598e4c8f1180c7bc Apr 16 17:41:36.442682 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:36.442648 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-668c9bd5c9-6gcrb" event={"ID":"310fc5b9-2115-425a-8ff6-114dd19aeeef","Type":"ContainerStarted","Data":"48646cd4fff0e3032529353225c63274cf8b9bd2a068153c7a38db6b784eaae4"} Apr 16 17:41:36.442682 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:36.442685 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-668c9bd5c9-6gcrb" event={"ID":"310fc5b9-2115-425a-8ff6-114dd19aeeef","Type":"ContainerStarted","Data":"41fede8902c4d1d64f6b6ae29feef8f01b2f819f4ac2ee7e598e4c8f1180c7bc"} Apr 16 17:41:36.460181 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:36.460132 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-668c9bd5c9-6gcrb" podStartSLOduration=1.460115158 podStartE2EDuration="1.460115158s" podCreationTimestamp="2026-04-16 17:41:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:41:36.459256285 +0000 UTC m=+64.822465158" watchObservedRunningTime="2026-04-16 17:41:36.460115158 +0000 UTC m=+64.823324031" Apr 16 17:41:36.697278 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:36.697200 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-5d457dc5fd-vdzvv" Apr 16 17:41:36.697278 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:36.697245 2560 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5d457dc5fd-vdzvv" Apr 16 17:41:36.702137 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:36.702117 2560 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5d457dc5fd-vdzvv" Apr 16 17:41:37.449364 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:37.449339 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5d457dc5fd-vdzvv" Apr 16 17:41:37.878270 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:37.878240 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47012ffa-3deb-41b8-b770-fc4db562d87e-metrics-certs\") pod \"network-metrics-daemon-lx5nt\" (UID: \"47012ffa-3deb-41b8-b770-fc4db562d87e\") " pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:41:37.880715 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:37.880699 2560 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Apr 16 17:41:37.890810 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:37.890783 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47012ffa-3deb-41b8-b770-fc4db562d87e-metrics-certs\") pod \"network-metrics-daemon-lx5nt\" (UID: \"47012ffa-3deb-41b8-b770-fc4db562d87e\") " pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:41:37.979424 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:37.979384 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sghmd\" (UniqueName: \"kubernetes.io/projected/29dc29ef-4848-44b6-bfa3-4a7545e874ce-kube-api-access-sghmd\") pod \"network-check-target-cszgd\" (UID: \"29dc29ef-4848-44b6-bfa3-4a7545e874ce\") " pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:41:37.982218 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:37.982198 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Apr 16 17:41:37.988022 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:37.988002 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-jmq5l\"" Apr 16 17:41:37.992435 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:37.992417 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Apr 16 17:41:37.997013 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:37.996999 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lx5nt" Apr 16 17:41:38.003229 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:38.003209 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sghmd\" (UniqueName: \"kubernetes.io/projected/29dc29ef-4848-44b6-bfa3-4a7545e874ce-kube-api-access-sghmd\") pod \"network-check-target-cszgd\" (UID: \"29dc29ef-4848-44b6-bfa3-4a7545e874ce\") " pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:41:38.119651 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:38.119615 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lx5nt"] Apr 16 17:41:38.123255 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:41:38.123178 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47012ffa_3deb_41b8_b770_fc4db562d87e.slice/crio-cfd3179545f510a7b8d8ab71aaf0f88f86d1c94c58eb5c68352aa5cb01b63dd9 WatchSource:0}: Error finding container cfd3179545f510a7b8d8ab71aaf0f88f86d1c94c58eb5c68352aa5cb01b63dd9: Status 404 returned error can't find the container with id cfd3179545f510a7b8d8ab71aaf0f88f86d1c94c58eb5c68352aa5cb01b63dd9 Apr 16 17:41:38.282660 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:38.282633 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-wmwn6\"" Apr 16 17:41:38.291449 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:38.291422 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:41:38.426547 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:38.424875 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-cszgd"] Apr 16 17:41:38.431512 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:41:38.431488 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29dc29ef_4848_44b6_bfa3_4a7545e874ce.slice/crio-4bc41e2bb1a8d466f3586b2359f816aa6510ed139c92f39453c8293404b97fc5 WatchSource:0}: Error finding container 4bc41e2bb1a8d466f3586b2359f816aa6510ed139c92f39453c8293404b97fc5: Status 404 returned error can't find the container with id 4bc41e2bb1a8d466f3586b2359f816aa6510ed139c92f39453c8293404b97fc5 Apr 16 17:41:38.448327 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:38.448292 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-cszgd" event={"ID":"29dc29ef-4848-44b6-bfa3-4a7545e874ce","Type":"ContainerStarted","Data":"4bc41e2bb1a8d466f3586b2359f816aa6510ed139c92f39453c8293404b97fc5"} Apr 16 17:41:38.449282 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:38.449261 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lx5nt" event={"ID":"47012ffa-3deb-41b8-b770-fc4db562d87e","Type":"ContainerStarted","Data":"cfd3179545f510a7b8d8ab71aaf0f88f86d1c94c58eb5c68352aa5cb01b63dd9"} Apr 16 17:41:39.457944 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:39.457869 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lx5nt" event={"ID":"47012ffa-3deb-41b8-b770-fc4db562d87e","Type":"ContainerStarted","Data":"b27dc3e034b41b4e3a2012aa01da94f0ddec50c2e90d5db63324ed3757c3c432"} Apr 16 17:41:40.463691 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:40.463647 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lx5nt" event={"ID":"47012ffa-3deb-41b8-b770-fc4db562d87e","Type":"ContainerStarted","Data":"c5178a7f003b432b361d3c8cbf3783c8991152068f5c67c8c291d9f0823f3bd4"} Apr 16 17:41:40.482905 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:40.482857 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-lx5nt" podStartSLOduration=67.332899537 podStartE2EDuration="1m8.482819785s" podCreationTimestamp="2026-04-16 17:40:32 +0000 UTC" firstStartedPulling="2026-04-16 17:41:38.125358441 +0000 UTC m=+66.488567294" lastFinishedPulling="2026-04-16 17:41:39.275278675 +0000 UTC m=+67.638487542" observedRunningTime="2026-04-16 17:41:40.482084768 +0000 UTC m=+68.845293642" watchObservedRunningTime="2026-04-16 17:41:40.482819785 +0000 UTC m=+68.846028658" Apr 16 17:41:41.367853 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:41.367756 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-c8cf4fc8d-nd8l8" Apr 16 17:41:41.468302 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:41.468266 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-cszgd" event={"ID":"29dc29ef-4848-44b6-bfa3-4a7545e874ce","Type":"ContainerStarted","Data":"9263d4b7c240a553b63861181b65d8cb69efa43ea1a99e8ad3e32971c83e646c"} Apr 16 17:41:41.468668 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:41.468458 2560 kubelet.go:2658] "SyncLoop 
(probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:41:41.489021 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:41.488968 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-target-cszgd" podStartSLOduration=66.804279 podStartE2EDuration="1m9.488950852s" podCreationTimestamp="2026-04-16 17:40:32 +0000 UTC" firstStartedPulling="2026-04-16 17:41:38.433284597 +0000 UTC m=+66.796493451" lastFinishedPulling="2026-04-16 17:41:41.117956438 +0000 UTC m=+69.481165303" observedRunningTime="2026-04-16 17:41:41.487999823 +0000 UTC m=+69.851208707" watchObservedRunningTime="2026-04-16 17:41:41.488950852 +0000 UTC m=+69.852159728" Apr 16 17:41:44.742199 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:44.742152 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5-original-pull-secret\") pod \"global-pull-secret-syncer-hzdjn\" (UID: \"0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5\") " pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:41:44.745302 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:44.745282 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"original-pull-secret\"" Apr 16 17:41:44.755284 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:44.755255 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5-original-pull-secret\") pod \"global-pull-secret-syncer-hzdjn\" (UID: \"0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5\") " pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:41:44.891488 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:44.891456 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-hzdjn" Apr 16 17:41:45.009338 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:45.009315 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-hzdjn"] Apr 16 17:41:45.011602 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:41:45.011580 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fb1c6cc_5a6d_4f3f_95f2_3f46be10eda5.slice/crio-ad5d00718eb1256b38e5d7dd57dd8f7253e51d6199be9437145f76d051e7ba9c WatchSource:0}: Error finding container ad5d00718eb1256b38e5d7dd57dd8f7253e51d6199be9437145f76d051e7ba9c: Status 404 returned error can't find the container with id ad5d00718eb1256b38e5d7dd57dd8f7253e51d6199be9437145f76d051e7ba9c Apr 16 17:41:45.480395 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:45.480360 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-hzdjn" event={"ID":"0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5","Type":"ContainerStarted","Data":"ad5d00718eb1256b38e5d7dd57dd8f7253e51d6199be9437145f76d051e7ba9c"} Apr 16 17:41:45.673020 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:45.672986 2560 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-668c9bd5c9-6gcrb" Apr 16 17:41:45.673020 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:45.673027 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-668c9bd5c9-6gcrb" Apr 16 17:41:45.678689 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:45.678665 2560 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-668c9bd5c9-6gcrb" Apr 16 17:41:46.488301 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:46.488268 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-668c9bd5c9-6gcrb" Apr 16 17:41:46.546799 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:46.546768 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5d457dc5fd-vdzvv"] Apr 16 17:41:49.496766 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:49.496723 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-hzdjn" event={"ID":"0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5","Type":"ContainerStarted","Data":"e08c0ab88c326e4d99e9eb43f72208beffd59eed6978836137a3829701fb1122"} Apr 16 17:41:49.515231 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:41:49.515171 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/global-pull-secret-syncer-hzdjn" podStartSLOduration=66.033499428 podStartE2EDuration="1m9.515153303s" podCreationTimestamp="2026-04-16 17:40:40 +0000 UTC" firstStartedPulling="2026-04-16 17:41:45.013327343 +0000 UTC m=+73.376536194" lastFinishedPulling="2026-04-16 17:41:48.494981217 +0000 UTC m=+76.858190069" observedRunningTime="2026-04-16 17:41:49.51391143 +0000 UTC m=+77.877120303" watchObservedRunningTime="2026-04-16 17:41:49.515153303 +0000 UTC m=+77.878362176" Apr 16 17:42:11.572542 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:11.572479 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-console/console-5d457dc5fd-vdzvv" podUID="a8248209-1c6a-4310-975e-5d3dd74f7b55" containerName="console" containerID="cri-o://70147125cbc085ff0cfba15dcf25d246a4dde206eb6398547bba44e7937e8a16" gracePeriod=15 Apr 16 
17:42:11.817925 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:11.817902 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d457dc5fd-vdzvv_a8248209-1c6a-4310-975e-5d3dd74f7b55/console/0.log" Apr 16 17:42:11.818057 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:11.817974 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d457dc5fd-vdzvv" Apr 16 17:42:11.860439 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:11.860353 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw4vs\" (UniqueName: \"kubernetes.io/projected/a8248209-1c6a-4310-975e-5d3dd74f7b55-kube-api-access-bw4vs\") pod \"a8248209-1c6a-4310-975e-5d3dd74f7b55\" (UID: \"a8248209-1c6a-4310-975e-5d3dd74f7b55\") " Apr 16 17:42:11.860439 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:11.860393 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a8248209-1c6a-4310-975e-5d3dd74f7b55-service-ca\") pod \"a8248209-1c6a-4310-975e-5d3dd74f7b55\" (UID: \"a8248209-1c6a-4310-975e-5d3dd74f7b55\") " Apr 16 17:42:11.860439 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:11.860422 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a8248209-1c6a-4310-975e-5d3dd74f7b55-console-oauth-config\") pod \"a8248209-1c6a-4310-975e-5d3dd74f7b55\" (UID: \"a8248209-1c6a-4310-975e-5d3dd74f7b55\") " Apr 16 17:42:11.860661 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:11.860551 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a8248209-1c6a-4310-975e-5d3dd74f7b55-console-config\") pod \"a8248209-1c6a-4310-975e-5d3dd74f7b55\" (UID: \"a8248209-1c6a-4310-975e-5d3dd74f7b55\") " Apr 16 17:42:11.860661 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:11.860592 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a8248209-1c6a-4310-975e-5d3dd74f7b55-oauth-serving-cert\") pod \"a8248209-1c6a-4310-975e-5d3dd74f7b55\" (UID: \"a8248209-1c6a-4310-975e-5d3dd74f7b55\") " Apr 16 17:42:11.860661 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:11.860632 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a8248209-1c6a-4310-975e-5d3dd74f7b55-console-serving-cert\") pod \"a8248209-1c6a-4310-975e-5d3dd74f7b55\" (UID: \"a8248209-1c6a-4310-975e-5d3dd74f7b55\") " Apr 16 17:42:11.860933 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:11.860905 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8248209-1c6a-4310-975e-5d3dd74f7b55-service-ca" (OuterVolumeSpecName: "service-ca") pod "a8248209-1c6a-4310-975e-5d3dd74f7b55" (UID: "a8248209-1c6a-4310-975e-5d3dd74f7b55"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 17:42:11.861061 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:11.860969 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8248209-1c6a-4310-975e-5d3dd74f7b55-console-config" (OuterVolumeSpecName: "console-config") pod "a8248209-1c6a-4310-975e-5d3dd74f7b55" (UID: "a8248209-1c6a-4310-975e-5d3dd74f7b55"). 
InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 17:42:11.861134 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:11.861074 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8248209-1c6a-4310-975e-5d3dd74f7b55-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a8248209-1c6a-4310-975e-5d3dd74f7b55" (UID: "a8248209-1c6a-4310-975e-5d3dd74f7b55"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 17:42:11.863179 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:11.863151 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8248209-1c6a-4310-975e-5d3dd74f7b55-kube-api-access-bw4vs" (OuterVolumeSpecName: "kube-api-access-bw4vs") pod "a8248209-1c6a-4310-975e-5d3dd74f7b55" (UID: "a8248209-1c6a-4310-975e-5d3dd74f7b55"). InnerVolumeSpecName "kube-api-access-bw4vs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 17:42:11.863326 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:11.863166 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8248209-1c6a-4310-975e-5d3dd74f7b55-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a8248209-1c6a-4310-975e-5d3dd74f7b55" (UID: "a8248209-1c6a-4310-975e-5d3dd74f7b55"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 17:42:11.863326 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:11.863201 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8248209-1c6a-4310-975e-5d3dd74f7b55-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a8248209-1c6a-4310-975e-5d3dd74f7b55" (UID: "a8248209-1c6a-4310-975e-5d3dd74f7b55"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 17:42:11.961518 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:11.961484 2560 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a8248209-1c6a-4310-975e-5d3dd74f7b55-console-serving-cert\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:42:11.961518 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:11.961510 2560 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bw4vs\" (UniqueName: \"kubernetes.io/projected/a8248209-1c6a-4310-975e-5d3dd74f7b55-kube-api-access-bw4vs\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:42:11.961518 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:11.961521 2560 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a8248209-1c6a-4310-975e-5d3dd74f7b55-service-ca\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:42:11.961744 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:11.961532 2560 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a8248209-1c6a-4310-975e-5d3dd74f7b55-console-oauth-config\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:42:11.961744 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:11.961542 2560 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a8248209-1c6a-4310-975e-5d3dd74f7b55-console-config\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:42:11.961744 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:11.961553 2560 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a8248209-1c6a-4310-975e-5d3dd74f7b55-oauth-serving-cert\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:42:12.474003 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:12.473920 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-cszgd" Apr 16 17:42:12.563014 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:12.562989 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d457dc5fd-vdzvv_a8248209-1c6a-4310-975e-5d3dd74f7b55/console/0.log" Apr 16 17:42:12.563191 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:12.563029 2560 generic.go:358] "Generic (PLEG): container finished" podID="a8248209-1c6a-4310-975e-5d3dd74f7b55" containerID="70147125cbc085ff0cfba15dcf25d246a4dde206eb6398547bba44e7937e8a16" exitCode=2 Apr 16 17:42:12.563191 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:12.563069 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d457dc5fd-vdzvv" event={"ID":"a8248209-1c6a-4310-975e-5d3dd74f7b55","Type":"ContainerDied","Data":"70147125cbc085ff0cfba15dcf25d246a4dde206eb6398547bba44e7937e8a16"} Apr 16 17:42:12.563191 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:12.563099 2560 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5d457dc5fd-vdzvv" Apr 16 17:42:12.563191 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:12.563111 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d457dc5fd-vdzvv" event={"ID":"a8248209-1c6a-4310-975e-5d3dd74f7b55","Type":"ContainerDied","Data":"508a8c97d7f8891fb6815b57fd6b65b23d4a760ee638489bcc73f9ab74a57277"} Apr 16 17:42:12.563191 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:12.563138 2560 scope.go:117] "RemoveContainer" containerID="70147125cbc085ff0cfba15dcf25d246a4dde206eb6398547bba44e7937e8a16" Apr 16 17:42:12.572266 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:12.572249 2560 scope.go:117] "RemoveContainer" containerID="70147125cbc085ff0cfba15dcf25d246a4dde206eb6398547bba44e7937e8a16" Apr 16 17:42:12.572532 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:42:12.572512 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70147125cbc085ff0cfba15dcf25d246a4dde206eb6398547bba44e7937e8a16\": container with ID starting with 70147125cbc085ff0cfba15dcf25d246a4dde206eb6398547bba44e7937e8a16 not found: ID does not exist" containerID="70147125cbc085ff0cfba15dcf25d246a4dde206eb6398547bba44e7937e8a16" Apr 16 17:42:12.572582 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:12.572541 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70147125cbc085ff0cfba15dcf25d246a4dde206eb6398547bba44e7937e8a16"} err="failed to get container status \"70147125cbc085ff0cfba15dcf25d246a4dde206eb6398547bba44e7937e8a16\": rpc error: code = NotFound desc = could not find container \"70147125cbc085ff0cfba15dcf25d246a4dde206eb6398547bba44e7937e8a16\": container with ID starting with 70147125cbc085ff0cfba15dcf25d246a4dde206eb6398547bba44e7937e8a16 not found: ID does not exist" Apr 16 17:42:12.582365 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:12.582335 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5d457dc5fd-vdzvv"] Apr 16 17:42:12.588571 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:12.588550 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5d457dc5fd-vdzvv"] Apr 16 17:42:14.161291 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:14.161254 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8248209-1c6a-4310-975e-5d3dd74f7b55" path="/var/lib/kubelet/pods/a8248209-1c6a-4310-975e-5d3dd74f7b55/volumes" Apr 16 17:42:15.104781 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:15.104734 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 16 17:42:15.105866 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:15.105790 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="30da5db6-df8d-4158-9c24-37383fad0034" containerName="alertmanager" containerID="cri-o://2e01b993c564460d1c882e3567b53ee216af9722a255806d5d428b366d960ec8" gracePeriod=120 Apr 16 17:42:15.108884 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:15.106097 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="30da5db6-df8d-4158-9c24-37383fad0034" containerName="kube-rbac-proxy" containerID="cri-o://d58744086dd88ffdd81437335c7427f94abe59808dbaf6644909183f91fc77dc" gracePeriod=120 Apr 16 17:42:15.108884 ip-10-0-134-233 
kubenswrapper[2560]: I0416 17:42:15.106229 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="30da5db6-df8d-4158-9c24-37383fad0034" containerName="prom-label-proxy" containerID="cri-o://20b615ef0c4546160ce8566a0118f25524a9c3dd2eeb91873eb4a9f02f020696" gracePeriod=120 Apr 16 17:42:15.108884 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:15.106244 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="30da5db6-df8d-4158-9c24-37383fad0034" containerName="kube-rbac-proxy-web" containerID="cri-o://d9aeea1274456dc428705fe027bde83cefabe1bdb46de11d77b6f590bb18f1e3" gracePeriod=120 Apr 16 17:42:15.108884 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:15.106317 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="30da5db6-df8d-4158-9c24-37383fad0034" containerName="kube-rbac-proxy-metric" containerID="cri-o://24e522c94f53a9c9c1d32c66d4028bbcc3a03f2570d6d4665b8dd1d275c5727a" gracePeriod=120 Apr 16 17:42:15.108884 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:15.106319 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="30da5db6-df8d-4158-9c24-37383fad0034" containerName="config-reloader" containerID="cri-o://d0fee3ae8bd2c67317cf84e96b94f1d8909715218efff01a4fb8a3fc89fe7bd5" gracePeriod=120 Apr 16 17:42:15.576058 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:15.576026 2560 generic.go:358] "Generic (PLEG): container finished" podID="30da5db6-df8d-4158-9c24-37383fad0034" containerID="20b615ef0c4546160ce8566a0118f25524a9c3dd2eeb91873eb4a9f02f020696" exitCode=0 Apr 16 17:42:15.576058 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:15.576052 2560 generic.go:358] "Generic (PLEG): container finished" podID="30da5db6-df8d-4158-9c24-37383fad0034" containerID="24e522c94f53a9c9c1d32c66d4028bbcc3a03f2570d6d4665b8dd1d275c5727a" exitCode=0 Apr 16 17:42:15.576058 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:15.576059 2560 generic.go:358] "Generic (PLEG): container finished" podID="30da5db6-df8d-4158-9c24-37383fad0034" containerID="d58744086dd88ffdd81437335c7427f94abe59808dbaf6644909183f91fc77dc" exitCode=0 Apr 16 17:42:15.576058 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:15.576065 2560 generic.go:358] "Generic (PLEG): container finished" podID="30da5db6-df8d-4158-9c24-37383fad0034" containerID="d0fee3ae8bd2c67317cf84e96b94f1d8909715218efff01a4fb8a3fc89fe7bd5" exitCode=0 Apr 16 17:42:15.576058 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:15.576070 2560 generic.go:358] "Generic (PLEG): container finished" podID="30da5db6-df8d-4158-9c24-37383fad0034" containerID="2e01b993c564460d1c882e3567b53ee216af9722a255806d5d428b366d960ec8" exitCode=0 Apr 16 17:42:15.576546 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:15.576102 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"30da5db6-df8d-4158-9c24-37383fad0034","Type":"ContainerDied","Data":"20b615ef0c4546160ce8566a0118f25524a9c3dd2eeb91873eb4a9f02f020696"} Apr 16 17:42:15.576546 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:15.576136 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"30da5db6-df8d-4158-9c24-37383fad0034","Type":"ContainerDied","Data":"24e522c94f53a9c9c1d32c66d4028bbcc3a03f2570d6d4665b8dd1d275c5727a"} 
Apr 16 17:42:15.576546 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:15.576145 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"30da5db6-df8d-4158-9c24-37383fad0034","Type":"ContainerDied","Data":"d58744086dd88ffdd81437335c7427f94abe59808dbaf6644909183f91fc77dc"} Apr 16 17:42:15.576546 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:15.576155 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"30da5db6-df8d-4158-9c24-37383fad0034","Type":"ContainerDied","Data":"d0fee3ae8bd2c67317cf84e96b94f1d8909715218efff01a4fb8a3fc89fe7bd5"} Apr 16 17:42:15.576546 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:15.576164 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"30da5db6-df8d-4158-9c24-37383fad0034","Type":"ContainerDied","Data":"2e01b993c564460d1c882e3567b53ee216af9722a255806d5d428b366d960ec8"} Apr 16 17:42:16.350957 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.350933 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.395496 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.395460 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/30da5db6-df8d-4158-9c24-37383fad0034-config-out\") pod \"30da5db6-df8d-4158-9c24-37383fad0034\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " Apr 16 17:42:16.395659 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.395519 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-secret-alertmanager-kube-rbac-proxy-metric\") pod \"30da5db6-df8d-4158-9c24-37383fad0034\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " Apr 16 17:42:16.395659 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.395558 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-secret-alertmanager-kube-rbac-proxy\") pod \"30da5db6-df8d-4158-9c24-37383fad0034\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " Apr 16 17:42:16.395659 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.395585 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-cluster-tls-config\") pod \"30da5db6-df8d-4158-9c24-37383fad0034\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " Apr 16 17:42:16.395993 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.395906 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2czmp\" (UniqueName: \"kubernetes.io/projected/30da5db6-df8d-4158-9c24-37383fad0034-kube-api-access-2czmp\") pod \"30da5db6-df8d-4158-9c24-37383fad0034\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " Apr 16 17:42:16.396223 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.396166 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/30da5db6-df8d-4158-9c24-37383fad0034-tls-assets\") pod \"30da5db6-df8d-4158-9c24-37383fad0034\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " Apr 16 
17:42:16.396769 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.396563 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-web-config\") pod \"30da5db6-df8d-4158-9c24-37383fad0034\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " Apr 16 17:42:16.396769 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.396608 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30da5db6-df8d-4158-9c24-37383fad0034-alertmanager-trusted-ca-bundle\") pod \"30da5db6-df8d-4158-9c24-37383fad0034\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " Apr 16 17:42:16.396769 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.396647 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/30da5db6-df8d-4158-9c24-37383fad0034-alertmanager-main-db\") pod \"30da5db6-df8d-4158-9c24-37383fad0034\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " Apr 16 17:42:16.396769 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.396676 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-config-volume\") pod \"30da5db6-df8d-4158-9c24-37383fad0034\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " Apr 16 17:42:16.396769 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.396714 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-secret-alertmanager-main-tls\") pod \"30da5db6-df8d-4158-9c24-37383fad0034\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " Apr 16 17:42:16.397129 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.397088 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30da5db6-df8d-4158-9c24-37383fad0034-alertmanager-main-db" (OuterVolumeSpecName: "alertmanager-main-db") pod "30da5db6-df8d-4158-9c24-37383fad0034" (UID: "30da5db6-df8d-4158-9c24-37383fad0034"). InnerVolumeSpecName "alertmanager-main-db". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 17:42:16.397802 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.397222 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-secret-alertmanager-kube-rbac-proxy-web\") pod \"30da5db6-df8d-4158-9c24-37383fad0034\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " Apr 16 17:42:16.397802 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.397277 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/30da5db6-df8d-4158-9c24-37383fad0034-metrics-client-ca\") pod \"30da5db6-df8d-4158-9c24-37383fad0034\" (UID: \"30da5db6-df8d-4158-9c24-37383fad0034\") " Apr 16 17:42:16.397802 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.397472 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30da5db6-df8d-4158-9c24-37383fad0034-alertmanager-trusted-ca-bundle" (OuterVolumeSpecName: "alertmanager-trusted-ca-bundle") pod "30da5db6-df8d-4158-9c24-37383fad0034" (UID: "30da5db6-df8d-4158-9c24-37383fad0034"). InnerVolumeSpecName "alertmanager-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 17:42:16.397802 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.397525 2560 reconciler_common.go:299] "Volume detached for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/30da5db6-df8d-4158-9c24-37383fad0034-alertmanager-main-db\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:42:16.399503 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.399475 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30da5db6-df8d-4158-9c24-37383fad0034-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "30da5db6-df8d-4158-9c24-37383fad0034" (UID: "30da5db6-df8d-4158-9c24-37383fad0034"). InnerVolumeSpecName "metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 17:42:16.399878 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.399822 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-secret-alertmanager-kube-rbac-proxy-metric" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-metric") pod "30da5db6-df8d-4158-9c24-37383fad0034" (UID: "30da5db6-df8d-4158-9c24-37383fad0034"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-metric". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 17:42:16.400294 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.400265 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30da5db6-df8d-4158-9c24-37383fad0034-config-out" (OuterVolumeSpecName: "config-out") pod "30da5db6-df8d-4158-9c24-37383fad0034" (UID: "30da5db6-df8d-4158-9c24-37383fad0034"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 17:42:16.400415 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.400314 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-secret-alertmanager-kube-rbac-proxy" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy") pod "30da5db6-df8d-4158-9c24-37383fad0034" (UID: "30da5db6-df8d-4158-9c24-37383fad0034"). 
InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 17:42:16.400692 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.400654 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-secret-alertmanager-main-tls" (OuterVolumeSpecName: "secret-alertmanager-main-tls") pod "30da5db6-df8d-4158-9c24-37383fad0034" (UID: "30da5db6-df8d-4158-9c24-37383fad0034"). InnerVolumeSpecName "secret-alertmanager-main-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 17:42:16.401154 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.401116 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-secret-alertmanager-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-web") pod "30da5db6-df8d-4158-9c24-37383fad0034" (UID: "30da5db6-df8d-4158-9c24-37383fad0034"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-web". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 17:42:16.401248 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.401164 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30da5db6-df8d-4158-9c24-37383fad0034-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "30da5db6-df8d-4158-9c24-37383fad0034" (UID: "30da5db6-df8d-4158-9c24-37383fad0034"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 17:42:16.401439 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.401413 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30da5db6-df8d-4158-9c24-37383fad0034-kube-api-access-2czmp" (OuterVolumeSpecName: "kube-api-access-2czmp") pod "30da5db6-df8d-4158-9c24-37383fad0034" (UID: "30da5db6-df8d-4158-9c24-37383fad0034"). InnerVolumeSpecName "kube-api-access-2czmp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 17:42:16.401815 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.401790 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-config-volume" (OuterVolumeSpecName: "config-volume") pod "30da5db6-df8d-4158-9c24-37383fad0034" (UID: "30da5db6-df8d-4158-9c24-37383fad0034"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 17:42:16.403944 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.403877 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-cluster-tls-config" (OuterVolumeSpecName: "cluster-tls-config") pod "30da5db6-df8d-4158-9c24-37383fad0034" (UID: "30da5db6-df8d-4158-9c24-37383fad0034"). InnerVolumeSpecName "cluster-tls-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 17:42:16.410461 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.410439 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-web-config" (OuterVolumeSpecName: "web-config") pod "30da5db6-df8d-4158-9c24-37383fad0034" (UID: "30da5db6-df8d-4158-9c24-37383fad0034"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 17:42:16.497988 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.497957 2560 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-config-volume\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:42:16.497988 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.497983 2560 reconciler_common.go:299] "Volume detached for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-secret-alertmanager-main-tls\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:42:16.497988 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.497995 2560 reconciler_common.go:299] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-secret-alertmanager-kube-rbac-proxy-web\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:42:16.498197 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.498006 2560 reconciler_common.go:299] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/30da5db6-df8d-4158-9c24-37383fad0034-metrics-client-ca\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:42:16.498197 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.498015 2560 reconciler_common.go:299] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/30da5db6-df8d-4158-9c24-37383fad0034-config-out\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:42:16.498197 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.498024 2560 reconciler_common.go:299] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-secret-alertmanager-kube-rbac-proxy-metric\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:42:16.498197 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.498033 2560 reconciler_common.go:299] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-secret-alertmanager-kube-rbac-proxy\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:42:16.498197 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.498041 2560 reconciler_common.go:299] "Volume detached for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-cluster-tls-config\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:42:16.498197 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.498050 2560 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2czmp\" (UniqueName: \"kubernetes.io/projected/30da5db6-df8d-4158-9c24-37383fad0034-kube-api-access-2czmp\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:42:16.498197 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.498059 2560 reconciler_common.go:299] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/30da5db6-df8d-4158-9c24-37383fad0034-tls-assets\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:42:16.498197 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.498068 2560 reconciler_common.go:299] "Volume detached for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/30da5db6-df8d-4158-9c24-37383fad0034-web-config\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:42:16.498197 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.498076 2560 reconciler_common.go:299] "Volume detached for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30da5db6-df8d-4158-9c24-37383fad0034-alertmanager-trusted-ca-bundle\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:42:16.582432 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.582401 2560 generic.go:358] "Generic (PLEG): container finished" podID="30da5db6-df8d-4158-9c24-37383fad0034" containerID="d9aeea1274456dc428705fe027bde83cefabe1bdb46de11d77b6f590bb18f1e3" exitCode=0 Apr 16 17:42:16.582772 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.582447 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"30da5db6-df8d-4158-9c24-37383fad0034","Type":"ContainerDied","Data":"d9aeea1274456dc428705fe027bde83cefabe1bdb46de11d77b6f590bb18f1e3"} Apr 16 17:42:16.582772 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.582477 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"30da5db6-df8d-4158-9c24-37383fad0034","Type":"ContainerDied","Data":"e2c8cd0e7fdd5f7a4a92750274f4bc0eee6faa191a21d4a7ada5ac7345f054f9"} Apr 16 17:42:16.582772 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.582493 2560 scope.go:117] "RemoveContainer" containerID="20b615ef0c4546160ce8566a0118f25524a9c3dd2eeb91873eb4a9f02f020696" Apr 16 17:42:16.582772 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.582503 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.590170 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.590154 2560 scope.go:117] "RemoveContainer" containerID="24e522c94f53a9c9c1d32c66d4028bbcc3a03f2570d6d4665b8dd1d275c5727a" Apr 16 17:42:16.596907 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.596888 2560 scope.go:117] "RemoveContainer" containerID="d58744086dd88ffdd81437335c7427f94abe59808dbaf6644909183f91fc77dc" Apr 16 17:42:16.603127 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.603106 2560 scope.go:117] "RemoveContainer" containerID="d9aeea1274456dc428705fe027bde83cefabe1bdb46de11d77b6f590bb18f1e3" Apr 16 17:42:16.609339 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.609215 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 16 17:42:16.609503 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.609395 2560 scope.go:117] "RemoveContainer" containerID="d0fee3ae8bd2c67317cf84e96b94f1d8909715218efff01a4fb8a3fc89fe7bd5" Apr 16 17:42:16.614746 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.614727 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 16 17:42:16.617023 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.617006 2560 scope.go:117] "RemoveContainer" containerID="2e01b993c564460d1c882e3567b53ee216af9722a255806d5d428b366d960ec8" Apr 16 17:42:16.622988 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.622970 2560 scope.go:117] "RemoveContainer" containerID="6162f378901c985c605a363944d0b30a07d54ccf0a94cfba23ad037c4176ac17" Apr 16 17:42:16.628857 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.628828 2560 scope.go:117] "RemoveContainer" 
containerID="20b615ef0c4546160ce8566a0118f25524a9c3dd2eeb91873eb4a9f02f020696" Apr 16 17:42:16.629092 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:42:16.629075 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20b615ef0c4546160ce8566a0118f25524a9c3dd2eeb91873eb4a9f02f020696\": container with ID starting with 20b615ef0c4546160ce8566a0118f25524a9c3dd2eeb91873eb4a9f02f020696 not found: ID does not exist" containerID="20b615ef0c4546160ce8566a0118f25524a9c3dd2eeb91873eb4a9f02f020696" Apr 16 17:42:16.629157 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.629105 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20b615ef0c4546160ce8566a0118f25524a9c3dd2eeb91873eb4a9f02f020696"} err="failed to get container status \"20b615ef0c4546160ce8566a0118f25524a9c3dd2eeb91873eb4a9f02f020696\": rpc error: code = NotFound desc = could not find container \"20b615ef0c4546160ce8566a0118f25524a9c3dd2eeb91873eb4a9f02f020696\": container with ID starting with 20b615ef0c4546160ce8566a0118f25524a9c3dd2eeb91873eb4a9f02f020696 not found: ID does not exist" Apr 16 17:42:16.629157 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.629128 2560 scope.go:117] "RemoveContainer" containerID="24e522c94f53a9c9c1d32c66d4028bbcc3a03f2570d6d4665b8dd1d275c5727a" Apr 16 17:42:16.629371 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:42:16.629352 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24e522c94f53a9c9c1d32c66d4028bbcc3a03f2570d6d4665b8dd1d275c5727a\": container with ID starting with 24e522c94f53a9c9c1d32c66d4028bbcc3a03f2570d6d4665b8dd1d275c5727a not found: ID does not exist" containerID="24e522c94f53a9c9c1d32c66d4028bbcc3a03f2570d6d4665b8dd1d275c5727a" Apr 16 17:42:16.629410 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.629379 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24e522c94f53a9c9c1d32c66d4028bbcc3a03f2570d6d4665b8dd1d275c5727a"} err="failed to get container status \"24e522c94f53a9c9c1d32c66d4028bbcc3a03f2570d6d4665b8dd1d275c5727a\": rpc error: code = NotFound desc = could not find container \"24e522c94f53a9c9c1d32c66d4028bbcc3a03f2570d6d4665b8dd1d275c5727a\": container with ID starting with 24e522c94f53a9c9c1d32c66d4028bbcc3a03f2570d6d4665b8dd1d275c5727a not found: ID does not exist" Apr 16 17:42:16.629410 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.629396 2560 scope.go:117] "RemoveContainer" containerID="d58744086dd88ffdd81437335c7427f94abe59808dbaf6644909183f91fc77dc" Apr 16 17:42:16.629591 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:42:16.629573 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d58744086dd88ffdd81437335c7427f94abe59808dbaf6644909183f91fc77dc\": container with ID starting with d58744086dd88ffdd81437335c7427f94abe59808dbaf6644909183f91fc77dc not found: ID does not exist" containerID="d58744086dd88ffdd81437335c7427f94abe59808dbaf6644909183f91fc77dc" Apr 16 17:42:16.629652 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.629597 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d58744086dd88ffdd81437335c7427f94abe59808dbaf6644909183f91fc77dc"} err="failed to get container status \"d58744086dd88ffdd81437335c7427f94abe59808dbaf6644909183f91fc77dc\": rpc error: code = NotFound desc = could not 
find container \"d58744086dd88ffdd81437335c7427f94abe59808dbaf6644909183f91fc77dc\": container with ID starting with d58744086dd88ffdd81437335c7427f94abe59808dbaf6644909183f91fc77dc not found: ID does not exist" Apr 16 17:42:16.629652 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.629619 2560 scope.go:117] "RemoveContainer" containerID="d9aeea1274456dc428705fe027bde83cefabe1bdb46de11d77b6f590bb18f1e3" Apr 16 17:42:16.629852 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:42:16.629822 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9aeea1274456dc428705fe027bde83cefabe1bdb46de11d77b6f590bb18f1e3\": container with ID starting with d9aeea1274456dc428705fe027bde83cefabe1bdb46de11d77b6f590bb18f1e3 not found: ID does not exist" containerID="d9aeea1274456dc428705fe027bde83cefabe1bdb46de11d77b6f590bb18f1e3" Apr 16 17:42:16.629946 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.629857 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9aeea1274456dc428705fe027bde83cefabe1bdb46de11d77b6f590bb18f1e3"} err="failed to get container status \"d9aeea1274456dc428705fe027bde83cefabe1bdb46de11d77b6f590bb18f1e3\": rpc error: code = NotFound desc = could not find container \"d9aeea1274456dc428705fe027bde83cefabe1bdb46de11d77b6f590bb18f1e3\": container with ID starting with d9aeea1274456dc428705fe027bde83cefabe1bdb46de11d77b6f590bb18f1e3 not found: ID does not exist" Apr 16 17:42:16.629946 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.629870 2560 scope.go:117] "RemoveContainer" containerID="d0fee3ae8bd2c67317cf84e96b94f1d8909715218efff01a4fb8a3fc89fe7bd5" Apr 16 17:42:16.630085 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:42:16.630068 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0fee3ae8bd2c67317cf84e96b94f1d8909715218efff01a4fb8a3fc89fe7bd5\": container with ID starting with d0fee3ae8bd2c67317cf84e96b94f1d8909715218efff01a4fb8a3fc89fe7bd5 not found: ID does not exist" containerID="d0fee3ae8bd2c67317cf84e96b94f1d8909715218efff01a4fb8a3fc89fe7bd5" Apr 16 17:42:16.630140 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.630092 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0fee3ae8bd2c67317cf84e96b94f1d8909715218efff01a4fb8a3fc89fe7bd5"} err="failed to get container status \"d0fee3ae8bd2c67317cf84e96b94f1d8909715218efff01a4fb8a3fc89fe7bd5\": rpc error: code = NotFound desc = could not find container \"d0fee3ae8bd2c67317cf84e96b94f1d8909715218efff01a4fb8a3fc89fe7bd5\": container with ID starting with d0fee3ae8bd2c67317cf84e96b94f1d8909715218efff01a4fb8a3fc89fe7bd5 not found: ID does not exist" Apr 16 17:42:16.630140 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.630112 2560 scope.go:117] "RemoveContainer" containerID="2e01b993c564460d1c882e3567b53ee216af9722a255806d5d428b366d960ec8" Apr 16 17:42:16.630316 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:42:16.630302 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e01b993c564460d1c882e3567b53ee216af9722a255806d5d428b366d960ec8\": container with ID starting with 2e01b993c564460d1c882e3567b53ee216af9722a255806d5d428b366d960ec8 not found: ID does not exist" containerID="2e01b993c564460d1c882e3567b53ee216af9722a255806d5d428b366d960ec8" Apr 16 17:42:16.630372 ip-10-0-134-233 kubenswrapper[2560]: I0416 
17:42:16.630321 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e01b993c564460d1c882e3567b53ee216af9722a255806d5d428b366d960ec8"} err="failed to get container status \"2e01b993c564460d1c882e3567b53ee216af9722a255806d5d428b366d960ec8\": rpc error: code = NotFound desc = could not find container \"2e01b993c564460d1c882e3567b53ee216af9722a255806d5d428b366d960ec8\": container with ID starting with 2e01b993c564460d1c882e3567b53ee216af9722a255806d5d428b366d960ec8 not found: ID does not exist" Apr 16 17:42:16.630372 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.630339 2560 scope.go:117] "RemoveContainer" containerID="6162f378901c985c605a363944d0b30a07d54ccf0a94cfba23ad037c4176ac17" Apr 16 17:42:16.630547 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:42:16.630531 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6162f378901c985c605a363944d0b30a07d54ccf0a94cfba23ad037c4176ac17\": container with ID starting with 6162f378901c985c605a363944d0b30a07d54ccf0a94cfba23ad037c4176ac17 not found: ID does not exist" containerID="6162f378901c985c605a363944d0b30a07d54ccf0a94cfba23ad037c4176ac17" Apr 16 17:42:16.630581 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.630551 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6162f378901c985c605a363944d0b30a07d54ccf0a94cfba23ad037c4176ac17"} err="failed to get container status \"6162f378901c985c605a363944d0b30a07d54ccf0a94cfba23ad037c4176ac17\": rpc error: code = NotFound desc = could not find container \"6162f378901c985c605a363944d0b30a07d54ccf0a94cfba23ad037c4176ac17\": container with ID starting with 6162f378901c985c605a363944d0b30a07d54ccf0a94cfba23ad037c4176ac17 not found: ID does not exist" Apr 16 17:42:16.645525 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.645508 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 16 17:42:16.645775 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.645762 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="30da5db6-df8d-4158-9c24-37383fad0034" containerName="kube-rbac-proxy" Apr 16 17:42:16.645870 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.645776 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="30da5db6-df8d-4158-9c24-37383fad0034" containerName="kube-rbac-proxy" Apr 16 17:42:16.645870 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.645785 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="30da5db6-df8d-4158-9c24-37383fad0034" containerName="alertmanager" Apr 16 17:42:16.645870 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.645791 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="30da5db6-df8d-4158-9c24-37383fad0034" containerName="alertmanager" Apr 16 17:42:16.645870 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.645797 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="30da5db6-df8d-4158-9c24-37383fad0034" containerName="config-reloader" Apr 16 17:42:16.645870 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.645803 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="30da5db6-df8d-4158-9c24-37383fad0034" containerName="config-reloader" Apr 16 17:42:16.645870 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.645808 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="30da5db6-df8d-4158-9c24-37383fad0034" containerName="prom-label-proxy" Apr 16 17:42:16.645870 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.645813 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="30da5db6-df8d-4158-9c24-37383fad0034" containerName="prom-label-proxy" Apr 16 17:42:16.645870 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.645846 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="30da5db6-df8d-4158-9c24-37383fad0034" containerName="kube-rbac-proxy-metric" Apr 16 17:42:16.645870 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.645852 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="30da5db6-df8d-4158-9c24-37383fad0034" containerName="kube-rbac-proxy-metric" Apr 16 17:42:16.645870 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.645859 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="30da5db6-df8d-4158-9c24-37383fad0034" containerName="init-config-reloader" Apr 16 17:42:16.645870 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.645865 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="30da5db6-df8d-4158-9c24-37383fad0034" containerName="init-config-reloader" Apr 16 17:42:16.645870 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.645872 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a8248209-1c6a-4310-975e-5d3dd74f7b55" containerName="console" Apr 16 17:42:16.645870 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.645877 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8248209-1c6a-4310-975e-5d3dd74f7b55" containerName="console" Apr 16 17:42:16.646312 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.645884 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="30da5db6-df8d-4158-9c24-37383fad0034" containerName="kube-rbac-proxy-web" Apr 16 17:42:16.646312 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.645889 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="30da5db6-df8d-4158-9c24-37383fad0034" containerName="kube-rbac-proxy-web" Apr 16 17:42:16.646312 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.645928 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="30da5db6-df8d-4158-9c24-37383fad0034" containerName="kube-rbac-proxy-metric" Apr 16 17:42:16.646312 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.645936 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="30da5db6-df8d-4158-9c24-37383fad0034" containerName="kube-rbac-proxy" Apr 16 17:42:16.646312 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.645943 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="30da5db6-df8d-4158-9c24-37383fad0034" containerName="config-reloader" Apr 16 17:42:16.646312 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.645950 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="30da5db6-df8d-4158-9c24-37383fad0034" containerName="prom-label-proxy" Apr 16 17:42:16.646312 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.645957 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="30da5db6-df8d-4158-9c24-37383fad0034" containerName="kube-rbac-proxy-web" Apr 16 17:42:16.646312 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.645963 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="a8248209-1c6a-4310-975e-5d3dd74f7b55" containerName="console" Apr 16 17:42:16.646312 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.645969 2560 memory_manager.go:356] 
"RemoveStaleState removing state" podUID="30da5db6-df8d-4158-9c24-37383fad0034" containerName="alertmanager" Apr 16 17:42:16.650989 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.650974 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.653448 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.653427 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-web-config\"" Apr 16 17:42:16.653552 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.653494 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy-metric\"" Apr 16 17:42:16.653595 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.653547 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy-web\"" Apr 16 17:42:16.653595 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.653562 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-cluster-tls-config\"" Apr 16 17:42:16.653595 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.653565 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-generated\"" Apr 16 17:42:16.653681 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.653499 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-tls\"" Apr 16 17:42:16.653989 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.653944 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-dockercfg-z696p\"" Apr 16 17:42:16.653989 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.653962 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-tls-assets-0\"" Apr 16 17:42:16.654322 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.654309 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy\"" Apr 16 17:42:16.658339 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.658322 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"alertmanager-trusted-ca-bundle\"" Apr 16 17:42:16.663617 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.663599 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 16 17:42:16.700186 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.700163 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5c6e5713-9364-457b-a7c0-83a04ea458a8-config-out\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.700295 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.700194 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/5c6e5713-9364-457b-a7c0-83a04ea458a8-config-volume\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 
17:42:16.700295 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.700212 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wbgb\" (UniqueName: \"kubernetes.io/projected/5c6e5713-9364-457b-a7c0-83a04ea458a8-kube-api-access-9wbgb\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.700295 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.700231 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5c6e5713-9364-457b-a7c0-83a04ea458a8-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.700295 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.700258 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/5c6e5713-9364-457b-a7c0-83a04ea458a8-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.700295 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.700291 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5c6e5713-9364-457b-a7c0-83a04ea458a8-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.700459 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.700312 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/5c6e5713-9364-457b-a7c0-83a04ea458a8-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.700459 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.700379 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5c6e5713-9364-457b-a7c0-83a04ea458a8-tls-assets\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.700459 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.700411 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5c6e5713-9364-457b-a7c0-83a04ea458a8-web-config\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.700459 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.700435 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c6e5713-9364-457b-a7c0-83a04ea458a8-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.700459 ip-10-0-134-233 
kubenswrapper[2560]: I0416 17:42:16.700456 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/5c6e5713-9364-457b-a7c0-83a04ea458a8-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.700639 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.700475 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/5c6e5713-9364-457b-a7c0-83a04ea458a8-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.700639 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.700513 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5c6e5713-9364-457b-a7c0-83a04ea458a8-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.801129 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.801094 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5c6e5713-9364-457b-a7c0-83a04ea458a8-tls-assets\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.801295 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.801140 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5c6e5713-9364-457b-a7c0-83a04ea458a8-web-config\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.801295 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.801170 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c6e5713-9364-457b-a7c0-83a04ea458a8-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.801295 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.801198 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/5c6e5713-9364-457b-a7c0-83a04ea458a8-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.801295 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.801248 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/5c6e5713-9364-457b-a7c0-83a04ea458a8-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.801477 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.801372 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: 
\"kubernetes.io/secret/5c6e5713-9364-457b-a7c0-83a04ea458a8-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.801477 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.801425 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5c6e5713-9364-457b-a7c0-83a04ea458a8-config-out\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.802744 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.801455 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/5c6e5713-9364-457b-a7c0-83a04ea458a8-config-volume\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.802744 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.801877 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9wbgb\" (UniqueName: \"kubernetes.io/projected/5c6e5713-9364-457b-a7c0-83a04ea458a8-kube-api-access-9wbgb\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.802744 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.801933 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5c6e5713-9364-457b-a7c0-83a04ea458a8-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.802744 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.801972 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/5c6e5713-9364-457b-a7c0-83a04ea458a8-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.802744 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.802020 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5c6e5713-9364-457b-a7c0-83a04ea458a8-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.802744 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.802060 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/5c6e5713-9364-457b-a7c0-83a04ea458a8-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.802744 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.802411 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/5c6e5713-9364-457b-a7c0-83a04ea458a8-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" 
Apr 16 17:42:16.802744 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.802418 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c6e5713-9364-457b-a7c0-83a04ea458a8-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.804567 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.804541 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5c6e5713-9364-457b-a7c0-83a04ea458a8-web-config\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.804658 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.804630 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5c6e5713-9364-457b-a7c0-83a04ea458a8-tls-assets\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.804706 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.804632 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/5c6e5713-9364-457b-a7c0-83a04ea458a8-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.805483 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.805196 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5c6e5713-9364-457b-a7c0-83a04ea458a8-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.805483 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.805352 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5c6e5713-9364-457b-a7c0-83a04ea458a8-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.805483 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.805376 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5c6e5713-9364-457b-a7c0-83a04ea458a8-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.805682 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.805655 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/5c6e5713-9364-457b-a7c0-83a04ea458a8-config-volume\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.806759 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.806731 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: 
\"kubernetes.io/secret/5c6e5713-9364-457b-a7c0-83a04ea458a8-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.806936 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.806913 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/5c6e5713-9364-457b-a7c0-83a04ea458a8-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.808616 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.807874 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5c6e5713-9364-457b-a7c0-83a04ea458a8-config-out\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.812327 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.812310 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wbgb\" (UniqueName: \"kubernetes.io/projected/5c6e5713-9364-457b-a7c0-83a04ea458a8-kube-api-access-9wbgb\") pod \"alertmanager-main-0\" (UID: \"5c6e5713-9364-457b-a7c0-83a04ea458a8\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:16.960289 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:16.960183 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Apr 16 17:42:17.087318 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:17.087281 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 16 17:42:17.090458 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:42:17.090426 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c6e5713_9364_457b_a7c0_83a04ea458a8.slice/crio-c12100c12e931060cc667ae058491c8add1cd1da6391132d5ffd9f7cbcb95671 WatchSource:0}: Error finding container c12100c12e931060cc667ae058491c8add1cd1da6391132d5ffd9f7cbcb95671: Status 404 returned error can't find the container with id c12100c12e931060cc667ae058491c8add1cd1da6391132d5ffd9f7cbcb95671 Apr 16 17:42:17.587111 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:17.587068 2560 generic.go:358] "Generic (PLEG): container finished" podID="5c6e5713-9364-457b-a7c0-83a04ea458a8" containerID="544507ab3e23064af9e3c92281b12fd9eebdb1419b984dd0a37a445f37c87bbf" exitCode=0 Apr 16 17:42:17.587581 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:17.587154 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5c6e5713-9364-457b-a7c0-83a04ea458a8","Type":"ContainerDied","Data":"544507ab3e23064af9e3c92281b12fd9eebdb1419b984dd0a37a445f37c87bbf"} Apr 16 17:42:17.587581 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:17.587187 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5c6e5713-9364-457b-a7c0-83a04ea458a8","Type":"ContainerStarted","Data":"c12100c12e931060cc667ae058491c8add1cd1da6391132d5ffd9f7cbcb95671"} Apr 16 17:42:18.162144 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:18.162067 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30da5db6-df8d-4158-9c24-37383fad0034" 
path="/var/lib/kubelet/pods/30da5db6-df8d-4158-9c24-37383fad0034/volumes" Apr 16 17:42:18.593700 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:18.593663 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5c6e5713-9364-457b-a7c0-83a04ea458a8","Type":"ContainerStarted","Data":"e8ecfc774972be55bd2862675276cadcded95376f3f2b400afc7d891b5b8cf9b"} Apr 16 17:42:18.593700 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:18.593701 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5c6e5713-9364-457b-a7c0-83a04ea458a8","Type":"ContainerStarted","Data":"b2d48f01108403781d9f1d09f6636b85907eda28b664b06007786929835d79cb"} Apr 16 17:42:18.594163 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:18.593710 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5c6e5713-9364-457b-a7c0-83a04ea458a8","Type":"ContainerStarted","Data":"bcb91b454a6710e443d1706b1ba7d65793e70b8b5b554918d2093838a78ff52c"} Apr 16 17:42:18.594163 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:18.593719 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5c6e5713-9364-457b-a7c0-83a04ea458a8","Type":"ContainerStarted","Data":"c2006ca24ec2d4231f44495012925b846feba75e488e050d167bdbfa51da05c2"} Apr 16 17:42:18.594163 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:18.593728 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5c6e5713-9364-457b-a7c0-83a04ea458a8","Type":"ContainerStarted","Data":"23a50259e4606b27a41096cb60a0e2aae986105b962d6c71584b6457c8e6900f"} Apr 16 17:42:18.594163 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:18.593735 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5c6e5713-9364-457b-a7c0-83a04ea458a8","Type":"ContainerStarted","Data":"64a89261df90e91a3d36a773cd60b6f74beba6b15e4e613073ed98d658ada11e"} Apr 16 17:42:18.620460 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:18.620409 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=2.620395036 podStartE2EDuration="2.620395036s" podCreationTimestamp="2026-04-16 17:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:42:18.619352567 +0000 UTC m=+106.982561453" watchObservedRunningTime="2026-04-16 17:42:18.620395036 +0000 UTC m=+106.983603909" Apr 16 17:42:19.084543 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.084502 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-5fcf864b6-pt2gx"] Apr 16 17:42:19.089035 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.089015 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-5fcf864b6-pt2gx" Apr 16 17:42:19.091312 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.091270 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"telemeter-client-dockercfg-v7c2t\"" Apr 16 17:42:19.091312 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.091291 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"telemeter-client\"" Apr 16 17:42:19.091312 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.091304 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"telemeter-client-serving-certs-ca-bundle\"" Apr 16 17:42:19.091541 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.091326 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"telemeter-client-tls\"" Apr 16 17:42:19.091541 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.091513 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"telemeter-client-kube-rbac-proxy-config\"" Apr 16 17:42:19.091613 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.091519 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"federate-client-certs\"" Apr 16 17:42:19.097539 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.097402 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"telemeter-trusted-ca-bundle-8i12ta5c71j38\"" Apr 16 17:42:19.099494 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.099472 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-5fcf864b6-pt2gx"] Apr 16 17:42:19.123381 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.123352 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/218610e5-804a-40fa-8abb-8b62570db501-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-5fcf864b6-pt2gx\" (UID: \"218610e5-804a-40fa-8abb-8b62570db501\") " pod="openshift-monitoring/telemeter-client-5fcf864b6-pt2gx" Apr 16 17:42:19.123489 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.123389 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/218610e5-804a-40fa-8abb-8b62570db501-telemeter-client-tls\") pod \"telemeter-client-5fcf864b6-pt2gx\" (UID: \"218610e5-804a-40fa-8abb-8b62570db501\") " pod="openshift-monitoring/telemeter-client-5fcf864b6-pt2gx" Apr 16 17:42:19.123489 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.123418 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/218610e5-804a-40fa-8abb-8b62570db501-federate-client-tls\") pod \"telemeter-client-5fcf864b6-pt2gx\" (UID: \"218610e5-804a-40fa-8abb-8b62570db501\") " pod="openshift-monitoring/telemeter-client-5fcf864b6-pt2gx" Apr 16 17:42:19.123557 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.123499 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/218610e5-804a-40fa-8abb-8b62570db501-serving-certs-ca-bundle\") pod \"telemeter-client-5fcf864b6-pt2gx\" (UID: \"218610e5-804a-40fa-8abb-8b62570db501\") " pod="openshift-monitoring/telemeter-client-5fcf864b6-pt2gx" Apr 16 17:42:19.123557 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.123530 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgsfp\" (UniqueName: \"kubernetes.io/projected/218610e5-804a-40fa-8abb-8b62570db501-kube-api-access-dgsfp\") pod \"telemeter-client-5fcf864b6-pt2gx\" (UID: \"218610e5-804a-40fa-8abb-8b62570db501\") " pod="openshift-monitoring/telemeter-client-5fcf864b6-pt2gx" Apr 16 17:42:19.123622 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.123557 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/218610e5-804a-40fa-8abb-8b62570db501-secret-telemeter-client\") pod \"telemeter-client-5fcf864b6-pt2gx\" (UID: \"218610e5-804a-40fa-8abb-8b62570db501\") " pod="openshift-monitoring/telemeter-client-5fcf864b6-pt2gx" Apr 16 17:42:19.123622 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.123615 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/218610e5-804a-40fa-8abb-8b62570db501-metrics-client-ca\") pod \"telemeter-client-5fcf864b6-pt2gx\" (UID: \"218610e5-804a-40fa-8abb-8b62570db501\") " pod="openshift-monitoring/telemeter-client-5fcf864b6-pt2gx" Apr 16 17:42:19.123682 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.123639 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/218610e5-804a-40fa-8abb-8b62570db501-telemeter-trusted-ca-bundle\") pod \"telemeter-client-5fcf864b6-pt2gx\" (UID: \"218610e5-804a-40fa-8abb-8b62570db501\") " pod="openshift-monitoring/telemeter-client-5fcf864b6-pt2gx" Apr 16 17:42:19.223972 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.223938 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/218610e5-804a-40fa-8abb-8b62570db501-metrics-client-ca\") pod \"telemeter-client-5fcf864b6-pt2gx\" (UID: \"218610e5-804a-40fa-8abb-8b62570db501\") " pod="openshift-monitoring/telemeter-client-5fcf864b6-pt2gx" Apr 16 17:42:19.224101 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.223982 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/218610e5-804a-40fa-8abb-8b62570db501-telemeter-trusted-ca-bundle\") pod \"telemeter-client-5fcf864b6-pt2gx\" (UID: \"218610e5-804a-40fa-8abb-8b62570db501\") " pod="openshift-monitoring/telemeter-client-5fcf864b6-pt2gx" Apr 16 17:42:19.224101 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.224012 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/218610e5-804a-40fa-8abb-8b62570db501-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-5fcf864b6-pt2gx\" (UID: \"218610e5-804a-40fa-8abb-8b62570db501\") " pod="openshift-monitoring/telemeter-client-5fcf864b6-pt2gx" Apr 16 17:42:19.224101 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.224046 2560 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/218610e5-804a-40fa-8abb-8b62570db501-telemeter-client-tls\") pod \"telemeter-client-5fcf864b6-pt2gx\" (UID: \"218610e5-804a-40fa-8abb-8b62570db501\") " pod="openshift-monitoring/telemeter-client-5fcf864b6-pt2gx" Apr 16 17:42:19.224101 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.224089 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/218610e5-804a-40fa-8abb-8b62570db501-federate-client-tls\") pod \"telemeter-client-5fcf864b6-pt2gx\" (UID: \"218610e5-804a-40fa-8abb-8b62570db501\") " pod="openshift-monitoring/telemeter-client-5fcf864b6-pt2gx" Apr 16 17:42:19.224320 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.224129 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/218610e5-804a-40fa-8abb-8b62570db501-serving-certs-ca-bundle\") pod \"telemeter-client-5fcf864b6-pt2gx\" (UID: \"218610e5-804a-40fa-8abb-8b62570db501\") " pod="openshift-monitoring/telemeter-client-5fcf864b6-pt2gx" Apr 16 17:42:19.224320 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.224165 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dgsfp\" (UniqueName: \"kubernetes.io/projected/218610e5-804a-40fa-8abb-8b62570db501-kube-api-access-dgsfp\") pod \"telemeter-client-5fcf864b6-pt2gx\" (UID: \"218610e5-804a-40fa-8abb-8b62570db501\") " pod="openshift-monitoring/telemeter-client-5fcf864b6-pt2gx" Apr 16 17:42:19.224320 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.224201 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/218610e5-804a-40fa-8abb-8b62570db501-secret-telemeter-client\") pod \"telemeter-client-5fcf864b6-pt2gx\" (UID: \"218610e5-804a-40fa-8abb-8b62570db501\") " pod="openshift-monitoring/telemeter-client-5fcf864b6-pt2gx" Apr 16 17:42:19.224796 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.224771 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/218610e5-804a-40fa-8abb-8b62570db501-metrics-client-ca\") pod \"telemeter-client-5fcf864b6-pt2gx\" (UID: \"218610e5-804a-40fa-8abb-8b62570db501\") " pod="openshift-monitoring/telemeter-client-5fcf864b6-pt2gx" Apr 16 17:42:19.224917 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.224892 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/218610e5-804a-40fa-8abb-8b62570db501-serving-certs-ca-bundle\") pod \"telemeter-client-5fcf864b6-pt2gx\" (UID: \"218610e5-804a-40fa-8abb-8b62570db501\") " pod="openshift-monitoring/telemeter-client-5fcf864b6-pt2gx" Apr 16 17:42:19.224973 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.224919 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/218610e5-804a-40fa-8abb-8b62570db501-telemeter-trusted-ca-bundle\") pod \"telemeter-client-5fcf864b6-pt2gx\" (UID: \"218610e5-804a-40fa-8abb-8b62570db501\") " pod="openshift-monitoring/telemeter-client-5fcf864b6-pt2gx" Apr 16 17:42:19.226630 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.226602 2560 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/218610e5-804a-40fa-8abb-8b62570db501-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-5fcf864b6-pt2gx\" (UID: \"218610e5-804a-40fa-8abb-8b62570db501\") " pod="openshift-monitoring/telemeter-client-5fcf864b6-pt2gx" Apr 16 17:42:19.226949 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.226927 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/218610e5-804a-40fa-8abb-8b62570db501-secret-telemeter-client\") pod \"telemeter-client-5fcf864b6-pt2gx\" (UID: \"218610e5-804a-40fa-8abb-8b62570db501\") " pod="openshift-monitoring/telemeter-client-5fcf864b6-pt2gx" Apr 16 17:42:19.226949 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.226942 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/218610e5-804a-40fa-8abb-8b62570db501-federate-client-tls\") pod \"telemeter-client-5fcf864b6-pt2gx\" (UID: \"218610e5-804a-40fa-8abb-8b62570db501\") " pod="openshift-monitoring/telemeter-client-5fcf864b6-pt2gx" Apr 16 17:42:19.227059 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.227043 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/218610e5-804a-40fa-8abb-8b62570db501-telemeter-client-tls\") pod \"telemeter-client-5fcf864b6-pt2gx\" (UID: \"218610e5-804a-40fa-8abb-8b62570db501\") " pod="openshift-monitoring/telemeter-client-5fcf864b6-pt2gx" Apr 16 17:42:19.234822 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.234804 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgsfp\" (UniqueName: \"kubernetes.io/projected/218610e5-804a-40fa-8abb-8b62570db501-kube-api-access-dgsfp\") pod \"telemeter-client-5fcf864b6-pt2gx\" (UID: \"218610e5-804a-40fa-8abb-8b62570db501\") " pod="openshift-monitoring/telemeter-client-5fcf864b6-pt2gx" Apr 16 17:42:19.400998 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.400901 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-5fcf864b6-pt2gx" Apr 16 17:42:19.528324 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.528288 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-5fcf864b6-pt2gx"] Apr 16 17:42:19.533363 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:42:19.533328 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod218610e5_804a_40fa_8abb_8b62570db501.slice/crio-c8fec1154e79bbb0593be41265a50afc130ef40cfdd07fe637d377f5fbe2ce0d WatchSource:0}: Error finding container c8fec1154e79bbb0593be41265a50afc130ef40cfdd07fe637d377f5fbe2ce0d: Status 404 returned error can't find the container with id c8fec1154e79bbb0593be41265a50afc130ef40cfdd07fe637d377f5fbe2ce0d Apr 16 17:42:19.598386 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:19.598355 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-5fcf864b6-pt2gx" event={"ID":"218610e5-804a-40fa-8abb-8b62570db501","Type":"ContainerStarted","Data":"c8fec1154e79bbb0593be41265a50afc130ef40cfdd07fe637d377f5fbe2ce0d"} Apr 16 17:42:22.616012 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:22.615964 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-5fcf864b6-pt2gx" event={"ID":"218610e5-804a-40fa-8abb-8b62570db501","Type":"ContainerStarted","Data":"78cb79d22a29bfc664c03b508b60a31f0dc009631d6fe99ba871b54c5bbc58c7"} Apr 16 17:42:22.616012 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:22.616011 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-5fcf864b6-pt2gx" event={"ID":"218610e5-804a-40fa-8abb-8b62570db501","Type":"ContainerStarted","Data":"e5b70563ccdd92b9408ad88eadc371c185aa281183aae6a47438f1ae809be002"} Apr 16 17:42:22.616506 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:22.616025 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-5fcf864b6-pt2gx" event={"ID":"218610e5-804a-40fa-8abb-8b62570db501","Type":"ContainerStarted","Data":"2fcfedb0dd5672bd2c8fd8481b9e8004b2f2cd317a1c3a174652ff48a3120a7b"} Apr 16 17:42:22.638113 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:22.638059 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-5fcf864b6-pt2gx" podStartSLOduration=1.5927432430000001 podStartE2EDuration="3.638045412s" podCreationTimestamp="2026-04-16 17:42:19 +0000 UTC" firstStartedPulling="2026-04-16 17:42:19.535316835 +0000 UTC m=+107.898525701" lastFinishedPulling="2026-04-16 17:42:21.580619011 +0000 UTC m=+109.943827870" observedRunningTime="2026-04-16 17:42:22.637390088 +0000 UTC m=+111.000598999" watchObservedRunningTime="2026-04-16 17:42:22.638045412 +0000 UTC m=+111.001254327" Apr 16 17:42:23.369457 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:23.369421 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-7fff4cf859-4cct7"] Apr 16 17:42:23.373091 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:23.373068 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7fff4cf859-4cct7" Apr 16 17:42:23.385178 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:23.385156 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7fff4cf859-4cct7"] Apr 16 17:42:23.459106 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:23.459063 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-oauth-serving-cert\") pod \"console-7fff4cf859-4cct7\" (UID: \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\") " pod="openshift-console/console-7fff4cf859-4cct7" Apr 16 17:42:23.459106 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:23.459110 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-trusted-ca-bundle\") pod \"console-7fff4cf859-4cct7\" (UID: \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\") " pod="openshift-console/console-7fff4cf859-4cct7" Apr 16 17:42:23.459337 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:23.459180 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-console-config\") pod \"console-7fff4cf859-4cct7\" (UID: \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\") " pod="openshift-console/console-7fff4cf859-4cct7" Apr 16 17:42:23.459337 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:23.459233 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-service-ca\") pod \"console-7fff4cf859-4cct7\" (UID: \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\") " pod="openshift-console/console-7fff4cf859-4cct7" Apr 16 17:42:23.459337 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:23.459260 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-console-oauth-config\") pod \"console-7fff4cf859-4cct7\" (UID: \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\") " pod="openshift-console/console-7fff4cf859-4cct7" Apr 16 17:42:23.459337 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:23.459320 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz8kj\" (UniqueName: \"kubernetes.io/projected/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-kube-api-access-tz8kj\") pod \"console-7fff4cf859-4cct7\" (UID: \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\") " pod="openshift-console/console-7fff4cf859-4cct7" Apr 16 17:42:23.459486 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:23.459366 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-console-serving-cert\") pod \"console-7fff4cf859-4cct7\" (UID: \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\") " pod="openshift-console/console-7fff4cf859-4cct7" Apr 16 17:42:23.560660 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:23.560615 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-oauth-serving-cert\") pod \"console-7fff4cf859-4cct7\" (UID: \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\") " pod="openshift-console/console-7fff4cf859-4cct7" Apr 16 17:42:23.560660 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:23.560667 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-trusted-ca-bundle\") pod \"console-7fff4cf859-4cct7\" (UID: \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\") " pod="openshift-console/console-7fff4cf859-4cct7" Apr 16 17:42:23.560950 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:23.560712 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-console-config\") pod \"console-7fff4cf859-4cct7\" (UID: \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\") " pod="openshift-console/console-7fff4cf859-4cct7" Apr 16 17:42:23.560950 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:23.560736 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-service-ca\") pod \"console-7fff4cf859-4cct7\" (UID: \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\") " pod="openshift-console/console-7fff4cf859-4cct7" Apr 16 17:42:23.560950 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:23.560762 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-console-oauth-config\") pod \"console-7fff4cf859-4cct7\" (UID: \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\") " pod="openshift-console/console-7fff4cf859-4cct7" Apr 16 17:42:23.560950 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:23.560793 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tz8kj\" (UniqueName: \"kubernetes.io/projected/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-kube-api-access-tz8kj\") pod \"console-7fff4cf859-4cct7\" (UID: \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\") " pod="openshift-console/console-7fff4cf859-4cct7" Apr 16 17:42:23.560950 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:23.560818 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-console-serving-cert\") pod \"console-7fff4cf859-4cct7\" (UID: \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\") " pod="openshift-console/console-7fff4cf859-4cct7" Apr 16 17:42:23.561517 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:23.561483 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-oauth-serving-cert\") pod \"console-7fff4cf859-4cct7\" (UID: \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\") " pod="openshift-console/console-7fff4cf859-4cct7" Apr 16 17:42:23.561617 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:23.561483 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-console-config\") pod \"console-7fff4cf859-4cct7\" (UID: \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\") " pod="openshift-console/console-7fff4cf859-4cct7" Apr 16 17:42:23.561617 ip-10-0-134-233 
kubenswrapper[2560]: I0416 17:42:23.561529 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-service-ca\") pod \"console-7fff4cf859-4cct7\" (UID: \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\") " pod="openshift-console/console-7fff4cf859-4cct7" Apr 16 17:42:23.561787 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:23.561764 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-trusted-ca-bundle\") pod \"console-7fff4cf859-4cct7\" (UID: \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\") " pod="openshift-console/console-7fff4cf859-4cct7" Apr 16 17:42:23.563194 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:23.563166 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-console-oauth-config\") pod \"console-7fff4cf859-4cct7\" (UID: \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\") " pod="openshift-console/console-7fff4cf859-4cct7" Apr 16 17:42:23.563298 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:23.563281 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-console-serving-cert\") pod \"console-7fff4cf859-4cct7\" (UID: \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\") " pod="openshift-console/console-7fff4cf859-4cct7" Apr 16 17:42:23.570172 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:23.570154 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz8kj\" (UniqueName: \"kubernetes.io/projected/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-kube-api-access-tz8kj\") pod \"console-7fff4cf859-4cct7\" (UID: \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\") " pod="openshift-console/console-7fff4cf859-4cct7" Apr 16 17:42:23.681518 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:23.681425 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7fff4cf859-4cct7" Apr 16 17:42:23.804587 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:23.804496 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7fff4cf859-4cct7"] Apr 16 17:42:23.807110 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:42:23.807074 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb1aeecb_16cd_40b4_a5ad_55b2bd24edf8.slice/crio-496dc76d369b12eedf2249d4a1c836c016308c9e688450b318326208dd3c2f63 WatchSource:0}: Error finding container 496dc76d369b12eedf2249d4a1c836c016308c9e688450b318326208dd3c2f63: Status 404 returned error can't find the container with id 496dc76d369b12eedf2249d4a1c836c016308c9e688450b318326208dd3c2f63 Apr 16 17:42:24.623637 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:24.623606 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7fff4cf859-4cct7" event={"ID":"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8","Type":"ContainerStarted","Data":"47c48b78b6509a08f67a265b29fe933b12e16304775664810d85a94a4c0cd099"} Apr 16 17:42:24.623637 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:24.623642 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7fff4cf859-4cct7" event={"ID":"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8","Type":"ContainerStarted","Data":"496dc76d369b12eedf2249d4a1c836c016308c9e688450b318326208dd3c2f63"} Apr 16 17:42:24.642430 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:24.642262 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7fff4cf859-4cct7" podStartSLOduration=1.6422432630000001 podStartE2EDuration="1.642243263s" podCreationTimestamp="2026-04-16 17:42:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:42:24.641570995 +0000 UTC m=+113.004779870" watchObservedRunningTime="2026-04-16 17:42:24.642243263 +0000 UTC m=+113.005452138" Apr 16 17:42:33.682297 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:33.682261 2560 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7fff4cf859-4cct7" Apr 16 17:42:33.682297 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:33.682302 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-7fff4cf859-4cct7" Apr 16 17:42:33.689133 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:33.689099 2560 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7fff4cf859-4cct7" Apr 16 17:42:34.659342 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:34.659310 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7fff4cf859-4cct7" Apr 16 17:42:34.719111 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:34.718211 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-668c9bd5c9-6gcrb"] Apr 16 17:42:59.745095 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:59.745022 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-console/console-668c9bd5c9-6gcrb" podUID="310fc5b9-2115-425a-8ff6-114dd19aeeef" containerName="console" containerID="cri-o://48646cd4fff0e3032529353225c63274cf8b9bd2a068153c7a38db6b784eaae4" gracePeriod=15 Apr 16 17:42:59.981396 ip-10-0-134-233 kubenswrapper[2560]: I0416 
17:42:59.981373 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-668c9bd5c9-6gcrb_310fc5b9-2115-425a-8ff6-114dd19aeeef/console/0.log" Apr 16 17:42:59.981500 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:42:59.981432 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-668c9bd5c9-6gcrb" Apr 16 17:43:00.058370 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:43:00.058339 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/310fc5b9-2115-425a-8ff6-114dd19aeeef-oauth-serving-cert\") pod \"310fc5b9-2115-425a-8ff6-114dd19aeeef\" (UID: \"310fc5b9-2115-425a-8ff6-114dd19aeeef\") " Apr 16 17:43:00.058514 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:43:00.058377 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/310fc5b9-2115-425a-8ff6-114dd19aeeef-console-serving-cert\") pod \"310fc5b9-2115-425a-8ff6-114dd19aeeef\" (UID: \"310fc5b9-2115-425a-8ff6-114dd19aeeef\") " Apr 16 17:43:00.058514 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:43:00.058435 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/310fc5b9-2115-425a-8ff6-114dd19aeeef-trusted-ca-bundle\") pod \"310fc5b9-2115-425a-8ff6-114dd19aeeef\" (UID: \"310fc5b9-2115-425a-8ff6-114dd19aeeef\") " Apr 16 17:43:00.058514 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:43:00.058467 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/310fc5b9-2115-425a-8ff6-114dd19aeeef-console-oauth-config\") pod \"310fc5b9-2115-425a-8ff6-114dd19aeeef\" (UID: \"310fc5b9-2115-425a-8ff6-114dd19aeeef\") " Apr 16 17:43:00.058642 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:43:00.058606 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/310fc5b9-2115-425a-8ff6-114dd19aeeef-console-config\") pod \"310fc5b9-2115-425a-8ff6-114dd19aeeef\" (UID: \"310fc5b9-2115-425a-8ff6-114dd19aeeef\") " Apr 16 17:43:00.058708 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:43:00.058667 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/310fc5b9-2115-425a-8ff6-114dd19aeeef-service-ca\") pod \"310fc5b9-2115-425a-8ff6-114dd19aeeef\" (UID: \"310fc5b9-2115-425a-8ff6-114dd19aeeef\") " Apr 16 17:43:00.058752 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:43:00.058715 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxd8p\" (UniqueName: \"kubernetes.io/projected/310fc5b9-2115-425a-8ff6-114dd19aeeef-kube-api-access-sxd8p\") pod \"310fc5b9-2115-425a-8ff6-114dd19aeeef\" (UID: \"310fc5b9-2115-425a-8ff6-114dd19aeeef\") " Apr 16 17:43:00.058829 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:43:00.058794 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/310fc5b9-2115-425a-8ff6-114dd19aeeef-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "310fc5b9-2115-425a-8ff6-114dd19aeeef" (UID: "310fc5b9-2115-425a-8ff6-114dd19aeeef"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 17:43:00.059021 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:43:00.059002 2560 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/310fc5b9-2115-425a-8ff6-114dd19aeeef-oauth-serving-cert\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:43:00.059104 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:43:00.059024 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/310fc5b9-2115-425a-8ff6-114dd19aeeef-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "310fc5b9-2115-425a-8ff6-114dd19aeeef" (UID: "310fc5b9-2115-425a-8ff6-114dd19aeeef"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 17:43:00.059104 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:43:00.059077 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/310fc5b9-2115-425a-8ff6-114dd19aeeef-service-ca" (OuterVolumeSpecName: "service-ca") pod "310fc5b9-2115-425a-8ff6-114dd19aeeef" (UID: "310fc5b9-2115-425a-8ff6-114dd19aeeef"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 17:43:00.059179 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:43:00.059095 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/310fc5b9-2115-425a-8ff6-114dd19aeeef-console-config" (OuterVolumeSpecName: "console-config") pod "310fc5b9-2115-425a-8ff6-114dd19aeeef" (UID: "310fc5b9-2115-425a-8ff6-114dd19aeeef"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 17:43:00.060684 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:43:00.060662 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/310fc5b9-2115-425a-8ff6-114dd19aeeef-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "310fc5b9-2115-425a-8ff6-114dd19aeeef" (UID: "310fc5b9-2115-425a-8ff6-114dd19aeeef"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 17:43:00.060890 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:43:00.060868 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/310fc5b9-2115-425a-8ff6-114dd19aeeef-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "310fc5b9-2115-425a-8ff6-114dd19aeeef" (UID: "310fc5b9-2115-425a-8ff6-114dd19aeeef"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 17:43:00.060890 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:43:00.060877 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/310fc5b9-2115-425a-8ff6-114dd19aeeef-kube-api-access-sxd8p" (OuterVolumeSpecName: "kube-api-access-sxd8p") pod "310fc5b9-2115-425a-8ff6-114dd19aeeef" (UID: "310fc5b9-2115-425a-8ff6-114dd19aeeef"). InnerVolumeSpecName "kube-api-access-sxd8p". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 17:43:00.159378 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:43:00.159353 2560 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sxd8p\" (UniqueName: \"kubernetes.io/projected/310fc5b9-2115-425a-8ff6-114dd19aeeef-kube-api-access-sxd8p\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:43:00.159378 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:43:00.159376 2560 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/310fc5b9-2115-425a-8ff6-114dd19aeeef-console-serving-cert\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:43:00.159530 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:43:00.159386 2560 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/310fc5b9-2115-425a-8ff6-114dd19aeeef-trusted-ca-bundle\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:43:00.159530 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:43:00.159395 2560 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/310fc5b9-2115-425a-8ff6-114dd19aeeef-console-oauth-config\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:43:00.159530 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:43:00.159404 2560 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/310fc5b9-2115-425a-8ff6-114dd19aeeef-console-config\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:43:00.159530 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:43:00.159414 2560 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/310fc5b9-2115-425a-8ff6-114dd19aeeef-service-ca\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:43:00.728562 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:43:00.728536 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-668c9bd5c9-6gcrb_310fc5b9-2115-425a-8ff6-114dd19aeeef/console/0.log" Apr 16 17:43:00.728760 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:43:00.728574 2560 generic.go:358] "Generic (PLEG): container finished" podID="310fc5b9-2115-425a-8ff6-114dd19aeeef" containerID="48646cd4fff0e3032529353225c63274cf8b9bd2a068153c7a38db6b784eaae4" exitCode=2 Apr 16 17:43:00.728760 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:43:00.728615 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-668c9bd5c9-6gcrb" event={"ID":"310fc5b9-2115-425a-8ff6-114dd19aeeef","Type":"ContainerDied","Data":"48646cd4fff0e3032529353225c63274cf8b9bd2a068153c7a38db6b784eaae4"} Apr 16 17:43:00.728760 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:43:00.728637 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-668c9bd5c9-6gcrb" event={"ID":"310fc5b9-2115-425a-8ff6-114dd19aeeef","Type":"ContainerDied","Data":"41fede8902c4d1d64f6b6ae29feef8f01b2f819f4ac2ee7e598e4c8f1180c7bc"} Apr 16 17:43:00.728760 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:43:00.728652 2560 scope.go:117] "RemoveContainer" containerID="48646cd4fff0e3032529353225c63274cf8b9bd2a068153c7a38db6b784eaae4" Apr 16 17:43:00.728760 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:43:00.728689 2560 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-668c9bd5c9-6gcrb" Apr 16 17:43:00.736252 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:43:00.736233 2560 scope.go:117] "RemoveContainer" containerID="48646cd4fff0e3032529353225c63274cf8b9bd2a068153c7a38db6b784eaae4" Apr 16 17:43:00.736515 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:43:00.736497 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48646cd4fff0e3032529353225c63274cf8b9bd2a068153c7a38db6b784eaae4\": container with ID starting with 48646cd4fff0e3032529353225c63274cf8b9bd2a068153c7a38db6b784eaae4 not found: ID does not exist" containerID="48646cd4fff0e3032529353225c63274cf8b9bd2a068153c7a38db6b784eaae4" Apr 16 17:43:00.736558 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:43:00.736523 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48646cd4fff0e3032529353225c63274cf8b9bd2a068153c7a38db6b784eaae4"} err="failed to get container status \"48646cd4fff0e3032529353225c63274cf8b9bd2a068153c7a38db6b784eaae4\": rpc error: code = NotFound desc = could not find container \"48646cd4fff0e3032529353225c63274cf8b9bd2a068153c7a38db6b784eaae4\": container with ID starting with 48646cd4fff0e3032529353225c63274cf8b9bd2a068153c7a38db6b784eaae4 not found: ID does not exist" Apr 16 17:43:00.756449 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:43:00.756420 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-668c9bd5c9-6gcrb"] Apr 16 17:43:00.765595 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:43:00.765574 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-668c9bd5c9-6gcrb"] Apr 16 17:43:02.161298 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:43:02.161263 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="310fc5b9-2115-425a-8ff6-114dd19aeeef" path="/var/lib/kubelet/pods/310fc5b9-2115-425a-8ff6-114dd19aeeef/volumes" Apr 16 17:44:08.532457 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:08.532427 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-766bd475d4-7tt9z"] Apr 16 17:44:08.532969 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:08.532726 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="310fc5b9-2115-425a-8ff6-114dd19aeeef" containerName="console" Apr 16 17:44:08.532969 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:08.532739 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="310fc5b9-2115-425a-8ff6-114dd19aeeef" containerName="console" Apr 16 17:44:08.532969 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:08.532790 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="310fc5b9-2115-425a-8ff6-114dd19aeeef" containerName="console" Apr 16 17:44:08.535705 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:08.535688 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-766bd475d4-7tt9z" Apr 16 17:44:08.548330 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:08.548306 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-766bd475d4-7tt9z"] Apr 16 17:44:08.687411 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:08.687360 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/35d88534-6ec2-420e-91bc-e41c8e9a2909-console-oauth-config\") pod \"console-766bd475d4-7tt9z\" (UID: \"35d88534-6ec2-420e-91bc-e41c8e9a2909\") " pod="openshift-console/console-766bd475d4-7tt9z" Apr 16 17:44:08.687411 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:08.687413 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35d88534-6ec2-420e-91bc-e41c8e9a2909-service-ca\") pod \"console-766bd475d4-7tt9z\" (UID: \"35d88534-6ec2-420e-91bc-e41c8e9a2909\") " pod="openshift-console/console-766bd475d4-7tt9z" Apr 16 17:44:08.687688 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:08.687491 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35d88534-6ec2-420e-91bc-e41c8e9a2909-trusted-ca-bundle\") pod \"console-766bd475d4-7tt9z\" (UID: \"35d88534-6ec2-420e-91bc-e41c8e9a2909\") " pod="openshift-console/console-766bd475d4-7tt9z" Apr 16 17:44:08.687688 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:08.687557 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/35d88534-6ec2-420e-91bc-e41c8e9a2909-console-config\") pod \"console-766bd475d4-7tt9z\" (UID: \"35d88534-6ec2-420e-91bc-e41c8e9a2909\") " pod="openshift-console/console-766bd475d4-7tt9z" Apr 16 17:44:08.687688 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:08.687606 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/35d88534-6ec2-420e-91bc-e41c8e9a2909-console-serving-cert\") pod \"console-766bd475d4-7tt9z\" (UID: \"35d88534-6ec2-420e-91bc-e41c8e9a2909\") " pod="openshift-console/console-766bd475d4-7tt9z" Apr 16 17:44:08.687688 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:08.687642 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/35d88534-6ec2-420e-91bc-e41c8e9a2909-oauth-serving-cert\") pod \"console-766bd475d4-7tt9z\" (UID: \"35d88534-6ec2-420e-91bc-e41c8e9a2909\") " pod="openshift-console/console-766bd475d4-7tt9z" Apr 16 17:44:08.687688 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:08.687676 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk29z\" (UniqueName: \"kubernetes.io/projected/35d88534-6ec2-420e-91bc-e41c8e9a2909-kube-api-access-gk29z\") pod \"console-766bd475d4-7tt9z\" (UID: \"35d88534-6ec2-420e-91bc-e41c8e9a2909\") " pod="openshift-console/console-766bd475d4-7tt9z" Apr 16 17:44:08.788117 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:08.788043 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/35d88534-6ec2-420e-91bc-e41c8e9a2909-trusted-ca-bundle\") pod \"console-766bd475d4-7tt9z\" (UID: \"35d88534-6ec2-420e-91bc-e41c8e9a2909\") " pod="openshift-console/console-766bd475d4-7tt9z" Apr 16 17:44:08.788117 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:08.788078 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/35d88534-6ec2-420e-91bc-e41c8e9a2909-console-config\") pod \"console-766bd475d4-7tt9z\" (UID: \"35d88534-6ec2-420e-91bc-e41c8e9a2909\") " pod="openshift-console/console-766bd475d4-7tt9z" Apr 16 17:44:08.788117 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:08.788102 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/35d88534-6ec2-420e-91bc-e41c8e9a2909-console-serving-cert\") pod \"console-766bd475d4-7tt9z\" (UID: \"35d88534-6ec2-420e-91bc-e41c8e9a2909\") " pod="openshift-console/console-766bd475d4-7tt9z" Apr 16 17:44:08.788359 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:08.788123 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/35d88534-6ec2-420e-91bc-e41c8e9a2909-oauth-serving-cert\") pod \"console-766bd475d4-7tt9z\" (UID: \"35d88534-6ec2-420e-91bc-e41c8e9a2909\") " pod="openshift-console/console-766bd475d4-7tt9z" Apr 16 17:44:08.788359 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:08.788139 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gk29z\" (UniqueName: \"kubernetes.io/projected/35d88534-6ec2-420e-91bc-e41c8e9a2909-kube-api-access-gk29z\") pod \"console-766bd475d4-7tt9z\" (UID: \"35d88534-6ec2-420e-91bc-e41c8e9a2909\") " pod="openshift-console/console-766bd475d4-7tt9z" Apr 16 17:44:08.788359 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:08.788163 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/35d88534-6ec2-420e-91bc-e41c8e9a2909-console-oauth-config\") pod \"console-766bd475d4-7tt9z\" (UID: \"35d88534-6ec2-420e-91bc-e41c8e9a2909\") " pod="openshift-console/console-766bd475d4-7tt9z" Apr 16 17:44:08.788359 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:08.788180 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35d88534-6ec2-420e-91bc-e41c8e9a2909-service-ca\") pod \"console-766bd475d4-7tt9z\" (UID: \"35d88534-6ec2-420e-91bc-e41c8e9a2909\") " pod="openshift-console/console-766bd475d4-7tt9z" Apr 16 17:44:08.788968 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:08.788940 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/35d88534-6ec2-420e-91bc-e41c8e9a2909-console-config\") pod \"console-766bd475d4-7tt9z\" (UID: \"35d88534-6ec2-420e-91bc-e41c8e9a2909\") " pod="openshift-console/console-766bd475d4-7tt9z" Apr 16 17:44:08.789073 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:08.788944 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/35d88534-6ec2-420e-91bc-e41c8e9a2909-oauth-serving-cert\") pod \"console-766bd475d4-7tt9z\" (UID: \"35d88534-6ec2-420e-91bc-e41c8e9a2909\") " pod="openshift-console/console-766bd475d4-7tt9z" Apr 16 17:44:08.789073 ip-10-0-134-233 
kubenswrapper[2560]: I0416 17:44:08.788952 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35d88534-6ec2-420e-91bc-e41c8e9a2909-service-ca\") pod \"console-766bd475d4-7tt9z\" (UID: \"35d88534-6ec2-420e-91bc-e41c8e9a2909\") " pod="openshift-console/console-766bd475d4-7tt9z" Apr 16 17:44:08.789073 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:08.789064 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35d88534-6ec2-420e-91bc-e41c8e9a2909-trusted-ca-bundle\") pod \"console-766bd475d4-7tt9z\" (UID: \"35d88534-6ec2-420e-91bc-e41c8e9a2909\") " pod="openshift-console/console-766bd475d4-7tt9z" Apr 16 17:44:08.790622 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:08.790594 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/35d88534-6ec2-420e-91bc-e41c8e9a2909-console-oauth-config\") pod \"console-766bd475d4-7tt9z\" (UID: \"35d88534-6ec2-420e-91bc-e41c8e9a2909\") " pod="openshift-console/console-766bd475d4-7tt9z" Apr 16 17:44:08.790775 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:08.790757 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/35d88534-6ec2-420e-91bc-e41c8e9a2909-console-serving-cert\") pod \"console-766bd475d4-7tt9z\" (UID: \"35d88534-6ec2-420e-91bc-e41c8e9a2909\") " pod="openshift-console/console-766bd475d4-7tt9z" Apr 16 17:44:08.798582 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:08.798558 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk29z\" (UniqueName: \"kubernetes.io/projected/35d88534-6ec2-420e-91bc-e41c8e9a2909-kube-api-access-gk29z\") pod \"console-766bd475d4-7tt9z\" (UID: \"35d88534-6ec2-420e-91bc-e41c8e9a2909\") " pod="openshift-console/console-766bd475d4-7tt9z" Apr 16 17:44:08.844354 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:08.844320 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-766bd475d4-7tt9z" Apr 16 17:44:08.965034 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:08.964881 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-766bd475d4-7tt9z"] Apr 16 17:44:08.967745 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:44:08.967716 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35d88534_6ec2_420e_91bc_e41c8e9a2909.slice/crio-18cb055e9987fb7f109e249ac7c59440368ab30b7276ebe4d5cfa3f1efb64cb1 WatchSource:0}: Error finding container 18cb055e9987fb7f109e249ac7c59440368ab30b7276ebe4d5cfa3f1efb64cb1: Status 404 returned error can't find the container with id 18cb055e9987fb7f109e249ac7c59440368ab30b7276ebe4d5cfa3f1efb64cb1 Apr 16 17:44:09.916657 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:09.916621 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-766bd475d4-7tt9z" event={"ID":"35d88534-6ec2-420e-91bc-e41c8e9a2909","Type":"ContainerStarted","Data":"e212d4f2d3c150048ee309dd11470cede2542c562839c83385550b0e50e1a0bc"} Apr 16 17:44:09.916657 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:09.916658 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-766bd475d4-7tt9z" event={"ID":"35d88534-6ec2-420e-91bc-e41c8e9a2909","Type":"ContainerStarted","Data":"18cb055e9987fb7f109e249ac7c59440368ab30b7276ebe4d5cfa3f1efb64cb1"} Apr 16 17:44:09.942071 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:09.942015 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-766bd475d4-7tt9z" podStartSLOduration=1.941998332 podStartE2EDuration="1.941998332s" podCreationTimestamp="2026-04-16 17:44:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:44:09.940231293 +0000 UTC m=+218.303440177" watchObservedRunningTime="2026-04-16 17:44:09.941998332 +0000 UTC m=+218.305207207" Apr 16 17:44:18.844906 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:18.844866 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-766bd475d4-7tt9z" Apr 16 17:44:18.845305 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:18.844922 2560 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-766bd475d4-7tt9z" Apr 16 17:44:18.849684 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:18.849652 2560 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-766bd475d4-7tt9z" Apr 16 17:44:18.943328 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:18.943303 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-766bd475d4-7tt9z" Apr 16 17:44:19.000807 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:19.000776 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7fff4cf859-4cct7"] Apr 16 17:44:36.020187 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:36.020150 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-585c79d59c-89rjt"] Apr 16 17:44:36.023344 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:36.023321 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-585c79d59c-89rjt" Apr 16 17:44:36.025502 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:36.025482 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"open-cluster-management-image-pull-credentials\"" Apr 16 17:44:36.025615 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:36.025550 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"managed-serviceaccount-dockercfg-zjwxk\"" Apr 16 17:44:36.026081 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:36.026056 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"open-cluster-management-agent-addon\"/\"kube-root-ca.crt\"" Apr 16 17:44:36.026081 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:36.026071 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"open-cluster-management-agent-addon\"/\"openshift-service-ca.crt\"" Apr 16 17:44:36.026222 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:36.026121 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"managed-serviceaccount-hub-kubeconfig\"" Apr 16 17:44:36.032481 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:36.032460 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-585c79d59c-89rjt"] Apr 16 17:44:36.084883 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:36.084857 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-85fd88fbc6-tvx65"] Apr 16 17:44:36.087753 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:36.087737 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-85fd88fbc6-tvx65" Apr 16 17:44:36.089711 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:36.089694 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"work-manager-hub-kubeconfig\"" Apr 16 17:44:36.095605 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:36.095577 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77m7n\" (UniqueName: \"kubernetes.io/projected/17900f8e-b3db-4a5e-875e-16839317baeb-kube-api-access-77m7n\") pod \"managed-serviceaccount-addon-agent-585c79d59c-89rjt\" (UID: \"17900f8e-b3db-4a5e-875e-16839317baeb\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-585c79d59c-89rjt" Apr 16 17:44:36.095692 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:36.095654 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/17900f8e-b3db-4a5e-875e-16839317baeb-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-585c79d59c-89rjt\" (UID: \"17900f8e-b3db-4a5e-875e-16839317baeb\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-585c79d59c-89rjt" Apr 16 17:44:36.100114 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:36.100093 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-85fd88fbc6-tvx65"] Apr 16 17:44:36.196514 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:36.196476 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/17900f8e-b3db-4a5e-875e-16839317baeb-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-585c79d59c-89rjt\" (UID: \"17900f8e-b3db-4a5e-875e-16839317baeb\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-585c79d59c-89rjt" Apr 16 17:44:36.196692 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:36.196527 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86fn2\" (UniqueName: \"kubernetes.io/projected/402948b8-e81b-48ad-afa6-ecc4c52161a5-kube-api-access-86fn2\") pod \"klusterlet-addon-workmgr-85fd88fbc6-tvx65\" (UID: \"402948b8-e81b-48ad-afa6-ecc4c52161a5\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-85fd88fbc6-tvx65" Apr 16 17:44:36.196692 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:36.196601 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/402948b8-e81b-48ad-afa6-ecc4c52161a5-klusterlet-config\") pod \"klusterlet-addon-workmgr-85fd88fbc6-tvx65\" (UID: \"402948b8-e81b-48ad-afa6-ecc4c52161a5\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-85fd88fbc6-tvx65" Apr 16 17:44:36.196692 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:36.196651 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/402948b8-e81b-48ad-afa6-ecc4c52161a5-tmp\") pod \"klusterlet-addon-workmgr-85fd88fbc6-tvx65\" (UID: \"402948b8-e81b-48ad-afa6-ecc4c52161a5\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-85fd88fbc6-tvx65" Apr 16 17:44:36.196692 ip-10-0-134-233 kubenswrapper[2560]: I0416 
17:44:36.196684 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-77m7n\" (UniqueName: \"kubernetes.io/projected/17900f8e-b3db-4a5e-875e-16839317baeb-kube-api-access-77m7n\") pod \"managed-serviceaccount-addon-agent-585c79d59c-89rjt\" (UID: \"17900f8e-b3db-4a5e-875e-16839317baeb\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-585c79d59c-89rjt" Apr 16 17:44:36.199029 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:36.199002 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/17900f8e-b3db-4a5e-875e-16839317baeb-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-585c79d59c-89rjt\" (UID: \"17900f8e-b3db-4a5e-875e-16839317baeb\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-585c79d59c-89rjt" Apr 16 17:44:36.206846 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:36.206811 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-77m7n\" (UniqueName: \"kubernetes.io/projected/17900f8e-b3db-4a5e-875e-16839317baeb-kube-api-access-77m7n\") pod \"managed-serviceaccount-addon-agent-585c79d59c-89rjt\" (UID: \"17900f8e-b3db-4a5e-875e-16839317baeb\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-585c79d59c-89rjt" Apr 16 17:44:36.297320 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:36.297286 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/402948b8-e81b-48ad-afa6-ecc4c52161a5-klusterlet-config\") pod \"klusterlet-addon-workmgr-85fd88fbc6-tvx65\" (UID: \"402948b8-e81b-48ad-afa6-ecc4c52161a5\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-85fd88fbc6-tvx65" Apr 16 17:44:36.297486 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:36.297343 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/402948b8-e81b-48ad-afa6-ecc4c52161a5-tmp\") pod \"klusterlet-addon-workmgr-85fd88fbc6-tvx65\" (UID: \"402948b8-e81b-48ad-afa6-ecc4c52161a5\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-85fd88fbc6-tvx65" Apr 16 17:44:36.297486 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:36.297384 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-86fn2\" (UniqueName: \"kubernetes.io/projected/402948b8-e81b-48ad-afa6-ecc4c52161a5-kube-api-access-86fn2\") pod \"klusterlet-addon-workmgr-85fd88fbc6-tvx65\" (UID: \"402948b8-e81b-48ad-afa6-ecc4c52161a5\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-85fd88fbc6-tvx65" Apr 16 17:44:36.297750 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:36.297731 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/402948b8-e81b-48ad-afa6-ecc4c52161a5-tmp\") pod \"klusterlet-addon-workmgr-85fd88fbc6-tvx65\" (UID: \"402948b8-e81b-48ad-afa6-ecc4c52161a5\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-85fd88fbc6-tvx65" Apr 16 17:44:36.299696 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:36.299677 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/402948b8-e81b-48ad-afa6-ecc4c52161a5-klusterlet-config\") pod \"klusterlet-addon-workmgr-85fd88fbc6-tvx65\" (UID: 
\"402948b8-e81b-48ad-afa6-ecc4c52161a5\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-85fd88fbc6-tvx65" Apr 16 17:44:36.306888 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:36.306866 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-86fn2\" (UniqueName: \"kubernetes.io/projected/402948b8-e81b-48ad-afa6-ecc4c52161a5-kube-api-access-86fn2\") pod \"klusterlet-addon-workmgr-85fd88fbc6-tvx65\" (UID: \"402948b8-e81b-48ad-afa6-ecc4c52161a5\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-85fd88fbc6-tvx65" Apr 16 17:44:36.344735 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:36.344706 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-585c79d59c-89rjt" Apr 16 17:44:36.396451 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:36.396424 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-85fd88fbc6-tvx65" Apr 16 17:44:36.477185 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:36.477154 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-585c79d59c-89rjt"] Apr 16 17:44:36.481271 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:44:36.481242 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17900f8e_b3db_4a5e_875e_16839317baeb.slice/crio-05a49a1630f5b58bd37019d7f9a2665c8c17252037ca924853286f9ab524faaa WatchSource:0}: Error finding container 05a49a1630f5b58bd37019d7f9a2665c8c17252037ca924853286f9ab524faaa: Status 404 returned error can't find the container with id 05a49a1630f5b58bd37019d7f9a2665c8c17252037ca924853286f9ab524faaa Apr 16 17:44:36.524214 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:36.524110 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-85fd88fbc6-tvx65"] Apr 16 17:44:36.526690 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:44:36.526661 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod402948b8_e81b_48ad_afa6_ecc4c52161a5.slice/crio-e76e91fa1c4ec61002ca33e5787cd15d90c63aca1ca58104ae3542da93e903d4 WatchSource:0}: Error finding container e76e91fa1c4ec61002ca33e5787cd15d90c63aca1ca58104ae3542da93e903d4: Status 404 returned error can't find the container with id e76e91fa1c4ec61002ca33e5787cd15d90c63aca1ca58104ae3542da93e903d4 Apr 16 17:44:36.992429 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:36.992390 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-585c79d59c-89rjt" event={"ID":"17900f8e-b3db-4a5e-875e-16839317baeb","Type":"ContainerStarted","Data":"05a49a1630f5b58bd37019d7f9a2665c8c17252037ca924853286f9ab524faaa"} Apr 16 17:44:36.993312 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:36.993284 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-85fd88fbc6-tvx65" event={"ID":"402948b8-e81b-48ad-afa6-ecc4c52161a5","Type":"ContainerStarted","Data":"e76e91fa1c4ec61002ca33e5787cd15d90c63aca1ca58104ae3542da93e903d4"} Apr 16 17:44:41.009588 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:41.009542 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-85fd88fbc6-tvx65" event={"ID":"402948b8-e81b-48ad-afa6-ecc4c52161a5","Type":"ContainerStarted","Data":"84585cace9a2bca9b7f54419dc1ed148d45093a6e93ebbd242961b8bfec64e06"} Apr 16 17:44:41.010095 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:41.010043 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-85fd88fbc6-tvx65" Apr 16 17:44:41.011593 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:41.011568 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-85fd88fbc6-tvx65" Apr 16 17:44:41.029963 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:41.029593 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-85fd88fbc6-tvx65" podStartSLOduration=1.088700483 podStartE2EDuration="5.029573641s" podCreationTimestamp="2026-04-16 17:44:36 +0000 UTC" firstStartedPulling="2026-04-16 17:44:36.52849954 +0000 UTC m=+244.891708391" lastFinishedPulling="2026-04-16 17:44:40.469372683 +0000 UTC m=+248.832581549" observedRunningTime="2026-04-16 17:44:41.028562157 +0000 UTC m=+249.391771031" watchObservedRunningTime="2026-04-16 17:44:41.029573641 +0000 UTC m=+249.392782515" Apr 16 17:44:42.013354 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:42.013319 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-585c79d59c-89rjt" event={"ID":"17900f8e-b3db-4a5e-875e-16839317baeb","Type":"ContainerStarted","Data":"519c4097f3a366648637b0c0823073f2d1729967ff9b00d04629650978144731"} Apr 16 17:44:44.021119 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:44.021083 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-console/console-7fff4cf859-4cct7" podUID="bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8" containerName="console" containerID="cri-o://47c48b78b6509a08f67a265b29fe933b12e16304775664810d85a94a4c0cd099" gracePeriod=15 Apr 16 17:44:44.258139 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:44.258117 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7fff4cf859-4cct7_bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8/console/0.log" Apr 16 17:44:44.258259 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:44.258177 2560 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7fff4cf859-4cct7" Apr 16 17:44:44.279306 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:44.279238 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-585c79d59c-89rjt" podStartSLOduration=4.457599259 podStartE2EDuration="9.27921967s" podCreationTimestamp="2026-04-16 17:44:35 +0000 UTC" firstStartedPulling="2026-04-16 17:44:36.483524706 +0000 UTC m=+244.846733557" lastFinishedPulling="2026-04-16 17:44:41.305145113 +0000 UTC m=+249.668353968" observedRunningTime="2026-04-16 17:44:42.033217974 +0000 UTC m=+250.396426847" watchObservedRunningTime="2026-04-16 17:44:44.27921967 +0000 UTC m=+252.642428543" Apr 16 17:44:44.369039 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:44.368996 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-console-config\") pod \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\" (UID: \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\") " Apr 16 17:44:44.369039 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:44.369036 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-console-serving-cert\") pod \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\" (UID: \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\") " Apr 16 17:44:44.369295 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:44.369075 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tz8kj\" (UniqueName: \"kubernetes.io/projected/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-kube-api-access-tz8kj\") pod \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\" (UID: \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\") " Apr 16 17:44:44.369295 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:44.369105 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-oauth-serving-cert\") pod \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\" (UID: \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\") " Apr 16 17:44:44.369295 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:44.369120 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-trusted-ca-bundle\") pod \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\" (UID: \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\") " Apr 16 17:44:44.369295 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:44.369145 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-service-ca\") pod \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\" (UID: \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\") " Apr 16 17:44:44.369295 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:44.369193 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-console-oauth-config\") pod \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\" (UID: \"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8\") " Apr 16 17:44:44.369556 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:44.369519 2560 operation_generator.go:781] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-console-config" (OuterVolumeSpecName: "console-config") pod "bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8" (UID: "bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 17:44:44.369622 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:44.369538 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8" (UID: "bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 17:44:44.369622 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:44.369568 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-service-ca" (OuterVolumeSpecName: "service-ca") pod "bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8" (UID: "bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 17:44:44.369715 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:44.369620 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8" (UID: "bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 17:44:44.371408 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:44.371379 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-kube-api-access-tz8kj" (OuterVolumeSpecName: "kube-api-access-tz8kj") pod "bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8" (UID: "bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8"). InnerVolumeSpecName "kube-api-access-tz8kj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 17:44:44.371517 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:44.371456 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8" (UID: "bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 17:44:44.371517 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:44.371496 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8" (UID: "bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 17:44:44.470747 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:44.470697 2560 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-console-oauth-config\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:44:44.470747 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:44.470739 2560 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-console-config\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:44:44.470747 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:44.470750 2560 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-console-serving-cert\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:44:44.470747 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:44.470759 2560 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tz8kj\" (UniqueName: \"kubernetes.io/projected/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-kube-api-access-tz8kj\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:44:44.470747 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:44.470769 2560 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-oauth-serving-cert\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:44:44.471082 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:44.470778 2560 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-trusted-ca-bundle\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:44:44.471082 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:44.470787 2560 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8-service-ca\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:44:45.022346 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:45.022319 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7fff4cf859-4cct7_bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8/console/0.log" Apr 16 17:44:45.022828 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:45.022360 2560 generic.go:358] "Generic (PLEG): container finished" podID="bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8" containerID="47c48b78b6509a08f67a265b29fe933b12e16304775664810d85a94a4c0cd099" exitCode=2 Apr 16 17:44:45.022828 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:45.022448 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7fff4cf859-4cct7" event={"ID":"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8","Type":"ContainerDied","Data":"47c48b78b6509a08f67a265b29fe933b12e16304775664810d85a94a4c0cd099"} Apr 16 17:44:45.022828 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:45.022468 2560 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7fff4cf859-4cct7" Apr 16 17:44:45.022828 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:45.022486 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7fff4cf859-4cct7" event={"ID":"bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8","Type":"ContainerDied","Data":"496dc76d369b12eedf2249d4a1c836c016308c9e688450b318326208dd3c2f63"} Apr 16 17:44:45.022828 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:45.022501 2560 scope.go:117] "RemoveContainer" containerID="47c48b78b6509a08f67a265b29fe933b12e16304775664810d85a94a4c0cd099" Apr 16 17:44:45.030708 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:45.030686 2560 scope.go:117] "RemoveContainer" containerID="47c48b78b6509a08f67a265b29fe933b12e16304775664810d85a94a4c0cd099" Apr 16 17:44:45.031030 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:44:45.031005 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47c48b78b6509a08f67a265b29fe933b12e16304775664810d85a94a4c0cd099\": container with ID starting with 47c48b78b6509a08f67a265b29fe933b12e16304775664810d85a94a4c0cd099 not found: ID does not exist" containerID="47c48b78b6509a08f67a265b29fe933b12e16304775664810d85a94a4c0cd099" Apr 16 17:44:45.031117 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:45.031053 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47c48b78b6509a08f67a265b29fe933b12e16304775664810d85a94a4c0cd099"} err="failed to get container status \"47c48b78b6509a08f67a265b29fe933b12e16304775664810d85a94a4c0cd099\": rpc error: code = NotFound desc = could not find container \"47c48b78b6509a08f67a265b29fe933b12e16304775664810d85a94a4c0cd099\": container with ID starting with 47c48b78b6509a08f67a265b29fe933b12e16304775664810d85a94a4c0cd099 not found: ID does not exist" Apr 16 17:44:45.051290 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:45.051261 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7fff4cf859-4cct7"] Apr 16 17:44:45.056791 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:45.056770 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7fff4cf859-4cct7"] Apr 16 17:44:46.161893 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:44:46.161858 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8" path="/var/lib/kubelet/pods/bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8/volumes" Apr 16 17:45:31.564960 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:31.564918 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/keda-operator-ffbb595cb-9rhfj"] Apr 16 17:45:31.565510 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:31.565480 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8" containerName="console" Apr 16 17:45:31.565510 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:31.565503 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8" containerName="console" Apr 16 17:45:31.565692 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:31.565624 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="bb1aeecb-16cd-40b4-a5ad-55b2bd24edf8" containerName="console" Apr 16 17:45:31.568851 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:31.568813 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-keda/keda-operator-ffbb595cb-9rhfj" Apr 16 17:45:31.571253 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:31.571227 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"keda-ocp-cabundle\"" Apr 16 17:45:31.571253 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:31.571239 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"kedaorg-certs\"" Apr 16 17:45:31.571424 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:31.571258 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"openshift-service-ca.crt\"" Apr 16 17:45:31.571424 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:31.571293 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"kube-root-ca.crt\"" Apr 16 17:45:31.571424 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:31.571239 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-operator-certs\"" Apr 16 17:45:31.571730 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:31.571716 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-operator-dockercfg-spg6g\"" Apr 16 17:45:31.578918 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:31.578897 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-operator-ffbb595cb-9rhfj"] Apr 16 17:45:31.657594 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:31.657563 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/94593b18-b081-42c9-abd4-068257869c9b-certificates\") pod \"keda-operator-ffbb595cb-9rhfj\" (UID: \"94593b18-b081-42c9-abd4-068257869c9b\") " pod="openshift-keda/keda-operator-ffbb595cb-9rhfj" Apr 16 17:45:31.657759 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:31.657607 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cabundle0\" (UniqueName: \"kubernetes.io/configmap/94593b18-b081-42c9-abd4-068257869c9b-cabundle0\") pod \"keda-operator-ffbb595cb-9rhfj\" (UID: \"94593b18-b081-42c9-abd4-068257869c9b\") " pod="openshift-keda/keda-operator-ffbb595cb-9rhfj" Apr 16 17:45:31.657759 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:31.657683 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qldtw\" (UniqueName: \"kubernetes.io/projected/94593b18-b081-42c9-abd4-068257869c9b-kube-api-access-qldtw\") pod \"keda-operator-ffbb595cb-9rhfj\" (UID: \"94593b18-b081-42c9-abd4-068257869c9b\") " pod="openshift-keda/keda-operator-ffbb595cb-9rhfj" Apr 16 17:45:31.758966 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:31.758931 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cabundle0\" (UniqueName: \"kubernetes.io/configmap/94593b18-b081-42c9-abd4-068257869c9b-cabundle0\") pod \"keda-operator-ffbb595cb-9rhfj\" (UID: \"94593b18-b081-42c9-abd4-068257869c9b\") " pod="openshift-keda/keda-operator-ffbb595cb-9rhfj" Apr 16 17:45:31.759182 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:31.758990 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qldtw\" (UniqueName: \"kubernetes.io/projected/94593b18-b081-42c9-abd4-068257869c9b-kube-api-access-qldtw\") pod \"keda-operator-ffbb595cb-9rhfj\" (UID: 
\"94593b18-b081-42c9-abd4-068257869c9b\") " pod="openshift-keda/keda-operator-ffbb595cb-9rhfj" Apr 16 17:45:31.759182 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:31.759119 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/94593b18-b081-42c9-abd4-068257869c9b-certificates\") pod \"keda-operator-ffbb595cb-9rhfj\" (UID: \"94593b18-b081-42c9-abd4-068257869c9b\") " pod="openshift-keda/keda-operator-ffbb595cb-9rhfj" Apr 16 17:45:31.759285 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:45:31.759250 2560 projected.go:264] Couldn't get secret openshift-keda/keda-operator-certs: secret "keda-operator-certs" not found Apr 16 17:45:31.759285 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:45:31.759270 2560 secret.go:281] references non-existent secret key: ca.crt Apr 16 17:45:31.759285 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:45:31.759278 2560 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt Apr 16 17:45:31.759380 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:45:31.759292 2560 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-9rhfj: [secret "keda-operator-certs" not found, references non-existent secret key: ca.crt] Apr 16 17:45:31.759380 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:45:31.759364 2560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/94593b18-b081-42c9-abd4-068257869c9b-certificates podName:94593b18-b081-42c9-abd4-068257869c9b nodeName:}" failed. No retries permitted until 2026-04-16 17:45:32.259347425 +0000 UTC m=+300.622556290 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/94593b18-b081-42c9-abd4-068257869c9b-certificates") pod "keda-operator-ffbb595cb-9rhfj" (UID: "94593b18-b081-42c9-abd4-068257869c9b") : [secret "keda-operator-certs" not found, references non-existent secret key: ca.crt] Apr 16 17:45:31.759582 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:31.759563 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cabundle0\" (UniqueName: \"kubernetes.io/configmap/94593b18-b081-42c9-abd4-068257869c9b-cabundle0\") pod \"keda-operator-ffbb595cb-9rhfj\" (UID: \"94593b18-b081-42c9-abd4-068257869c9b\") " pod="openshift-keda/keda-operator-ffbb595cb-9rhfj" Apr 16 17:45:31.769660 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:31.769629 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qldtw\" (UniqueName: \"kubernetes.io/projected/94593b18-b081-42c9-abd4-068257869c9b-kube-api-access-qldtw\") pod \"keda-operator-ffbb595cb-9rhfj\" (UID: \"94593b18-b081-42c9-abd4-068257869c9b\") " pod="openshift-keda/keda-operator-ffbb595cb-9rhfj" Apr 16 17:45:32.061120 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:32.061094 2560 kubelet.go:1628] "Image garbage collection succeeded" Apr 16 17:45:32.226987 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:32.226845 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/keda-admission-cf49989db-w2mtj"] Apr 16 17:45:32.233117 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:32.231441 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-keda/keda-admission-cf49989db-w2mtj" Apr 16 17:45:32.233604 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:32.233583 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-admission-webhooks-certs\"" Apr 16 17:45:32.240765 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:32.240741 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-admission-cf49989db-w2mtj"] Apr 16 17:45:32.263644 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:32.263609 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/94593b18-b081-42c9-abd4-068257869c9b-certificates\") pod \"keda-operator-ffbb595cb-9rhfj\" (UID: \"94593b18-b081-42c9-abd4-068257869c9b\") " pod="openshift-keda/keda-operator-ffbb595cb-9rhfj" Apr 16 17:45:32.263816 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:32.263655 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-788wm\" (UniqueName: \"kubernetes.io/projected/eb2d67d1-753d-49f2-a000-2fb5ab35634a-kube-api-access-788wm\") pod \"keda-admission-cf49989db-w2mtj\" (UID: \"eb2d67d1-753d-49f2-a000-2fb5ab35634a\") " pod="openshift-keda/keda-admission-cf49989db-w2mtj" Apr 16 17:45:32.263816 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:32.263679 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/eb2d67d1-753d-49f2-a000-2fb5ab35634a-certificates\") pod \"keda-admission-cf49989db-w2mtj\" (UID: \"eb2d67d1-753d-49f2-a000-2fb5ab35634a\") " pod="openshift-keda/keda-admission-cf49989db-w2mtj" Apr 16 17:45:32.263816 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:45:32.263767 2560 secret.go:281] references non-existent secret key: ca.crt Apr 16 17:45:32.263816 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:45:32.263782 2560 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt Apr 16 17:45:32.263816 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:45:32.263790 2560 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-9rhfj: references non-existent secret key: ca.crt Apr 16 17:45:32.264094 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:45:32.263855 2560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/94593b18-b081-42c9-abd4-068257869c9b-certificates podName:94593b18-b081-42c9-abd4-068257869c9b nodeName:}" failed. No retries permitted until 2026-04-16 17:45:33.263823438 +0000 UTC m=+301.627032299 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/94593b18-b081-42c9-abd4-068257869c9b-certificates") pod "keda-operator-ffbb595cb-9rhfj" (UID: "94593b18-b081-42c9-abd4-068257869c9b") : references non-existent secret key: ca.crt Apr 16 17:45:32.364242 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:32.364144 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-788wm\" (UniqueName: \"kubernetes.io/projected/eb2d67d1-753d-49f2-a000-2fb5ab35634a-kube-api-access-788wm\") pod \"keda-admission-cf49989db-w2mtj\" (UID: \"eb2d67d1-753d-49f2-a000-2fb5ab35634a\") " pod="openshift-keda/keda-admission-cf49989db-w2mtj" Apr 16 17:45:32.364242 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:32.364182 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/eb2d67d1-753d-49f2-a000-2fb5ab35634a-certificates\") pod \"keda-admission-cf49989db-w2mtj\" (UID: \"eb2d67d1-753d-49f2-a000-2fb5ab35634a\") " pod="openshift-keda/keda-admission-cf49989db-w2mtj" Apr 16 17:45:32.366574 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:32.366551 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/eb2d67d1-753d-49f2-a000-2fb5ab35634a-certificates\") pod \"keda-admission-cf49989db-w2mtj\" (UID: \"eb2d67d1-753d-49f2-a000-2fb5ab35634a\") " pod="openshift-keda/keda-admission-cf49989db-w2mtj" Apr 16 17:45:32.378155 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:32.378126 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-788wm\" (UniqueName: \"kubernetes.io/projected/eb2d67d1-753d-49f2-a000-2fb5ab35634a-kube-api-access-788wm\") pod \"keda-admission-cf49989db-w2mtj\" (UID: \"eb2d67d1-753d-49f2-a000-2fb5ab35634a\") " pod="openshift-keda/keda-admission-cf49989db-w2mtj" Apr 16 17:45:32.544124 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:32.544091 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-operator-dockercfg-spg6g\"" Apr 16 17:45:32.552291 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:32.552268 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-keda/keda-admission-cf49989db-w2mtj" Apr 16 17:45:32.674975 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:32.674942 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-admission-cf49989db-w2mtj"] Apr 16 17:45:32.678686 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:45:32.678650 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb2d67d1_753d_49f2_a000_2fb5ab35634a.slice/crio-6e5e2a87ef94335f420b2d5a542453ca3e45664f532e27ff6efe67d1634fc17d WatchSource:0}: Error finding container 6e5e2a87ef94335f420b2d5a542453ca3e45664f532e27ff6efe67d1634fc17d: Status 404 returned error can't find the container with id 6e5e2a87ef94335f420b2d5a542453ca3e45664f532e27ff6efe67d1634fc17d Apr 16 17:45:33.155458 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:33.155404 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-admission-cf49989db-w2mtj" event={"ID":"eb2d67d1-753d-49f2-a000-2fb5ab35634a","Type":"ContainerStarted","Data":"6e5e2a87ef94335f420b2d5a542453ca3e45664f532e27ff6efe67d1634fc17d"} Apr 16 17:45:33.273082 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:33.273036 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/94593b18-b081-42c9-abd4-068257869c9b-certificates\") pod \"keda-operator-ffbb595cb-9rhfj\" (UID: \"94593b18-b081-42c9-abd4-068257869c9b\") " pod="openshift-keda/keda-operator-ffbb595cb-9rhfj" Apr 16 17:45:33.273861 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:45:33.273572 2560 secret.go:281] references non-existent secret key: ca.crt Apr 16 17:45:33.273861 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:45:33.273599 2560 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt Apr 16 17:45:33.273861 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:45:33.273614 2560 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-9rhfj: references non-existent secret key: ca.crt Apr 16 17:45:33.273861 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:45:33.273688 2560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/94593b18-b081-42c9-abd4-068257869c9b-certificates podName:94593b18-b081-42c9-abd4-068257869c9b nodeName:}" failed. No retries permitted until 2026-04-16 17:45:35.273668719 +0000 UTC m=+303.636877588 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/94593b18-b081-42c9-abd4-068257869c9b-certificates") pod "keda-operator-ffbb595cb-9rhfj" (UID: "94593b18-b081-42c9-abd4-068257869c9b") : references non-existent secret key: ca.crt Apr 16 17:45:35.162221 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:35.162185 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-admission-cf49989db-w2mtj" event={"ID":"eb2d67d1-753d-49f2-a000-2fb5ab35634a","Type":"ContainerStarted","Data":"3dcc95766eeac0c67b1f8b807c013e79ef0211ae5528f15022390e53e67a867f"} Apr 16 17:45:35.162621 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:35.162342 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-admission-cf49989db-w2mtj" Apr 16 17:45:35.189533 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:35.189467 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/keda-admission-cf49989db-w2mtj" podStartSLOduration=1.095174524 podStartE2EDuration="3.189449233s" podCreationTimestamp="2026-04-16 17:45:32 +0000 UTC" firstStartedPulling="2026-04-16 17:45:32.680489244 +0000 UTC m=+301.043698094" lastFinishedPulling="2026-04-16 17:45:34.774763949 +0000 UTC m=+303.137972803" observedRunningTime="2026-04-16 17:45:35.187604341 +0000 UTC m=+303.550813224" watchObservedRunningTime="2026-04-16 17:45:35.189449233 +0000 UTC m=+303.552658108" Apr 16 17:45:35.291221 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:35.291183 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/94593b18-b081-42c9-abd4-068257869c9b-certificates\") pod \"keda-operator-ffbb595cb-9rhfj\" (UID: \"94593b18-b081-42c9-abd4-068257869c9b\") " pod="openshift-keda/keda-operator-ffbb595cb-9rhfj" Apr 16 17:45:35.291378 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:45:35.291334 2560 secret.go:281] references non-existent secret key: ca.crt Apr 16 17:45:35.291378 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:45:35.291352 2560 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt Apr 16 17:45:35.291378 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:45:35.291363 2560 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-9rhfj: references non-existent secret key: ca.crt Apr 16 17:45:35.291496 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:45:35.291427 2560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/94593b18-b081-42c9-abd4-068257869c9b-certificates podName:94593b18-b081-42c9-abd4-068257869c9b nodeName:}" failed. No retries permitted until 2026-04-16 17:45:39.291411502 +0000 UTC m=+307.654620352 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/94593b18-b081-42c9-abd4-068257869c9b-certificates") pod "keda-operator-ffbb595cb-9rhfj" (UID: "94593b18-b081-42c9-abd4-068257869c9b") : references non-existent secret key: ca.crt Apr 16 17:45:39.327331 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:39.327284 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/94593b18-b081-42c9-abd4-068257869c9b-certificates\") pod \"keda-operator-ffbb595cb-9rhfj\" (UID: \"94593b18-b081-42c9-abd4-068257869c9b\") " pod="openshift-keda/keda-operator-ffbb595cb-9rhfj" Apr 16 17:45:39.329733 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:39.329707 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/94593b18-b081-42c9-abd4-068257869c9b-certificates\") pod \"keda-operator-ffbb595cb-9rhfj\" (UID: \"94593b18-b081-42c9-abd4-068257869c9b\") " pod="openshift-keda/keda-operator-ffbb595cb-9rhfj" Apr 16 17:45:39.379333 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:39.379305 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-operator-ffbb595cb-9rhfj" Apr 16 17:45:39.500771 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:39.500739 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-operator-ffbb595cb-9rhfj"] Apr 16 17:45:39.504636 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:45:39.504608 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94593b18_b081_42c9_abd4_068257869c9b.slice/crio-e9137595bc632d46e5ef2b94f34e0338bbb99a433924a7040a3b3bc2792ba850 WatchSource:0}: Error finding container e9137595bc632d46e5ef2b94f34e0338bbb99a433924a7040a3b3bc2792ba850: Status 404 returned error can't find the container with id e9137595bc632d46e5ef2b94f34e0338bbb99a433924a7040a3b3bc2792ba850 Apr 16 17:45:39.505901 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:39.505882 2560 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 17:45:40.178268 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:40.178233 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-operator-ffbb595cb-9rhfj" event={"ID":"94593b18-b081-42c9-abd4-068257869c9b","Type":"ContainerStarted","Data":"e9137595bc632d46e5ef2b94f34e0338bbb99a433924a7040a3b3bc2792ba850"} Apr 16 17:45:43.188653 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:43.188613 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-operator-ffbb595cb-9rhfj" event={"ID":"94593b18-b081-42c9-abd4-068257869c9b","Type":"ContainerStarted","Data":"e63f353676e5df7d773fd3537d3dbf1b0f3168c8e1d0061b1e4b4e6a17491cd8"} Apr 16 17:45:43.189087 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:43.188771 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-operator-ffbb595cb-9rhfj" Apr 16 17:45:43.208054 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:43.208005 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/keda-operator-ffbb595cb-9rhfj" podStartSLOduration=8.6665672 podStartE2EDuration="12.207991824s" podCreationTimestamp="2026-04-16 17:45:31 +0000 UTC" firstStartedPulling="2026-04-16 17:45:39.506079769 +0000 UTC m=+307.869288626" lastFinishedPulling="2026-04-16 
17:45:43.04750439 +0000 UTC m=+311.410713250" observedRunningTime="2026-04-16 17:45:43.206528166 +0000 UTC m=+311.569737038" watchObservedRunningTime="2026-04-16 17:45:43.207991824 +0000 UTC m=+311.571200697" Apr 16 17:45:56.166973 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:45:56.166940 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-admission-cf49989db-w2mtj" Apr 16 17:46:04.193614 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:46:04.193580 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-operator-ffbb595cb-9rhfj" Apr 16 17:46:41.495817 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:46:41.495777 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/kserve-controller-manager-7f8f4564d-78fjq"] Apr 16 17:46:41.499282 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:46:41.499260 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-7f8f4564d-78fjq" Apr 16 17:46:41.501426 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:46:41.501403 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"kserve-webhook-server-cert\"" Apr 16 17:46:41.501500 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:46:41.501435 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"kube-root-ca.crt\"" Apr 16 17:46:41.501905 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:46:41.501891 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"openshift-service-ca.crt\"" Apr 16 17:46:41.502097 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:46:41.502082 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"kserve-controller-manager-dockercfg-9tv47\"" Apr 16 17:46:41.513404 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:46:41.513386 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-7f8f4564d-78fjq"] Apr 16 17:46:41.639180 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:46:41.639149 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fccd021f-3f44-4d0a-9e6c-851f64dc17b6-cert\") pod \"kserve-controller-manager-7f8f4564d-78fjq\" (UID: \"fccd021f-3f44-4d0a-9e6c-851f64dc17b6\") " pod="kserve/kserve-controller-manager-7f8f4564d-78fjq" Apr 16 17:46:41.639343 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:46:41.639189 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjxj7\" (UniqueName: \"kubernetes.io/projected/fccd021f-3f44-4d0a-9e6c-851f64dc17b6-kube-api-access-zjxj7\") pod \"kserve-controller-manager-7f8f4564d-78fjq\" (UID: \"fccd021f-3f44-4d0a-9e6c-851f64dc17b6\") " pod="kserve/kserve-controller-manager-7f8f4564d-78fjq" Apr 16 17:46:41.740489 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:46:41.740459 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fccd021f-3f44-4d0a-9e6c-851f64dc17b6-cert\") pod \"kserve-controller-manager-7f8f4564d-78fjq\" (UID: \"fccd021f-3f44-4d0a-9e6c-851f64dc17b6\") " pod="kserve/kserve-controller-manager-7f8f4564d-78fjq" Apr 16 17:46:41.740631 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:46:41.740496 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zjxj7\" (UniqueName: 
\"kubernetes.io/projected/fccd021f-3f44-4d0a-9e6c-851f64dc17b6-kube-api-access-zjxj7\") pod \"kserve-controller-manager-7f8f4564d-78fjq\" (UID: \"fccd021f-3f44-4d0a-9e6c-851f64dc17b6\") " pod="kserve/kserve-controller-manager-7f8f4564d-78fjq" Apr 16 17:46:41.742815 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:46:41.742794 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fccd021f-3f44-4d0a-9e6c-851f64dc17b6-cert\") pod \"kserve-controller-manager-7f8f4564d-78fjq\" (UID: \"fccd021f-3f44-4d0a-9e6c-851f64dc17b6\") " pod="kserve/kserve-controller-manager-7f8f4564d-78fjq" Apr 16 17:46:41.750782 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:46:41.750722 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjxj7\" (UniqueName: \"kubernetes.io/projected/fccd021f-3f44-4d0a-9e6c-851f64dc17b6-kube-api-access-zjxj7\") pod \"kserve-controller-manager-7f8f4564d-78fjq\" (UID: \"fccd021f-3f44-4d0a-9e6c-851f64dc17b6\") " pod="kserve/kserve-controller-manager-7f8f4564d-78fjq" Apr 16 17:46:41.809909 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:46:41.809881 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-7f8f4564d-78fjq" Apr 16 17:46:41.937144 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:46:41.937107 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-7f8f4564d-78fjq"] Apr 16 17:46:41.943771 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:46:41.941403 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfccd021f_3f44_4d0a_9e6c_851f64dc17b6.slice/crio-d94fead6c7c7e7d027eb406229936aebd352e839e2222751b0a6412e5b71eeb7 WatchSource:0}: Error finding container d94fead6c7c7e7d027eb406229936aebd352e839e2222751b0a6412e5b71eeb7: Status 404 returned error can't find the container with id d94fead6c7c7e7d027eb406229936aebd352e839e2222751b0a6412e5b71eeb7 Apr 16 17:46:42.357063 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:46:42.357029 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-7f8f4564d-78fjq" event={"ID":"fccd021f-3f44-4d0a-9e6c-851f64dc17b6","Type":"ContainerStarted","Data":"d94fead6c7c7e7d027eb406229936aebd352e839e2222751b0a6412e5b71eeb7"} Apr 16 17:46:45.366563 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:46:45.366529 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-7f8f4564d-78fjq" event={"ID":"fccd021f-3f44-4d0a-9e6c-851f64dc17b6","Type":"ContainerStarted","Data":"f561c5b95ea9767b428ea6c29db96ab86fd9aaa10e405123b27cd822daa7bfb0"} Apr 16 17:46:45.366991 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:46:45.366584 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/kserve-controller-manager-7f8f4564d-78fjq" Apr 16 17:46:45.386200 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:46:45.386139 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/kserve-controller-manager-7f8f4564d-78fjq" podStartSLOduration=1.953450234 podStartE2EDuration="4.386120675s" podCreationTimestamp="2026-04-16 17:46:41 +0000 UTC" firstStartedPulling="2026-04-16 17:46:41.943785052 +0000 UTC m=+370.306993906" lastFinishedPulling="2026-04-16 17:46:44.376455496 +0000 UTC m=+372.739664347" observedRunningTime="2026-04-16 17:46:45.3841576 +0000 UTC m=+373.747366484" watchObservedRunningTime="2026-04-16 
17:46:45.386120675 +0000 UTC m=+373.749329550" Apr 16 17:47:16.375220 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:16.375183 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/kserve-controller-manager-7f8f4564d-78fjq" Apr 16 17:47:18.094335 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:18.094300 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve/kserve-controller-manager-7f8f4564d-78fjq"] Apr 16 17:47:18.094739 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:18.094524 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve/kserve-controller-manager-7f8f4564d-78fjq" podUID="fccd021f-3f44-4d0a-9e6c-851f64dc17b6" containerName="manager" containerID="cri-o://f561c5b95ea9767b428ea6c29db96ab86fd9aaa10e405123b27cd822daa7bfb0" gracePeriod=10 Apr 16 17:47:18.124728 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:18.124698 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/kserve-controller-manager-7f8f4564d-mf4p9"] Apr 16 17:47:18.126661 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:18.126643 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-7f8f4564d-mf4p9" Apr 16 17:47:18.138352 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:18.138322 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-7f8f4564d-mf4p9"] Apr 16 17:47:18.233901 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:18.233871 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5zcw\" (UniqueName: \"kubernetes.io/projected/065e4f24-ec52-416c-b2bc-0b390a2cca88-kube-api-access-b5zcw\") pod \"kserve-controller-manager-7f8f4564d-mf4p9\" (UID: \"065e4f24-ec52-416c-b2bc-0b390a2cca88\") " pod="kserve/kserve-controller-manager-7f8f4564d-mf4p9" Apr 16 17:47:18.234054 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:18.233920 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/065e4f24-ec52-416c-b2bc-0b390a2cca88-cert\") pod \"kserve-controller-manager-7f8f4564d-mf4p9\" (UID: \"065e4f24-ec52-416c-b2bc-0b390a2cca88\") " pod="kserve/kserve-controller-manager-7f8f4564d-mf4p9" Apr 16 17:47:18.328007 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:18.327984 2560 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve/kserve-controller-manager-7f8f4564d-78fjq" Apr 16 17:47:18.335003 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:18.334981 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b5zcw\" (UniqueName: \"kubernetes.io/projected/065e4f24-ec52-416c-b2bc-0b390a2cca88-kube-api-access-b5zcw\") pod \"kserve-controller-manager-7f8f4564d-mf4p9\" (UID: \"065e4f24-ec52-416c-b2bc-0b390a2cca88\") " pod="kserve/kserve-controller-manager-7f8f4564d-mf4p9" Apr 16 17:47:18.335094 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:18.335016 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/065e4f24-ec52-416c-b2bc-0b390a2cca88-cert\") pod \"kserve-controller-manager-7f8f4564d-mf4p9\" (UID: \"065e4f24-ec52-416c-b2bc-0b390a2cca88\") " pod="kserve/kserve-controller-manager-7f8f4564d-mf4p9" Apr 16 17:47:18.337264 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:18.337243 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/065e4f24-ec52-416c-b2bc-0b390a2cca88-cert\") pod \"kserve-controller-manager-7f8f4564d-mf4p9\" (UID: \"065e4f24-ec52-416c-b2bc-0b390a2cca88\") " pod="kserve/kserve-controller-manager-7f8f4564d-mf4p9" Apr 16 17:47:18.344320 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:18.344301 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5zcw\" (UniqueName: \"kubernetes.io/projected/065e4f24-ec52-416c-b2bc-0b390a2cca88-kube-api-access-b5zcw\") pod \"kserve-controller-manager-7f8f4564d-mf4p9\" (UID: \"065e4f24-ec52-416c-b2bc-0b390a2cca88\") " pod="kserve/kserve-controller-manager-7f8f4564d-mf4p9" Apr 16 17:47:18.436113 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:18.436032 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fccd021f-3f44-4d0a-9e6c-851f64dc17b6-cert\") pod \"fccd021f-3f44-4d0a-9e6c-851f64dc17b6\" (UID: \"fccd021f-3f44-4d0a-9e6c-851f64dc17b6\") " Apr 16 17:47:18.436273 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:18.436147 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjxj7\" (UniqueName: \"kubernetes.io/projected/fccd021f-3f44-4d0a-9e6c-851f64dc17b6-kube-api-access-zjxj7\") pod \"fccd021f-3f44-4d0a-9e6c-851f64dc17b6\" (UID: \"fccd021f-3f44-4d0a-9e6c-851f64dc17b6\") " Apr 16 17:47:18.438144 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:18.438118 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fccd021f-3f44-4d0a-9e6c-851f64dc17b6-cert" (OuterVolumeSpecName: "cert") pod "fccd021f-3f44-4d0a-9e6c-851f64dc17b6" (UID: "fccd021f-3f44-4d0a-9e6c-851f64dc17b6"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 17:47:18.438219 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:18.438159 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fccd021f-3f44-4d0a-9e6c-851f64dc17b6-kube-api-access-zjxj7" (OuterVolumeSpecName: "kube-api-access-zjxj7") pod "fccd021f-3f44-4d0a-9e6c-851f64dc17b6" (UID: "fccd021f-3f44-4d0a-9e6c-851f64dc17b6"). InnerVolumeSpecName "kube-api-access-zjxj7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 17:47:18.456905 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:18.456872 2560 generic.go:358] "Generic (PLEG): container finished" podID="fccd021f-3f44-4d0a-9e6c-851f64dc17b6" containerID="f561c5b95ea9767b428ea6c29db96ab86fd9aaa10e405123b27cd822daa7bfb0" exitCode=0 Apr 16 17:47:18.457019 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:18.456919 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-7f8f4564d-78fjq" event={"ID":"fccd021f-3f44-4d0a-9e6c-851f64dc17b6","Type":"ContainerDied","Data":"f561c5b95ea9767b428ea6c29db96ab86fd9aaa10e405123b27cd822daa7bfb0"} Apr 16 17:47:18.457019 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:18.456950 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-7f8f4564d-78fjq" Apr 16 17:47:18.457019 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:18.456965 2560 scope.go:117] "RemoveContainer" containerID="f561c5b95ea9767b428ea6c29db96ab86fd9aaa10e405123b27cd822daa7bfb0" Apr 16 17:47:18.457135 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:18.456955 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-7f8f4564d-78fjq" event={"ID":"fccd021f-3f44-4d0a-9e6c-851f64dc17b6","Type":"ContainerDied","Data":"d94fead6c7c7e7d027eb406229936aebd352e839e2222751b0a6412e5b71eeb7"} Apr 16 17:47:18.464959 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:18.464939 2560 scope.go:117] "RemoveContainer" containerID="f561c5b95ea9767b428ea6c29db96ab86fd9aaa10e405123b27cd822daa7bfb0" Apr 16 17:47:18.465221 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:47:18.465202 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f561c5b95ea9767b428ea6c29db96ab86fd9aaa10e405123b27cd822daa7bfb0\": container with ID starting with f561c5b95ea9767b428ea6c29db96ab86fd9aaa10e405123b27cd822daa7bfb0 not found: ID does not exist" containerID="f561c5b95ea9767b428ea6c29db96ab86fd9aaa10e405123b27cd822daa7bfb0" Apr 16 17:47:18.465274 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:18.465231 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f561c5b95ea9767b428ea6c29db96ab86fd9aaa10e405123b27cd822daa7bfb0"} err="failed to get container status \"f561c5b95ea9767b428ea6c29db96ab86fd9aaa10e405123b27cd822daa7bfb0\": rpc error: code = NotFound desc = could not find container \"f561c5b95ea9767b428ea6c29db96ab86fd9aaa10e405123b27cd822daa7bfb0\": container with ID starting with f561c5b95ea9767b428ea6c29db96ab86fd9aaa10e405123b27cd822daa7bfb0 not found: ID does not exist" Apr 16 17:47:18.472455 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:18.472439 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/kserve-controller-manager-7f8f4564d-mf4p9" Apr 16 17:47:18.480665 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:18.480640 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve/kserve-controller-manager-7f8f4564d-78fjq"] Apr 16 17:47:18.486323 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:18.486302 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve/kserve-controller-manager-7f8f4564d-78fjq"] Apr 16 17:47:18.537294 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:18.537263 2560 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fccd021f-3f44-4d0a-9e6c-851f64dc17b6-cert\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:47:18.537294 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:18.537293 2560 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zjxj7\" (UniqueName: \"kubernetes.io/projected/fccd021f-3f44-4d0a-9e6c-851f64dc17b6-kube-api-access-zjxj7\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:47:18.599681 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:18.599655 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-7f8f4564d-mf4p9"] Apr 16 17:47:18.602229 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:47:18.602196 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod065e4f24_ec52_416c_b2bc_0b390a2cca88.slice/crio-fe407eefc43e7e891a17be3bffcb75a074e78fca59ce001c67b74e5fa4699355 WatchSource:0}: Error finding container fe407eefc43e7e891a17be3bffcb75a074e78fca59ce001c67b74e5fa4699355: Status 404 returned error can't find the container with id fe407eefc43e7e891a17be3bffcb75a074e78fca59ce001c67b74e5fa4699355 Apr 16 17:47:19.462076 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:19.462042 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-7f8f4564d-mf4p9" event={"ID":"065e4f24-ec52-416c-b2bc-0b390a2cca88","Type":"ContainerStarted","Data":"21ebcf0cfced64542586a5521acf3458a568d705a59922a406fbfad3f3c74943"} Apr 16 17:47:19.462076 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:19.462082 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-7f8f4564d-mf4p9" event={"ID":"065e4f24-ec52-416c-b2bc-0b390a2cca88","Type":"ContainerStarted","Data":"fe407eefc43e7e891a17be3bffcb75a074e78fca59ce001c67b74e5fa4699355"} Apr 16 17:47:19.462520 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:19.462111 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/kserve-controller-manager-7f8f4564d-mf4p9" Apr 16 17:47:19.482580 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:19.482512 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/kserve-controller-manager-7f8f4564d-mf4p9" podStartSLOduration=1.120732508 podStartE2EDuration="1.482480826s" podCreationTimestamp="2026-04-16 17:47:18 +0000 UTC" firstStartedPulling="2026-04-16 17:47:18.603449423 +0000 UTC m=+406.966658275" lastFinishedPulling="2026-04-16 17:47:18.965197735 +0000 UTC m=+407.328406593" observedRunningTime="2026-04-16 17:47:19.479893093 +0000 UTC m=+407.843101963" watchObservedRunningTime="2026-04-16 17:47:19.482480826 +0000 UTC m=+407.845689700" Apr 16 17:47:20.161723 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:20.161690 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="fccd021f-3f44-4d0a-9e6c-851f64dc17b6" path="/var/lib/kubelet/pods/fccd021f-3f44-4d0a-9e6c-851f64dc17b6/volumes" Apr 16 17:47:29.850696 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:29.850664 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-7d49b764cf-d8tht"] Apr 16 17:47:29.851106 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:29.851047 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fccd021f-3f44-4d0a-9e6c-851f64dc17b6" containerName="manager" Apr 16 17:47:29.851106 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:29.851060 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="fccd021f-3f44-4d0a-9e6c-851f64dc17b6" containerName="manager" Apr 16 17:47:29.851183 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:29.851127 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="fccd021f-3f44-4d0a-9e6c-851f64dc17b6" containerName="manager" Apr 16 17:47:29.853097 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:29.853077 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7d49b764cf-d8tht" Apr 16 17:47:29.864651 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:29.864627 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7d49b764cf-d8tht"] Apr 16 17:47:29.934777 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:29.934745 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq4x8\" (UniqueName: \"kubernetes.io/projected/9f7d644b-bdf4-431e-acdc-20b326ef09bc-kube-api-access-jq4x8\") pod \"console-7d49b764cf-d8tht\" (UID: \"9f7d644b-bdf4-431e-acdc-20b326ef09bc\") " pod="openshift-console/console-7d49b764cf-d8tht" Apr 16 17:47:29.934777 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:29.934782 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9f7d644b-bdf4-431e-acdc-20b326ef09bc-console-serving-cert\") pod \"console-7d49b764cf-d8tht\" (UID: \"9f7d644b-bdf4-431e-acdc-20b326ef09bc\") " pod="openshift-console/console-7d49b764cf-d8tht" Apr 16 17:47:29.935066 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:29.934799 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9f7d644b-bdf4-431e-acdc-20b326ef09bc-console-config\") pod \"console-7d49b764cf-d8tht\" (UID: \"9f7d644b-bdf4-431e-acdc-20b326ef09bc\") " pod="openshift-console/console-7d49b764cf-d8tht" Apr 16 17:47:29.935066 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:29.934828 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9f7d644b-bdf4-431e-acdc-20b326ef09bc-console-oauth-config\") pod \"console-7d49b764cf-d8tht\" (UID: \"9f7d644b-bdf4-431e-acdc-20b326ef09bc\") " pod="openshift-console/console-7d49b764cf-d8tht" Apr 16 17:47:29.935066 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:29.934935 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9f7d644b-bdf4-431e-acdc-20b326ef09bc-oauth-serving-cert\") pod \"console-7d49b764cf-d8tht\" (UID: \"9f7d644b-bdf4-431e-acdc-20b326ef09bc\") " pod="openshift-console/console-7d49b764cf-d8tht" Apr 16 17:47:29.935066 
ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:29.934998 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9f7d644b-bdf4-431e-acdc-20b326ef09bc-service-ca\") pod \"console-7d49b764cf-d8tht\" (UID: \"9f7d644b-bdf4-431e-acdc-20b326ef09bc\") " pod="openshift-console/console-7d49b764cf-d8tht" Apr 16 17:47:29.935066 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:29.935054 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f7d644b-bdf4-431e-acdc-20b326ef09bc-trusted-ca-bundle\") pod \"console-7d49b764cf-d8tht\" (UID: \"9f7d644b-bdf4-431e-acdc-20b326ef09bc\") " pod="openshift-console/console-7d49b764cf-d8tht" Apr 16 17:47:30.036345 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:30.036297 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9f7d644b-bdf4-431e-acdc-20b326ef09bc-oauth-serving-cert\") pod \"console-7d49b764cf-d8tht\" (UID: \"9f7d644b-bdf4-431e-acdc-20b326ef09bc\") " pod="openshift-console/console-7d49b764cf-d8tht" Apr 16 17:47:30.036345 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:30.036365 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9f7d644b-bdf4-431e-acdc-20b326ef09bc-service-ca\") pod \"console-7d49b764cf-d8tht\" (UID: \"9f7d644b-bdf4-431e-acdc-20b326ef09bc\") " pod="openshift-console/console-7d49b764cf-d8tht" Apr 16 17:47:30.036602 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:30.036399 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f7d644b-bdf4-431e-acdc-20b326ef09bc-trusted-ca-bundle\") pod \"console-7d49b764cf-d8tht\" (UID: \"9f7d644b-bdf4-431e-acdc-20b326ef09bc\") " pod="openshift-console/console-7d49b764cf-d8tht" Apr 16 17:47:30.036602 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:30.036426 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jq4x8\" (UniqueName: \"kubernetes.io/projected/9f7d644b-bdf4-431e-acdc-20b326ef09bc-kube-api-access-jq4x8\") pod \"console-7d49b764cf-d8tht\" (UID: \"9f7d644b-bdf4-431e-acdc-20b326ef09bc\") " pod="openshift-console/console-7d49b764cf-d8tht" Apr 16 17:47:30.036602 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:30.036444 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9f7d644b-bdf4-431e-acdc-20b326ef09bc-console-serving-cert\") pod \"console-7d49b764cf-d8tht\" (UID: \"9f7d644b-bdf4-431e-acdc-20b326ef09bc\") " pod="openshift-console/console-7d49b764cf-d8tht" Apr 16 17:47:30.036602 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:30.036462 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9f7d644b-bdf4-431e-acdc-20b326ef09bc-console-config\") pod \"console-7d49b764cf-d8tht\" (UID: \"9f7d644b-bdf4-431e-acdc-20b326ef09bc\") " pod="openshift-console/console-7d49b764cf-d8tht" Apr 16 17:47:30.036809 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:30.036603 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/9f7d644b-bdf4-431e-acdc-20b326ef09bc-console-oauth-config\") pod \"console-7d49b764cf-d8tht\" (UID: \"9f7d644b-bdf4-431e-acdc-20b326ef09bc\") " pod="openshift-console/console-7d49b764cf-d8tht" Apr 16 17:47:30.037138 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:30.037117 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9f7d644b-bdf4-431e-acdc-20b326ef09bc-oauth-serving-cert\") pod \"console-7d49b764cf-d8tht\" (UID: \"9f7d644b-bdf4-431e-acdc-20b326ef09bc\") " pod="openshift-console/console-7d49b764cf-d8tht" Apr 16 17:47:30.037244 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:30.037200 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9f7d644b-bdf4-431e-acdc-20b326ef09bc-console-config\") pod \"console-7d49b764cf-d8tht\" (UID: \"9f7d644b-bdf4-431e-acdc-20b326ef09bc\") " pod="openshift-console/console-7d49b764cf-d8tht" Apr 16 17:47:30.037296 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:30.037262 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9f7d644b-bdf4-431e-acdc-20b326ef09bc-service-ca\") pod \"console-7d49b764cf-d8tht\" (UID: \"9f7d644b-bdf4-431e-acdc-20b326ef09bc\") " pod="openshift-console/console-7d49b764cf-d8tht" Apr 16 17:47:30.037489 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:30.037470 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f7d644b-bdf4-431e-acdc-20b326ef09bc-trusted-ca-bundle\") pod \"console-7d49b764cf-d8tht\" (UID: \"9f7d644b-bdf4-431e-acdc-20b326ef09bc\") " pod="openshift-console/console-7d49b764cf-d8tht" Apr 16 17:47:30.038875 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:30.038856 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9f7d644b-bdf4-431e-acdc-20b326ef09bc-console-serving-cert\") pod \"console-7d49b764cf-d8tht\" (UID: \"9f7d644b-bdf4-431e-acdc-20b326ef09bc\") " pod="openshift-console/console-7d49b764cf-d8tht" Apr 16 17:47:30.039060 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:30.039043 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9f7d644b-bdf4-431e-acdc-20b326ef09bc-console-oauth-config\") pod \"console-7d49b764cf-d8tht\" (UID: \"9f7d644b-bdf4-431e-acdc-20b326ef09bc\") " pod="openshift-console/console-7d49b764cf-d8tht" Apr 16 17:47:30.046194 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:30.046172 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq4x8\" (UniqueName: \"kubernetes.io/projected/9f7d644b-bdf4-431e-acdc-20b326ef09bc-kube-api-access-jq4x8\") pod \"console-7d49b764cf-d8tht\" (UID: \"9f7d644b-bdf4-431e-acdc-20b326ef09bc\") " pod="openshift-console/console-7d49b764cf-d8tht" Apr 16 17:47:30.161807 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:30.161732 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7d49b764cf-d8tht" Apr 16 17:47:30.289458 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:30.289328 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7d49b764cf-d8tht"] Apr 16 17:47:30.292229 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:47:30.292199 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f7d644b_bdf4_431e_acdc_20b326ef09bc.slice/crio-05a6808f8530ac3f150523b5166f7a2ba4c25707b16b136c658aca924ecd3333 WatchSource:0}: Error finding container 05a6808f8530ac3f150523b5166f7a2ba4c25707b16b136c658aca924ecd3333: Status 404 returned error can't find the container with id 05a6808f8530ac3f150523b5166f7a2ba4c25707b16b136c658aca924ecd3333 Apr 16 17:47:30.493888 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:30.493789 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7d49b764cf-d8tht" event={"ID":"9f7d644b-bdf4-431e-acdc-20b326ef09bc","Type":"ContainerStarted","Data":"4e67a310310ed2d80412019c04d9afed51df431c114a998a5ef61823f0b395a0"} Apr 16 17:47:30.493888 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:30.493824 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7d49b764cf-d8tht" event={"ID":"9f7d644b-bdf4-431e-acdc-20b326ef09bc","Type":"ContainerStarted","Data":"05a6808f8530ac3f150523b5166f7a2ba4c25707b16b136c658aca924ecd3333"} Apr 16 17:47:30.514367 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:30.514309 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7d49b764cf-d8tht" podStartSLOduration=1.514269012 podStartE2EDuration="1.514269012s" podCreationTimestamp="2026-04-16 17:47:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:47:30.513287533 +0000 UTC m=+418.876496410" watchObservedRunningTime="2026-04-16 17:47:30.514269012 +0000 UTC m=+418.877477886" Apr 16 17:47:40.161889 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:40.161861 2560 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7d49b764cf-d8tht" Apr 16 17:47:40.161889 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:40.161896 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-7d49b764cf-d8tht" Apr 16 17:47:40.166305 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:40.166280 2560 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7d49b764cf-d8tht" Apr 16 17:47:40.524591 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:40.524509 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7d49b764cf-d8tht" Apr 16 17:47:40.581494 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:40.581457 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-766bd475d4-7tt9z"] Apr 16 17:47:50.470104 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:50.470065 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/kserve-controller-manager-7f8f4564d-mf4p9" Apr 16 17:47:51.953010 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:51.952881 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/model-serving-api-86f7b4b499-xh5jq"] Apr 16 17:47:51.956845 ip-10-0-134-233 kubenswrapper[2560]: 
I0416 17:47:51.956811 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/model-serving-api-86f7b4b499-xh5jq" Apr 16 17:47:51.959026 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:51.958976 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"model-serving-api-dockercfg-87f9d\"" Apr 16 17:47:51.959517 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:51.959489 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"model-serving-api-tls\"" Apr 16 17:47:51.968490 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:51.968464 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/model-serving-api-86f7b4b499-xh5jq"] Apr 16 17:47:52.031854 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:52.031802 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsc5h\" (UniqueName: \"kubernetes.io/projected/cf9e0a47-57c9-43b5-9cb9-26f3f33bc4a7-kube-api-access-fsc5h\") pod \"model-serving-api-86f7b4b499-xh5jq\" (UID: \"cf9e0a47-57c9-43b5-9cb9-26f3f33bc4a7\") " pod="kserve/model-serving-api-86f7b4b499-xh5jq" Apr 16 17:47:52.032032 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:52.031895 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/cf9e0a47-57c9-43b5-9cb9-26f3f33bc4a7-tls-certs\") pod \"model-serving-api-86f7b4b499-xh5jq\" (UID: \"cf9e0a47-57c9-43b5-9cb9-26f3f33bc4a7\") " pod="kserve/model-serving-api-86f7b4b499-xh5jq" Apr 16 17:47:52.132534 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:52.132493 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fsc5h\" (UniqueName: \"kubernetes.io/projected/cf9e0a47-57c9-43b5-9cb9-26f3f33bc4a7-kube-api-access-fsc5h\") pod \"model-serving-api-86f7b4b499-xh5jq\" (UID: \"cf9e0a47-57c9-43b5-9cb9-26f3f33bc4a7\") " pod="kserve/model-serving-api-86f7b4b499-xh5jq" Apr 16 17:47:52.132740 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:52.132544 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/cf9e0a47-57c9-43b5-9cb9-26f3f33bc4a7-tls-certs\") pod \"model-serving-api-86f7b4b499-xh5jq\" (UID: \"cf9e0a47-57c9-43b5-9cb9-26f3f33bc4a7\") " pod="kserve/model-serving-api-86f7b4b499-xh5jq" Apr 16 17:47:52.134979 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:52.134958 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/cf9e0a47-57c9-43b5-9cb9-26f3f33bc4a7-tls-certs\") pod \"model-serving-api-86f7b4b499-xh5jq\" (UID: \"cf9e0a47-57c9-43b5-9cb9-26f3f33bc4a7\") " pod="kserve/model-serving-api-86f7b4b499-xh5jq" Apr 16 17:47:52.142397 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:52.142370 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsc5h\" (UniqueName: \"kubernetes.io/projected/cf9e0a47-57c9-43b5-9cb9-26f3f33bc4a7-kube-api-access-fsc5h\") pod \"model-serving-api-86f7b4b499-xh5jq\" (UID: \"cf9e0a47-57c9-43b5-9cb9-26f3f33bc4a7\") " pod="kserve/model-serving-api-86f7b4b499-xh5jq" Apr 16 17:47:52.267910 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:52.267791 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/model-serving-api-86f7b4b499-xh5jq" Apr 16 17:47:52.396865 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:52.396814 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/model-serving-api-86f7b4b499-xh5jq"] Apr 16 17:47:52.400172 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:47:52.400111 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf9e0a47_57c9_43b5_9cb9_26f3f33bc4a7.slice/crio-f488ee554d7db8eb3aba341d96d6ba337348ec291911492a85a72fe7862c1709 WatchSource:0}: Error finding container f488ee554d7db8eb3aba341d96d6ba337348ec291911492a85a72fe7862c1709: Status 404 returned error can't find the container with id f488ee554d7db8eb3aba341d96d6ba337348ec291911492a85a72fe7862c1709 Apr 16 17:47:52.556235 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:52.556197 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/model-serving-api-86f7b4b499-xh5jq" event={"ID":"cf9e0a47-57c9-43b5-9cb9-26f3f33bc4a7","Type":"ContainerStarted","Data":"f488ee554d7db8eb3aba341d96d6ba337348ec291911492a85a72fe7862c1709"} Apr 16 17:47:54.565638 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:54.565600 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/model-serving-api-86f7b4b499-xh5jq" event={"ID":"cf9e0a47-57c9-43b5-9cb9-26f3f33bc4a7","Type":"ContainerStarted","Data":"1d9774eab5763cca0d48a1a8662d5371dcb17f8e5d4f7abaa510e5f7715509ef"} Apr 16 17:47:54.566240 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:54.566221 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/model-serving-api-86f7b4b499-xh5jq" Apr 16 17:47:54.592453 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:47:54.592398 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/model-serving-api-86f7b4b499-xh5jq" podStartSLOduration=1.927529802 podStartE2EDuration="3.592382668s" podCreationTimestamp="2026-04-16 17:47:51 +0000 UTC" firstStartedPulling="2026-04-16 17:47:52.402209757 +0000 UTC m=+440.765418608" lastFinishedPulling="2026-04-16 17:47:54.067062619 +0000 UTC m=+442.430271474" observedRunningTime="2026-04-16 17:47:54.591043364 +0000 UTC m=+442.954252255" watchObservedRunningTime="2026-04-16 17:47:54.592382668 +0000 UTC m=+442.955591584" Apr 16 17:48:05.605827 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:05.605760 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-console/console-766bd475d4-7tt9z" podUID="35d88534-6ec2-420e-91bc-e41c8e9a2909" containerName="console" containerID="cri-o://e212d4f2d3c150048ee309dd11470cede2542c562839c83385550b0e50e1a0bc" gracePeriod=15 Apr 16 17:48:05.843594 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:05.843569 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-766bd475d4-7tt9z_35d88534-6ec2-420e-91bc-e41c8e9a2909/console/0.log" Apr 16 17:48:05.843719 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:05.843635 2560 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-766bd475d4-7tt9z" Apr 16 17:48:05.953455 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:05.953363 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/35d88534-6ec2-420e-91bc-e41c8e9a2909-console-oauth-config\") pod \"35d88534-6ec2-420e-91bc-e41c8e9a2909\" (UID: \"35d88534-6ec2-420e-91bc-e41c8e9a2909\") " Apr 16 17:48:05.953455 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:05.953400 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35d88534-6ec2-420e-91bc-e41c8e9a2909-trusted-ca-bundle\") pod \"35d88534-6ec2-420e-91bc-e41c8e9a2909\" (UID: \"35d88534-6ec2-420e-91bc-e41c8e9a2909\") " Apr 16 17:48:05.953455 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:05.953429 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gk29z\" (UniqueName: \"kubernetes.io/projected/35d88534-6ec2-420e-91bc-e41c8e9a2909-kube-api-access-gk29z\") pod \"35d88534-6ec2-420e-91bc-e41c8e9a2909\" (UID: \"35d88534-6ec2-420e-91bc-e41c8e9a2909\") " Apr 16 17:48:05.953455 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:05.953451 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/35d88534-6ec2-420e-91bc-e41c8e9a2909-oauth-serving-cert\") pod \"35d88534-6ec2-420e-91bc-e41c8e9a2909\" (UID: \"35d88534-6ec2-420e-91bc-e41c8e9a2909\") " Apr 16 17:48:05.953779 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:05.953588 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35d88534-6ec2-420e-91bc-e41c8e9a2909-service-ca\") pod \"35d88534-6ec2-420e-91bc-e41c8e9a2909\" (UID: \"35d88534-6ec2-420e-91bc-e41c8e9a2909\") " Apr 16 17:48:05.953779 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:05.953624 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/35d88534-6ec2-420e-91bc-e41c8e9a2909-console-config\") pod \"35d88534-6ec2-420e-91bc-e41c8e9a2909\" (UID: \"35d88534-6ec2-420e-91bc-e41c8e9a2909\") " Apr 16 17:48:05.953919 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:05.953878 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/35d88534-6ec2-420e-91bc-e41c8e9a2909-console-serving-cert\") pod \"35d88534-6ec2-420e-91bc-e41c8e9a2909\" (UID: \"35d88534-6ec2-420e-91bc-e41c8e9a2909\") " Apr 16 17:48:05.953975 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:05.953919 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35d88534-6ec2-420e-91bc-e41c8e9a2909-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "35d88534-6ec2-420e-91bc-e41c8e9a2909" (UID: "35d88534-6ec2-420e-91bc-e41c8e9a2909"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 17:48:05.954030 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:05.953972 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35d88534-6ec2-420e-91bc-e41c8e9a2909-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "35d88534-6ec2-420e-91bc-e41c8e9a2909" (UID: "35d88534-6ec2-420e-91bc-e41c8e9a2909"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 17:48:05.954030 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:05.953992 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35d88534-6ec2-420e-91bc-e41c8e9a2909-console-config" (OuterVolumeSpecName: "console-config") pod "35d88534-6ec2-420e-91bc-e41c8e9a2909" (UID: "35d88534-6ec2-420e-91bc-e41c8e9a2909"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 17:48:05.954030 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:05.953990 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35d88534-6ec2-420e-91bc-e41c8e9a2909-service-ca" (OuterVolumeSpecName: "service-ca") pod "35d88534-6ec2-420e-91bc-e41c8e9a2909" (UID: "35d88534-6ec2-420e-91bc-e41c8e9a2909"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 17:48:05.954215 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:05.954197 2560 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35d88534-6ec2-420e-91bc-e41c8e9a2909-trusted-ca-bundle\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:48:05.954274 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:05.954221 2560 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/35d88534-6ec2-420e-91bc-e41c8e9a2909-oauth-serving-cert\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:48:05.954274 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:05.954238 2560 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35d88534-6ec2-420e-91bc-e41c8e9a2909-service-ca\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:48:05.954274 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:05.954252 2560 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/35d88534-6ec2-420e-91bc-e41c8e9a2909-console-config\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:48:05.955845 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:05.955814 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35d88534-6ec2-420e-91bc-e41c8e9a2909-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "35d88534-6ec2-420e-91bc-e41c8e9a2909" (UID: "35d88534-6ec2-420e-91bc-e41c8e9a2909"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 17:48:05.956170 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:05.956151 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35d88534-6ec2-420e-91bc-e41c8e9a2909-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "35d88534-6ec2-420e-91bc-e41c8e9a2909" (UID: "35d88534-6ec2-420e-91bc-e41c8e9a2909"). 
InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 17:48:05.956211 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:05.956166 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35d88534-6ec2-420e-91bc-e41c8e9a2909-kube-api-access-gk29z" (OuterVolumeSpecName: "kube-api-access-gk29z") pod "35d88534-6ec2-420e-91bc-e41c8e9a2909" (UID: "35d88534-6ec2-420e-91bc-e41c8e9a2909"). InnerVolumeSpecName "kube-api-access-gk29z". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 17:48:06.054826 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:06.054773 2560 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/35d88534-6ec2-420e-91bc-e41c8e9a2909-console-serving-cert\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:48:06.054826 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:06.054816 2560 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/35d88534-6ec2-420e-91bc-e41c8e9a2909-console-oauth-config\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:48:06.054826 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:06.054856 2560 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gk29z\" (UniqueName: \"kubernetes.io/projected/35d88534-6ec2-420e-91bc-e41c8e9a2909-kube-api-access-gk29z\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:48:06.575754 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:06.575725 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/model-serving-api-86f7b4b499-xh5jq" Apr 16 17:48:06.599423 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:06.599396 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-766bd475d4-7tt9z_35d88534-6ec2-420e-91bc-e41c8e9a2909/console/0.log" Apr 16 17:48:06.599596 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:06.599435 2560 generic.go:358] "Generic (PLEG): container finished" podID="35d88534-6ec2-420e-91bc-e41c8e9a2909" containerID="e212d4f2d3c150048ee309dd11470cede2542c562839c83385550b0e50e1a0bc" exitCode=2 Apr 16 17:48:06.599596 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:06.599470 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-766bd475d4-7tt9z" event={"ID":"35d88534-6ec2-420e-91bc-e41c8e9a2909","Type":"ContainerDied","Data":"e212d4f2d3c150048ee309dd11470cede2542c562839c83385550b0e50e1a0bc"} Apr 16 17:48:06.599596 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:06.599496 2560 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-766bd475d4-7tt9z" Apr 16 17:48:06.599596 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:06.599512 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-766bd475d4-7tt9z" event={"ID":"35d88534-6ec2-420e-91bc-e41c8e9a2909","Type":"ContainerDied","Data":"18cb055e9987fb7f109e249ac7c59440368ab30b7276ebe4d5cfa3f1efb64cb1"} Apr 16 17:48:06.599596 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:06.599530 2560 scope.go:117] "RemoveContainer" containerID="e212d4f2d3c150048ee309dd11470cede2542c562839c83385550b0e50e1a0bc" Apr 16 17:48:06.608757 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:06.608463 2560 scope.go:117] "RemoveContainer" containerID="e212d4f2d3c150048ee309dd11470cede2542c562839c83385550b0e50e1a0bc" Apr 16 17:48:06.608757 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:48:06.608742 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e212d4f2d3c150048ee309dd11470cede2542c562839c83385550b0e50e1a0bc\": container with ID starting with e212d4f2d3c150048ee309dd11470cede2542c562839c83385550b0e50e1a0bc not found: ID does not exist" containerID="e212d4f2d3c150048ee309dd11470cede2542c562839c83385550b0e50e1a0bc" Apr 16 17:48:06.609066 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:06.608769 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e212d4f2d3c150048ee309dd11470cede2542c562839c83385550b0e50e1a0bc"} err="failed to get container status \"e212d4f2d3c150048ee309dd11470cede2542c562839c83385550b0e50e1a0bc\": rpc error: code = NotFound desc = could not find container \"e212d4f2d3c150048ee309dd11470cede2542c562839c83385550b0e50e1a0bc\": container with ID starting with e212d4f2d3c150048ee309dd11470cede2542c562839c83385550b0e50e1a0bc not found: ID does not exist" Apr 16 17:48:06.626707 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:06.626677 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-766bd475d4-7tt9z"] Apr 16 17:48:06.634344 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:06.634318 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-766bd475d4-7tt9z"] Apr 16 17:48:08.161986 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:08.161950 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35d88534-6ec2-420e-91bc-e41c8e9a2909" path="/var/lib/kubelet/pods/35d88534-6ec2-420e-91bc-e41c8e9a2909/volumes" Apr 16 17:48:28.302405 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:28.302369 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-d60f9-predictor-54b667d74d-nv5ff"] Apr 16 17:48:28.302861 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:28.302690 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="35d88534-6ec2-420e-91bc-e41c8e9a2909" containerName="console" Apr 16 17:48:28.302861 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:28.302702 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="35d88534-6ec2-420e-91bc-e41c8e9a2909" containerName="console" Apr 16 17:48:28.302861 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:28.302769 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="35d88534-6ec2-420e-91bc-e41c8e9a2909" containerName="console" Apr 16 17:48:28.304635 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:28.304615 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-d60f9-predictor-54b667d74d-nv5ff" Apr 16 17:48:28.306946 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:28.306925 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-dq85w\"" Apr 16 17:48:28.314404 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:28.314383 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-d60f9-predictor-54b667d74d-nv5ff" Apr 16 17:48:28.318967 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:28.318944 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-d60f9-predictor-54b667d74d-nv5ff"] Apr 16 17:48:28.458179 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:28.458117 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-d60f9-predictor-54b667d74d-nv5ff"] Apr 16 17:48:28.464951 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:48:28.464907 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f2c2967_f958_4eab_aea7_0a669d53a4cc.slice/crio-dcac9d6f45e936ee23ba364ca531e72bc726579f5a360468c8f76dc25f85c538 WatchSource:0}: Error finding container dcac9d6f45e936ee23ba364ca531e72bc726579f5a360468c8f76dc25f85c538: Status 404 returned error can't find the container with id dcac9d6f45e936ee23ba364ca531e72bc726579f5a360468c8f76dc25f85c538 Apr 16 17:48:28.557880 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:28.557763 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-d60f9-predictor-bcfc89bfc-f6jf6"] Apr 16 17:48:28.560526 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:28.560503 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-d60f9-predictor-bcfc89bfc-f6jf6" Apr 16 17:48:28.570967 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:28.570940 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-d60f9-predictor-bcfc89bfc-f6jf6"] Apr 16 17:48:28.573349 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:28.573327 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-d60f9-predictor-bcfc89bfc-f6jf6" Apr 16 17:48:28.666471 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:28.666439 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-d60f9-predictor-54b667d74d-nv5ff" event={"ID":"4f2c2967-f958-4eab-aea7-0a669d53a4cc","Type":"ContainerStarted","Data":"dcac9d6f45e936ee23ba364ca531e72bc726579f5a360468c8f76dc25f85c538"} Apr 16 17:48:28.710039 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:28.710014 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-d60f9-predictor-bcfc89bfc-f6jf6"] Apr 16 17:48:28.712665 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:48:28.712636 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4d49867_4354_44b1_ad35_b63e9f5527ef.slice/crio-c2874673ed8760ad534abf4749d47b4f52b926af4a82f0358949ad0e529d43b2 WatchSource:0}: Error finding container c2874673ed8760ad534abf4749d47b4f52b926af4a82f0358949ad0e529d43b2: Status 404 returned error can't find the container with id c2874673ed8760ad534abf4749d47b4f52b926af4a82f0358949ad0e529d43b2 Apr 16 17:48:29.679457 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:29.679420 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-d60f9-predictor-bcfc89bfc-f6jf6" event={"ID":"a4d49867-4354-44b1-ad35-b63e9f5527ef","Type":"ContainerStarted","Data":"c2874673ed8760ad534abf4749d47b4f52b926af4a82f0358949ad0e529d43b2"} Apr 16 17:48:43.741150 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:43.741117 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-d60f9-predictor-54b667d74d-nv5ff" event={"ID":"4f2c2967-f958-4eab-aea7-0a669d53a4cc","Type":"ContainerStarted","Data":"d4ce9cf9703142db7a12aca625302c569cb167e0bf498d73596de62312b61028"} Apr 16 17:48:43.741598 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:43.741285 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-d60f9-predictor-54b667d74d-nv5ff" Apr 16 17:48:43.742479 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:43.742413 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-d60f9-predictor-54b667d74d-nv5ff" podUID="4f2c2967-f958-4eab-aea7-0a669d53a4cc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.27:8080: connect: connection refused" Apr 16 17:48:43.742613 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:43.742507 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-d60f9-predictor-bcfc89bfc-f6jf6" event={"ID":"a4d49867-4354-44b1-ad35-b63e9f5527ef","Type":"ContainerStarted","Data":"6bf2b7e470718a376090571714fa57ac437a580ef90441f946ef17301ed45ea5"} Apr 16 17:48:43.742728 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:43.742713 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-d60f9-predictor-bcfc89bfc-f6jf6" Apr 16 17:48:43.743735 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:43.743712 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-d60f9-predictor-bcfc89bfc-f6jf6" podUID="a4d49867-4354-44b1-ad35-b63e9f5527ef" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.28:8080: connect: connection refused" Apr 16 
17:48:43.762029 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:43.761982 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-d60f9-predictor-54b667d74d-nv5ff" podStartSLOduration=0.639623854 podStartE2EDuration="15.761951283s" podCreationTimestamp="2026-04-16 17:48:28 +0000 UTC" firstStartedPulling="2026-04-16 17:48:28.466892651 +0000 UTC m=+476.830101503" lastFinishedPulling="2026-04-16 17:48:43.589220076 +0000 UTC m=+491.952428932" observedRunningTime="2026-04-16 17:48:43.760167005 +0000 UTC m=+492.123375874" watchObservedRunningTime="2026-04-16 17:48:43.761951283 +0000 UTC m=+492.125160155" Apr 16 17:48:43.776436 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:43.776381 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/error-404-isvc-d60f9-predictor-bcfc89bfc-f6jf6" podStartSLOduration=0.899181557 podStartE2EDuration="15.776362033s" podCreationTimestamp="2026-04-16 17:48:28 +0000 UTC" firstStartedPulling="2026-04-16 17:48:28.714504517 +0000 UTC m=+477.077713371" lastFinishedPulling="2026-04-16 17:48:43.591684984 +0000 UTC m=+491.954893847" observedRunningTime="2026-04-16 17:48:43.775546145 +0000 UTC m=+492.138755029" watchObservedRunningTime="2026-04-16 17:48:43.776362033 +0000 UTC m=+492.139570907" Apr 16 17:48:44.745582 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:44.745542 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-d60f9-predictor-54b667d74d-nv5ff" podUID="4f2c2967-f958-4eab-aea7-0a669d53a4cc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.27:8080: connect: connection refused" Apr 16 17:48:44.745998 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:44.745541 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-d60f9-predictor-bcfc89bfc-f6jf6" podUID="a4d49867-4354-44b1-ad35-b63e9f5527ef" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.28:8080: connect: connection refused" Apr 16 17:48:54.745979 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:54.745926 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-d60f9-predictor-54b667d74d-nv5ff" podUID="4f2c2967-f958-4eab-aea7-0a669d53a4cc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.27:8080: connect: connection refused" Apr 16 17:48:54.746527 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:48:54.745926 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-d60f9-predictor-bcfc89bfc-f6jf6" podUID="a4d49867-4354-44b1-ad35-b63e9f5527ef" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.28:8080: connect: connection refused" Apr 16 17:49:04.746208 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:04.746161 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-d60f9-predictor-54b667d74d-nv5ff" podUID="4f2c2967-f958-4eab-aea7-0a669d53a4cc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.27:8080: connect: connection refused" Apr 16 17:49:04.746650 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:04.746161 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-d60f9-predictor-bcfc89bfc-f6jf6" podUID="a4d49867-4354-44b1-ad35-b63e9f5527ef" containerName="kserve-container" probeResult="failure" output="dial tcp 
10.133.0.28:8080: connect: connection refused" Apr 16 17:49:14.746231 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:14.746185 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-d60f9-predictor-bcfc89bfc-f6jf6" podUID="a4d49867-4354-44b1-ad35-b63e9f5527ef" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.28:8080: connect: connection refused" Apr 16 17:49:14.746655 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:14.746188 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-d60f9-predictor-54b667d74d-nv5ff" podUID="4f2c2967-f958-4eab-aea7-0a669d53a4cc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.27:8080: connect: connection refused" Apr 16 17:49:24.745951 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:24.745905 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-d60f9-predictor-54b667d74d-nv5ff" podUID="4f2c2967-f958-4eab-aea7-0a669d53a4cc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.27:8080: connect: connection refused" Apr 16 17:49:24.746359 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:24.745910 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-d60f9-predictor-bcfc89bfc-f6jf6" podUID="a4d49867-4354-44b1-ad35-b63e9f5527ef" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.28:8080: connect: connection refused" Apr 16 17:49:34.746960 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:34.746923 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-d60f9-predictor-bcfc89bfc-f6jf6" Apr 16 17:49:34.747448 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:34.746980 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-d60f9-predictor-54b667d74d-nv5ff" Apr 16 17:49:48.254807 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:48.254774 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/switch-graph-d60f9-db987dd58-m2b28"] Apr 16 17:49:48.257062 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:48.257039 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-d60f9-db987dd58-m2b28" Apr 16 17:49:48.259307 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:48.259286 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-d60f9-kube-rbac-proxy-sar-config\"" Apr 16 17:49:48.259465 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:48.259439 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-d60f9-serving-cert\"" Apr 16 17:49:48.259568 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:48.259550 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"openshift-service-ca.crt\"" Apr 16 17:49:48.268611 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:48.268591 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-d60f9-db987dd58-m2b28"] Apr 16 17:49:48.331363 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:48.331331 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3345376c-5e62-45e9-8781-1a5a06db51ea-proxy-tls\") pod \"switch-graph-d60f9-db987dd58-m2b28\" (UID: \"3345376c-5e62-45e9-8781-1a5a06db51ea\") " pod="kserve-ci-e2e-test/switch-graph-d60f9-db987dd58-m2b28" Apr 16 17:49:48.331498 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:48.331400 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3345376c-5e62-45e9-8781-1a5a06db51ea-openshift-service-ca-bundle\") pod \"switch-graph-d60f9-db987dd58-m2b28\" (UID: \"3345376c-5e62-45e9-8781-1a5a06db51ea\") " pod="kserve-ci-e2e-test/switch-graph-d60f9-db987dd58-m2b28" Apr 16 17:49:48.432641 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:48.432605 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3345376c-5e62-45e9-8781-1a5a06db51ea-proxy-tls\") pod \"switch-graph-d60f9-db987dd58-m2b28\" (UID: \"3345376c-5e62-45e9-8781-1a5a06db51ea\") " pod="kserve-ci-e2e-test/switch-graph-d60f9-db987dd58-m2b28" Apr 16 17:49:48.432811 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:48.432670 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3345376c-5e62-45e9-8781-1a5a06db51ea-openshift-service-ca-bundle\") pod \"switch-graph-d60f9-db987dd58-m2b28\" (UID: \"3345376c-5e62-45e9-8781-1a5a06db51ea\") " pod="kserve-ci-e2e-test/switch-graph-d60f9-db987dd58-m2b28" Apr 16 17:49:48.432811 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:49:48.432742 2560 secret.go:189] Couldn't get secret kserve-ci-e2e-test/switch-graph-d60f9-serving-cert: secret "switch-graph-d60f9-serving-cert" not found Apr 16 17:49:48.432937 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:49:48.432813 2560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3345376c-5e62-45e9-8781-1a5a06db51ea-proxy-tls podName:3345376c-5e62-45e9-8781-1a5a06db51ea nodeName:}" failed. No retries permitted until 2026-04-16 17:49:48.932795565 +0000 UTC m=+557.296004415 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/3345376c-5e62-45e9-8781-1a5a06db51ea-proxy-tls") pod "switch-graph-d60f9-db987dd58-m2b28" (UID: "3345376c-5e62-45e9-8781-1a5a06db51ea") : secret "switch-graph-d60f9-serving-cert" not found Apr 16 17:49:48.433299 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:48.433280 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3345376c-5e62-45e9-8781-1a5a06db51ea-openshift-service-ca-bundle\") pod \"switch-graph-d60f9-db987dd58-m2b28\" (UID: \"3345376c-5e62-45e9-8781-1a5a06db51ea\") " pod="kserve-ci-e2e-test/switch-graph-d60f9-db987dd58-m2b28" Apr 16 17:49:48.937095 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:48.937054 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3345376c-5e62-45e9-8781-1a5a06db51ea-proxy-tls\") pod \"switch-graph-d60f9-db987dd58-m2b28\" (UID: \"3345376c-5e62-45e9-8781-1a5a06db51ea\") " pod="kserve-ci-e2e-test/switch-graph-d60f9-db987dd58-m2b28" Apr 16 17:49:48.939535 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:48.939510 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3345376c-5e62-45e9-8781-1a5a06db51ea-proxy-tls\") pod \"switch-graph-d60f9-db987dd58-m2b28\" (UID: \"3345376c-5e62-45e9-8781-1a5a06db51ea\") " pod="kserve-ci-e2e-test/switch-graph-d60f9-db987dd58-m2b28" Apr 16 17:49:49.167449 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:49.167415 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-d60f9-db987dd58-m2b28" Apr 16 17:49:49.297999 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:49.297966 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-d60f9-db987dd58-m2b28"] Apr 16 17:49:49.301399 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:49:49.301368 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3345376c_5e62_45e9_8781_1a5a06db51ea.slice/crio-1c18bc775920d6a100a2c63575973f5b8efddbb863279f9554a48e952edd9a2f WatchSource:0}: Error finding container 1c18bc775920d6a100a2c63575973f5b8efddbb863279f9554a48e952edd9a2f: Status 404 returned error can't find the container with id 1c18bc775920d6a100a2c63575973f5b8efddbb863279f9554a48e952edd9a2f Apr 16 17:49:49.941966 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:49.941924 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-d60f9-db987dd58-m2b28" event={"ID":"3345376c-5e62-45e9-8781-1a5a06db51ea","Type":"ContainerStarted","Data":"1c18bc775920d6a100a2c63575973f5b8efddbb863279f9554a48e952edd9a2f"} Apr 16 17:49:51.949163 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:51.949122 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-d60f9-db987dd58-m2b28" event={"ID":"3345376c-5e62-45e9-8781-1a5a06db51ea","Type":"ContainerStarted","Data":"e249d4f8b2f39db4006a8d6ca02d733f5249be221e5e9169eddbb2a31c57385b"} Apr 16 17:49:51.949614 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:51.949182 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/switch-graph-d60f9-db987dd58-m2b28" Apr 16 17:49:51.967769 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:51.967718 2560 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/switch-graph-d60f9-db987dd58-m2b28" podStartSLOduration=1.417522306 podStartE2EDuration="3.967700729s" podCreationTimestamp="2026-04-16 17:49:48 +0000 UTC" firstStartedPulling="2026-04-16 17:49:49.303153495 +0000 UTC m=+557.666362346" lastFinishedPulling="2026-04-16 17:49:51.853331913 +0000 UTC m=+560.216540769" observedRunningTime="2026-04-16 17:49:51.966874923 +0000 UTC m=+560.330083790" watchObservedRunningTime="2026-04-16 17:49:51.967700729 +0000 UTC m=+560.330909603" Apr 16 17:49:57.958053 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:57.958022 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/switch-graph-d60f9-db987dd58-m2b28" Apr 16 17:49:58.545016 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:58.544977 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-d60f9-db987dd58-m2b28"] Apr 16 17:49:58.545215 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:58.545191 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/switch-graph-d60f9-db987dd58-m2b28" podUID="3345376c-5e62-45e9-8781-1a5a06db51ea" containerName="switch-graph-d60f9" containerID="cri-o://e249d4f8b2f39db4006a8d6ca02d733f5249be221e5e9169eddbb2a31c57385b" gracePeriod=30 Apr 16 17:49:58.644741 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:58.644706 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-d60f9-predictor-bcfc89bfc-f6jf6"] Apr 16 17:49:58.645031 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:58.645003 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-d60f9-predictor-bcfc89bfc-f6jf6" podUID="a4d49867-4354-44b1-ad35-b63e9f5527ef" containerName="kserve-container" containerID="cri-o://6bf2b7e470718a376090571714fa57ac437a580ef90441f946ef17301ed45ea5" gracePeriod=30 Apr 16 17:49:58.729154 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:58.729113 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-d60f9-predictor-54b667d74d-nv5ff"] Apr 16 17:49:58.729425 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:58.729396 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-d60f9-predictor-54b667d74d-nv5ff" podUID="4f2c2967-f958-4eab-aea7-0a669d53a4cc" containerName="kserve-container" containerID="cri-o://d4ce9cf9703142db7a12aca625302c569cb167e0bf498d73596de62312b61028" gracePeriod=30 Apr 16 17:49:58.786806 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:58.786764 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-8e251-predictor-59cb876bdd-dx2p4"] Apr 16 17:49:58.789101 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:58.789076 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-8e251-predictor-59cb876bdd-dx2p4" Apr 16 17:49:58.798873 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:58.798851 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-8e251-predictor-59cb876bdd-dx2p4" Apr 16 17:49:58.808358 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:58.808329 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-8e251-predictor-59cb876bdd-dx2p4"] Apr 16 17:49:58.920739 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:58.920709 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-8e251-predictor-64c47d4b4f-xj88b"] Apr 16 17:49:58.923434 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:58.923410 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-8e251-predictor-64c47d4b4f-xj88b" Apr 16 17:49:58.933633 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:58.933605 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-8e251-predictor-64c47d4b4f-xj88b"] Apr 16 17:49:58.936120 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:58.936096 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-8e251-predictor-64c47d4b4f-xj88b" Apr 16 17:49:58.948642 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:58.948611 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-8e251-predictor-59cb876bdd-dx2p4"] Apr 16 17:49:58.953561 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:49:58.953526 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52d14f4e_61a3_47b0_a0b7_ec356e3847f7.slice/crio-fda824ee328cf721f6a6027a8adf4463367dde0a978c868f535a865a3fceb139 WatchSource:0}: Error finding container fda824ee328cf721f6a6027a8adf4463367dde0a978c868f535a865a3fceb139: Status 404 returned error can't find the container with id fda824ee328cf721f6a6027a8adf4463367dde0a978c868f535a865a3fceb139 Apr 16 17:49:58.974196 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:58.974158 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-8e251-predictor-59cb876bdd-dx2p4" event={"ID":"52d14f4e-61a3-47b0-a0b7-ec356e3847f7","Type":"ContainerStarted","Data":"fda824ee328cf721f6a6027a8adf4463367dde0a978c868f535a865a3fceb139"} Apr 16 17:49:59.110484 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:59.110396 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-8e251-predictor-64c47d4b4f-xj88b"] Apr 16 17:49:59.113878 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:49:59.113824 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1fc29ed_9974_4a56_a378_2f8018da4ee1.slice/crio-24af753457977b49e10a30e2f8e079d3095518f7f36331851c5772c56d31e1fa WatchSource:0}: Error finding container 24af753457977b49e10a30e2f8e079d3095518f7f36331851c5772c56d31e1fa: Status 404 returned error can't find the container with id 24af753457977b49e10a30e2f8e079d3095518f7f36331851c5772c56d31e1fa Apr 16 17:49:59.978748 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:59.978712 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-8e251-predictor-64c47d4b4f-xj88b" event={"ID":"b1fc29ed-9974-4a56-a378-2f8018da4ee1","Type":"ContainerStarted","Data":"b32b37290fa7f481a6164acacfad3c40defac49ca99a8dd9b8fdc727d3c606c9"} Apr 16 17:49:59.978748 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:59.978749 2560 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-8e251-predictor-64c47d4b4f-xj88b" event={"ID":"b1fc29ed-9974-4a56-a378-2f8018da4ee1","Type":"ContainerStarted","Data":"24af753457977b49e10a30e2f8e079d3095518f7f36331851c5772c56d31e1fa"} Apr 16 17:49:59.979291 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:59.978767 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-8e251-predictor-64c47d4b4f-xj88b" Apr 16 17:49:59.980257 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:59.980221 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-8e251-predictor-59cb876bdd-dx2p4" event={"ID":"52d14f4e-61a3-47b0-a0b7-ec356e3847f7","Type":"ContainerStarted","Data":"40ec8a17ceac5aa666bea54c4af0ee133e654f759762371d0f8fbc86276ff1a4"} Apr 16 17:49:59.980377 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:59.980292 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-8e251-predictor-64c47d4b4f-xj88b" podUID="b1fc29ed-9974-4a56-a378-2f8018da4ee1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.31:8080: connect: connection refused" Apr 16 17:49:59.980435 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:59.980412 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-8e251-predictor-59cb876bdd-dx2p4" Apr 16 17:49:59.981447 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:49:59.981427 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-8e251-predictor-59cb876bdd-dx2p4" podUID="52d14f4e-61a3-47b0-a0b7-ec356e3847f7" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.30:8080: connect: connection refused" Apr 16 17:50:00.008053 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:00.008008 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/error-404-isvc-8e251-predictor-64c47d4b4f-xj88b" podStartSLOduration=2.007993159 podStartE2EDuration="2.007993159s" podCreationTimestamp="2026-04-16 17:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:50:00.007902639 +0000 UTC m=+568.371111513" watchObservedRunningTime="2026-04-16 17:50:00.007993159 +0000 UTC m=+568.371202031" Apr 16 17:50:00.033485 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:00.033436 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-8e251-predictor-59cb876bdd-dx2p4" podStartSLOduration=2.033419786 podStartE2EDuration="2.033419786s" podCreationTimestamp="2026-04-16 17:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:50:00.032896951 +0000 UTC m=+568.396105823" watchObservedRunningTime="2026-04-16 17:50:00.033419786 +0000 UTC m=+568.396628660" Apr 16 17:50:00.987359 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:00.987311 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-8e251-predictor-64c47d4b4f-xj88b" podUID="b1fc29ed-9974-4a56-a378-2f8018da4ee1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.31:8080: connect: connection refused" Apr 16 17:50:00.987822 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:00.987441 
2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-8e251-predictor-59cb876bdd-dx2p4" podUID="52d14f4e-61a3-47b0-a0b7-ec356e3847f7" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.30:8080: connect: connection refused" Apr 16 17:50:02.689972 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:02.689945 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-d60f9-predictor-bcfc89bfc-f6jf6" Apr 16 17:50:02.957994 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:02.957951 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-d60f9-db987dd58-m2b28" podUID="3345376c-5e62-45e9-8781-1a5a06db51ea" containerName="switch-graph-d60f9" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:50:02.993084 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:02.993051 2560 generic.go:358] "Generic (PLEG): container finished" podID="4f2c2967-f958-4eab-aea7-0a669d53a4cc" containerID="d4ce9cf9703142db7a12aca625302c569cb167e0bf498d73596de62312b61028" exitCode=0 Apr 16 17:50:02.993271 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:02.993132 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-d60f9-predictor-54b667d74d-nv5ff" event={"ID":"4f2c2967-f958-4eab-aea7-0a669d53a4cc","Type":"ContainerDied","Data":"d4ce9cf9703142db7a12aca625302c569cb167e0bf498d73596de62312b61028"} Apr 16 17:50:02.994276 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:02.994248 2560 generic.go:358] "Generic (PLEG): container finished" podID="a4d49867-4354-44b1-ad35-b63e9f5527ef" containerID="6bf2b7e470718a376090571714fa57ac437a580ef90441f946ef17301ed45ea5" exitCode=0 Apr 16 17:50:02.994433 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:02.994317 2560 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-d60f9-predictor-bcfc89bfc-f6jf6" Apr 16 17:50:02.994433 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:02.994328 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-d60f9-predictor-bcfc89bfc-f6jf6" event={"ID":"a4d49867-4354-44b1-ad35-b63e9f5527ef","Type":"ContainerDied","Data":"6bf2b7e470718a376090571714fa57ac437a580ef90441f946ef17301ed45ea5"} Apr 16 17:50:02.994433 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:02.994372 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-d60f9-predictor-bcfc89bfc-f6jf6" event={"ID":"a4d49867-4354-44b1-ad35-b63e9f5527ef","Type":"ContainerDied","Data":"c2874673ed8760ad534abf4749d47b4f52b926af4a82f0358949ad0e529d43b2"} Apr 16 17:50:02.994433 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:02.994398 2560 scope.go:117] "RemoveContainer" containerID="6bf2b7e470718a376090571714fa57ac437a580ef90441f946ef17301ed45ea5" Apr 16 17:50:03.004503 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:03.004486 2560 scope.go:117] "RemoveContainer" containerID="6bf2b7e470718a376090571714fa57ac437a580ef90441f946ef17301ed45ea5" Apr 16 17:50:03.004822 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:50:03.004796 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bf2b7e470718a376090571714fa57ac437a580ef90441f946ef17301ed45ea5\": container with ID starting with 6bf2b7e470718a376090571714fa57ac437a580ef90441f946ef17301ed45ea5 not found: ID does not exist" containerID="6bf2b7e470718a376090571714fa57ac437a580ef90441f946ef17301ed45ea5" Apr 16 17:50:03.004920 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:03.004849 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bf2b7e470718a376090571714fa57ac437a580ef90441f946ef17301ed45ea5"} err="failed to get container status \"6bf2b7e470718a376090571714fa57ac437a580ef90441f946ef17301ed45ea5\": rpc error: code = NotFound desc = could not find container \"6bf2b7e470718a376090571714fa57ac437a580ef90441f946ef17301ed45ea5\": container with ID starting with 6bf2b7e470718a376090571714fa57ac437a580ef90441f946ef17301ed45ea5 not found: ID does not exist" Apr 16 17:50:03.022129 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:03.022101 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-d60f9-predictor-bcfc89bfc-f6jf6"] Apr 16 17:50:03.027291 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:03.027263 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-d60f9-predictor-bcfc89bfc-f6jf6"] Apr 16 17:50:03.073742 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:03.073720 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-d60f9-predictor-54b667d74d-nv5ff" Apr 16 17:50:03.998340 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:03.998312 2560 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-d60f9-predictor-54b667d74d-nv5ff" Apr 16 17:50:03.998791 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:03.998310 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-d60f9-predictor-54b667d74d-nv5ff" event={"ID":"4f2c2967-f958-4eab-aea7-0a669d53a4cc","Type":"ContainerDied","Data":"dcac9d6f45e936ee23ba364ca531e72bc726579f5a360468c8f76dc25f85c538"} Apr 16 17:50:03.998791 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:03.998443 2560 scope.go:117] "RemoveContainer" containerID="d4ce9cf9703142db7a12aca625302c569cb167e0bf498d73596de62312b61028" Apr 16 17:50:04.025295 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:04.025262 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-d60f9-predictor-54b667d74d-nv5ff"] Apr 16 17:50:04.031200 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:04.031173 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-d60f9-predictor-54b667d74d-nv5ff"] Apr 16 17:50:04.162280 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:04.162247 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f2c2967-f958-4eab-aea7-0a669d53a4cc" path="/var/lib/kubelet/pods/4f2c2967-f958-4eab-aea7-0a669d53a4cc/volumes" Apr 16 17:50:04.162524 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:04.162511 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4d49867-4354-44b1-ad35-b63e9f5527ef" path="/var/lib/kubelet/pods/a4d49867-4354-44b1-ad35-b63e9f5527ef/volumes" Apr 16 17:50:07.957046 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:07.957005 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-d60f9-db987dd58-m2b28" podUID="3345376c-5e62-45e9-8781-1a5a06db51ea" containerName="switch-graph-d60f9" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:50:10.986584 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:10.986532 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-8e251-predictor-59cb876bdd-dx2p4" podUID="52d14f4e-61a3-47b0-a0b7-ec356e3847f7" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.30:8080: connect: connection refused" Apr 16 17:50:10.987019 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:10.986717 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-8e251-predictor-64c47d4b4f-xj88b" podUID="b1fc29ed-9974-4a56-a378-2f8018da4ee1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.31:8080: connect: connection refused" Apr 16 17:50:12.956381 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:12.956339 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-d60f9-db987dd58-m2b28" podUID="3345376c-5e62-45e9-8781-1a5a06db51ea" containerName="switch-graph-d60f9" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:50:12.956762 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:12.956458 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/switch-graph-d60f9-db987dd58-m2b28" Apr 16 17:50:17.956788 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:17.956747 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-d60f9-db987dd58-m2b28" podUID="3345376c-5e62-45e9-8781-1a5a06db51ea" 
containerName="switch-graph-d60f9" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:50:20.986089 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:20.986038 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-8e251-predictor-59cb876bdd-dx2p4" podUID="52d14f4e-61a3-47b0-a0b7-ec356e3847f7" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.30:8080: connect: connection refused" Apr 16 17:50:20.986608 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:20.986576 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-8e251-predictor-64c47d4b4f-xj88b" podUID="b1fc29ed-9974-4a56-a378-2f8018da4ee1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.31:8080: connect: connection refused" Apr 16 17:50:22.957017 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:22.956976 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-d60f9-db987dd58-m2b28" podUID="3345376c-5e62-45e9-8781-1a5a06db51ea" containerName="switch-graph-d60f9" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:50:27.956380 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:27.956344 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-d60f9-db987dd58-m2b28" podUID="3345376c-5e62-45e9-8781-1a5a06db51ea" containerName="switch-graph-d60f9" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:50:29.076108 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:29.076070 2560 generic.go:358] "Generic (PLEG): container finished" podID="3345376c-5e62-45e9-8781-1a5a06db51ea" containerID="e249d4f8b2f39db4006a8d6ca02d733f5249be221e5e9169eddbb2a31c57385b" exitCode=0 Apr 16 17:50:29.076498 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:29.076126 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-d60f9-db987dd58-m2b28" event={"ID":"3345376c-5e62-45e9-8781-1a5a06db51ea","Type":"ContainerDied","Data":"e249d4f8b2f39db4006a8d6ca02d733f5249be221e5e9169eddbb2a31c57385b"} Apr 16 17:50:29.185433 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:29.185405 2560 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-d60f9-db987dd58-m2b28" Apr 16 17:50:29.275698 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:29.275661 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3345376c-5e62-45e9-8781-1a5a06db51ea-proxy-tls\") pod \"3345376c-5e62-45e9-8781-1a5a06db51ea\" (UID: \"3345376c-5e62-45e9-8781-1a5a06db51ea\") " Apr 16 17:50:29.275907 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:29.275779 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3345376c-5e62-45e9-8781-1a5a06db51ea-openshift-service-ca-bundle\") pod \"3345376c-5e62-45e9-8781-1a5a06db51ea\" (UID: \"3345376c-5e62-45e9-8781-1a5a06db51ea\") " Apr 16 17:50:29.276134 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:29.276104 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3345376c-5e62-45e9-8781-1a5a06db51ea-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "3345376c-5e62-45e9-8781-1a5a06db51ea" (UID: "3345376c-5e62-45e9-8781-1a5a06db51ea"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 17:50:29.277951 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:29.277916 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3345376c-5e62-45e9-8781-1a5a06db51ea-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "3345376c-5e62-45e9-8781-1a5a06db51ea" (UID: "3345376c-5e62-45e9-8781-1a5a06db51ea"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 17:50:29.376705 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:29.376607 2560 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3345376c-5e62-45e9-8781-1a5a06db51ea-proxy-tls\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:50:29.376705 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:29.376645 2560 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3345376c-5e62-45e9-8781-1a5a06db51ea-openshift-service-ca-bundle\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:50:30.080316 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:30.080284 2560 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-d60f9-db987dd58-m2b28" Apr 16 17:50:30.080316 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:30.080297 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-d60f9-db987dd58-m2b28" event={"ID":"3345376c-5e62-45e9-8781-1a5a06db51ea","Type":"ContainerDied","Data":"1c18bc775920d6a100a2c63575973f5b8efddbb863279f9554a48e952edd9a2f"} Apr 16 17:50:30.080866 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:30.080345 2560 scope.go:117] "RemoveContainer" containerID="e249d4f8b2f39db4006a8d6ca02d733f5249be221e5e9169eddbb2a31c57385b" Apr 16 17:50:30.106757 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:30.106724 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-d60f9-db987dd58-m2b28"] Apr 16 17:50:30.113095 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:30.113073 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/switch-graph-d60f9-db987dd58-m2b28"] Apr 16 17:50:30.161584 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:30.161554 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3345376c-5e62-45e9-8781-1a5a06db51ea" path="/var/lib/kubelet/pods/3345376c-5e62-45e9-8781-1a5a06db51ea/volumes" Apr 16 17:50:30.986335 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:30.986292 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-8e251-predictor-59cb876bdd-dx2p4" podUID="52d14f4e-61a3-47b0-a0b7-ec356e3847f7" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.30:8080: connect: connection refused" Apr 16 17:50:30.986609 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:30.986584 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-8e251-predictor-64c47d4b4f-xj88b" podUID="b1fc29ed-9974-4a56-a378-2f8018da4ee1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.31:8080: connect: connection refused" Apr 16 17:50:38.273995 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:38.273965 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/model-chainer-884b78d8f-rz4zw"] Apr 16 17:50:38.276331 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:38.274312 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3345376c-5e62-45e9-8781-1a5a06db51ea" containerName="switch-graph-d60f9" Apr 16 17:50:38.276331 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:38.274324 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="3345376c-5e62-45e9-8781-1a5a06db51ea" containerName="switch-graph-d60f9" Apr 16 17:50:38.276331 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:38.274342 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4f2c2967-f958-4eab-aea7-0a669d53a4cc" containerName="kserve-container" Apr 16 17:50:38.276331 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:38.274347 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f2c2967-f958-4eab-aea7-0a669d53a4cc" containerName="kserve-container" Apr 16 17:50:38.276331 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:38.274361 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a4d49867-4354-44b1-ad35-b63e9f5527ef" containerName="kserve-container" Apr 16 17:50:38.276331 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:38.274366 2560 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a4d49867-4354-44b1-ad35-b63e9f5527ef" containerName="kserve-container" Apr 16 17:50:38.276331 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:38.274411 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="3345376c-5e62-45e9-8781-1a5a06db51ea" containerName="switch-graph-d60f9" Apr 16 17:50:38.276331 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:38.274421 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="a4d49867-4354-44b1-ad35-b63e9f5527ef" containerName="kserve-container" Apr 16 17:50:38.276331 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:38.274427 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="4f2c2967-f958-4eab-aea7-0a669d53a4cc" containerName="kserve-container" Apr 16 17:50:38.277210 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:38.277194 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-884b78d8f-rz4zw" Apr 16 17:50:38.279560 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:38.279536 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"model-chainer-serving-cert\"" Apr 16 17:50:38.279663 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:38.279581 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"openshift-service-ca.crt\"" Apr 16 17:50:38.279920 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:38.279904 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"model-chainer-kube-rbac-proxy-sar-config\"" Apr 16 17:50:38.287675 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:38.287654 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/model-chainer-884b78d8f-rz4zw"] Apr 16 17:50:38.454892 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:38.454858 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5da22b6a-8c07-4695-80ec-445d2e8217c5-proxy-tls\") pod \"model-chainer-884b78d8f-rz4zw\" (UID: \"5da22b6a-8c07-4695-80ec-445d2e8217c5\") " pod="kserve-ci-e2e-test/model-chainer-884b78d8f-rz4zw" Apr 16 17:50:38.455055 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:38.454901 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5da22b6a-8c07-4695-80ec-445d2e8217c5-openshift-service-ca-bundle\") pod \"model-chainer-884b78d8f-rz4zw\" (UID: \"5da22b6a-8c07-4695-80ec-445d2e8217c5\") " pod="kserve-ci-e2e-test/model-chainer-884b78d8f-rz4zw" Apr 16 17:50:38.555998 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:38.555964 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5da22b6a-8c07-4695-80ec-445d2e8217c5-proxy-tls\") pod \"model-chainer-884b78d8f-rz4zw\" (UID: \"5da22b6a-8c07-4695-80ec-445d2e8217c5\") " pod="kserve-ci-e2e-test/model-chainer-884b78d8f-rz4zw" Apr 16 17:50:38.556203 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:38.556018 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5da22b6a-8c07-4695-80ec-445d2e8217c5-openshift-service-ca-bundle\") pod \"model-chainer-884b78d8f-rz4zw\" (UID: \"5da22b6a-8c07-4695-80ec-445d2e8217c5\") " 
pod="kserve-ci-e2e-test/model-chainer-884b78d8f-rz4zw" Apr 16 17:50:38.556203 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:50:38.556124 2560 secret.go:189] Couldn't get secret kserve-ci-e2e-test/model-chainer-serving-cert: secret "model-chainer-serving-cert" not found Apr 16 17:50:38.556327 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:50:38.556207 2560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5da22b6a-8c07-4695-80ec-445d2e8217c5-proxy-tls podName:5da22b6a-8c07-4695-80ec-445d2e8217c5 nodeName:}" failed. No retries permitted until 2026-04-16 17:50:39.056189586 +0000 UTC m=+607.419398437 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/5da22b6a-8c07-4695-80ec-445d2e8217c5-proxy-tls") pod "model-chainer-884b78d8f-rz4zw" (UID: "5da22b6a-8c07-4695-80ec-445d2e8217c5") : secret "model-chainer-serving-cert" not found Apr 16 17:50:38.556688 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:38.556662 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5da22b6a-8c07-4695-80ec-445d2e8217c5-openshift-service-ca-bundle\") pod \"model-chainer-884b78d8f-rz4zw\" (UID: \"5da22b6a-8c07-4695-80ec-445d2e8217c5\") " pod="kserve-ci-e2e-test/model-chainer-884b78d8f-rz4zw" Apr 16 17:50:39.061047 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:39.061015 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5da22b6a-8c07-4695-80ec-445d2e8217c5-proxy-tls\") pod \"model-chainer-884b78d8f-rz4zw\" (UID: \"5da22b6a-8c07-4695-80ec-445d2e8217c5\") " pod="kserve-ci-e2e-test/model-chainer-884b78d8f-rz4zw" Apr 16 17:50:39.063386 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:39.063355 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5da22b6a-8c07-4695-80ec-445d2e8217c5-proxy-tls\") pod \"model-chainer-884b78d8f-rz4zw\" (UID: \"5da22b6a-8c07-4695-80ec-445d2e8217c5\") " pod="kserve-ci-e2e-test/model-chainer-884b78d8f-rz4zw" Apr 16 17:50:39.188450 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:39.188419 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-884b78d8f-rz4zw" Apr 16 17:50:39.315517 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:39.315440 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/model-chainer-884b78d8f-rz4zw"] Apr 16 17:50:39.318739 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:50:39.318711 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5da22b6a_8c07_4695_80ec_445d2e8217c5.slice/crio-f7b176395a54f68344cc5d69508297e2c85957c16de931fdd00672310e4ba459 WatchSource:0}: Error finding container f7b176395a54f68344cc5d69508297e2c85957c16de931fdd00672310e4ba459: Status 404 returned error can't find the container with id f7b176395a54f68344cc5d69508297e2c85957c16de931fdd00672310e4ba459 Apr 16 17:50:40.109921 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:40.109877 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-884b78d8f-rz4zw" event={"ID":"5da22b6a-8c07-4695-80ec-445d2e8217c5","Type":"ContainerStarted","Data":"29ae348b89916a1484c9e91f3b98245113d29b4dc1505f2425f43f03789b0173"} Apr 16 17:50:40.109921 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:40.109926 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-884b78d8f-rz4zw" event={"ID":"5da22b6a-8c07-4695-80ec-445d2e8217c5","Type":"ContainerStarted","Data":"f7b176395a54f68344cc5d69508297e2c85957c16de931fdd00672310e4ba459"} Apr 16 17:50:40.110177 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:40.110013 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/model-chainer-884b78d8f-rz4zw" Apr 16 17:50:40.135456 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:40.135386 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/model-chainer-884b78d8f-rz4zw" podStartSLOduration=2.135367109 podStartE2EDuration="2.135367109s" podCreationTimestamp="2026-04-16 17:50:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:50:40.135097532 +0000 UTC m=+608.498306407" watchObservedRunningTime="2026-04-16 17:50:40.135367109 +0000 UTC m=+608.498575982" Apr 16 17:50:40.986020 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:40.985968 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-8e251-predictor-59cb876bdd-dx2p4" podUID="52d14f4e-61a3-47b0-a0b7-ec356e3847f7" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.30:8080: connect: connection refused" Apr 16 17:50:40.986684 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:40.986653 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-8e251-predictor-64c47d4b4f-xj88b" podUID="b1fc29ed-9974-4a56-a378-2f8018da4ee1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.31:8080: connect: connection refused" Apr 16 17:50:46.119449 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:46.119419 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/model-chainer-884b78d8f-rz4zw" Apr 16 17:50:48.357780 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:48.357745 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/model-chainer-884b78d8f-rz4zw"] Apr 16 17:50:48.358176 ip-10-0-134-233 kubenswrapper[2560]: 
I0416 17:50:48.358026 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/model-chainer-884b78d8f-rz4zw" podUID="5da22b6a-8c07-4695-80ec-445d2e8217c5" containerName="model-chainer" containerID="cri-o://29ae348b89916a1484c9e91f3b98245113d29b4dc1505f2425f43f03789b0173" gracePeriod=30 Apr 16 17:50:48.519136 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:48.519103 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-f6449-predictor-67b49d9f68-jvnsv"] Apr 16 17:50:48.522785 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:48.522763 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-f6449-predictor-67b49d9f68-jvnsv" Apr 16 17:50:48.532166 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:48.532137 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-f6449-predictor-67b49d9f68-jvnsv"] Apr 16 17:50:48.532649 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:48.532635 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-f6449-predictor-67b49d9f68-jvnsv" Apr 16 17:50:48.630236 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:48.630193 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-f6449-predictor-5db958dfd5-jktbg"] Apr 16 17:50:48.635625 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:48.635603 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-f6449-predictor-5db958dfd5-jktbg" Apr 16 17:50:48.640342 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:48.640303 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-f6449-predictor-5db958dfd5-jktbg"] Apr 16 17:50:48.647828 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:48.647790 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-f6449-predictor-5db958dfd5-jktbg" Apr 16 17:50:48.679748 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:48.679697 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-f6449-predictor-67b49d9f68-jvnsv"] Apr 16 17:50:48.683267 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:50:48.683217 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9295a82_3f8c_40a9_aacc_33edd8f703e3.slice/crio-ec9b8e8203942708886c1c08f1b82b1d9f0ae2ab3f2fe2ca6d6c28487231d5c1 WatchSource:0}: Error finding container ec9b8e8203942708886c1c08f1b82b1d9f0ae2ab3f2fe2ca6d6c28487231d5c1: Status 404 returned error can't find the container with id ec9b8e8203942708886c1c08f1b82b1d9f0ae2ab3f2fe2ca6d6c28487231d5c1 Apr 16 17:50:48.684997 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:48.684977 2560 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 17:50:48.799466 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:48.799431 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-f6449-predictor-5db958dfd5-jktbg"] Apr 16 17:50:48.802487 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:50:48.802454 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod098054f0_d166_4d20_89b1_6e7fefa70d0a.slice/crio-4599fcb653a2c53840d4775ffeb20fa82927828fe8f50e07c68a20276f1934da WatchSource:0}: Error finding container 4599fcb653a2c53840d4775ffeb20fa82927828fe8f50e07c68a20276f1934da: Status 404 returned error can't find the container with id 4599fcb653a2c53840d4775ffeb20fa82927828fe8f50e07c68a20276f1934da Apr 16 17:50:49.139028 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:49.138990 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-f6449-predictor-5db958dfd5-jktbg" event={"ID":"098054f0-d166-4d20-89b1-6e7fefa70d0a","Type":"ContainerStarted","Data":"e98b2abb0d2ad34f429a645146ef50170f26acc1a117b6bee0bbe64a9b5b2714"} Apr 16 17:50:49.139028 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:49.139029 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-f6449-predictor-5db958dfd5-jktbg" event={"ID":"098054f0-d166-4d20-89b1-6e7fefa70d0a","Type":"ContainerStarted","Data":"4599fcb653a2c53840d4775ffeb20fa82927828fe8f50e07c68a20276f1934da"} Apr 16 17:50:49.139253 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:49.139190 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-f6449-predictor-5db958dfd5-jktbg" Apr 16 17:50:49.140338 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:49.140304 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-f6449-predictor-67b49d9f68-jvnsv" event={"ID":"b9295a82-3f8c-40a9-aacc-33edd8f703e3","Type":"ContainerStarted","Data":"4977539e1e0c8c6220e7220d6c60bb5360947bed7b7f31657e74993f3cfb7274"} Apr 16 17:50:49.140338 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:49.140336 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-f6449-predictor-67b49d9f68-jvnsv" event={"ID":"b9295a82-3f8c-40a9-aacc-33edd8f703e3","Type":"ContainerStarted","Data":"ec9b8e8203942708886c1c08f1b82b1d9f0ae2ab3f2fe2ca6d6c28487231d5c1"} Apr 16 17:50:49.140523 ip-10-0-134-233 
kubenswrapper[2560]: I0416 17:50:49.140509 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-f6449-predictor-67b49d9f68-jvnsv" Apr 16 17:50:49.140872 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:49.140820 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-f6449-predictor-5db958dfd5-jktbg" podUID="098054f0-d166-4d20-89b1-6e7fefa70d0a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.34:8080: connect: connection refused" Apr 16 17:50:49.141402 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:49.141381 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-f6449-predictor-67b49d9f68-jvnsv" podUID="b9295a82-3f8c-40a9-aacc-33edd8f703e3" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.33:8080: connect: connection refused" Apr 16 17:50:49.156430 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:49.156389 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/error-404-isvc-f6449-predictor-5db958dfd5-jktbg" podStartSLOduration=1.156376337 podStartE2EDuration="1.156376337s" podCreationTimestamp="2026-04-16 17:50:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:50:49.155546524 +0000 UTC m=+617.518755394" watchObservedRunningTime="2026-04-16 17:50:49.156376337 +0000 UTC m=+617.519585188" Apr 16 17:50:49.172221 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:49.172137 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-f6449-predictor-67b49d9f68-jvnsv" podStartSLOduration=1.172124488 podStartE2EDuration="1.172124488s" podCreationTimestamp="2026-04-16 17:50:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:50:49.171145236 +0000 UTC m=+617.534354109" watchObservedRunningTime="2026-04-16 17:50:49.172124488 +0000 UTC m=+617.535333361" Apr 16 17:50:50.144059 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:50.144023 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-f6449-predictor-67b49d9f68-jvnsv" podUID="b9295a82-3f8c-40a9-aacc-33edd8f703e3" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.33:8080: connect: connection refused" Apr 16 17:50:50.144423 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:50.144033 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-f6449-predictor-5db958dfd5-jktbg" podUID="098054f0-d166-4d20-89b1-6e7fefa70d0a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.34:8080: connect: connection refused" Apr 16 17:50:50.987034 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:50.987005 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-8e251-predictor-59cb876bdd-dx2p4" Apr 16 17:50:50.988247 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:50.988221 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-8e251-predictor-64c47d4b4f-xj88b" Apr 16 17:50:51.118291 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:51.118253 2560 prober.go:120] "Probe failed" probeType="Readiness" 
pod="kserve-ci-e2e-test/model-chainer-884b78d8f-rz4zw" podUID="5da22b6a-8c07-4695-80ec-445d2e8217c5" containerName="model-chainer" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:50:56.118223 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:50:56.118187 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-884b78d8f-rz4zw" podUID="5da22b6a-8c07-4695-80ec-445d2e8217c5" containerName="model-chainer" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:51:00.144907 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:00.144859 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-f6449-predictor-67b49d9f68-jvnsv" podUID="b9295a82-3f8c-40a9-aacc-33edd8f703e3" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.33:8080: connect: connection refused" Apr 16 17:51:00.145309 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:00.144861 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-f6449-predictor-5db958dfd5-jktbg" podUID="098054f0-d166-4d20-89b1-6e7fefa70d0a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.34:8080: connect: connection refused" Apr 16 17:51:01.118386 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:01.118341 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-884b78d8f-rz4zw" podUID="5da22b6a-8c07-4695-80ec-445d2e8217c5" containerName="model-chainer" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:51:01.118560 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:01.118440 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/model-chainer-884b78d8f-rz4zw" Apr 16 17:51:06.118629 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:06.118591 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-884b78d8f-rz4zw" podUID="5da22b6a-8c07-4695-80ec-445d2e8217c5" containerName="model-chainer" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:51:08.825982 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:08.825948 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/switch-graph-8e251-5896fcfc4c-jr8mf"] Apr 16 17:51:08.829279 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:08.829263 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-8e251-5896fcfc4c-jr8mf" Apr 16 17:51:08.831391 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:08.831361 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-8e251-kube-rbac-proxy-sar-config\"" Apr 16 17:51:08.831391 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:08.831371 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-8e251-serving-cert\"" Apr 16 17:51:08.838620 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:08.838599 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-8e251-5896fcfc4c-jr8mf"] Apr 16 17:51:08.916984 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:08.916950 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/070fed0c-aefb-4bc3-8855-8e159dd1164d-openshift-service-ca-bundle\") pod \"switch-graph-8e251-5896fcfc4c-jr8mf\" (UID: \"070fed0c-aefb-4bc3-8855-8e159dd1164d\") " pod="kserve-ci-e2e-test/switch-graph-8e251-5896fcfc4c-jr8mf" Apr 16 17:51:08.917141 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:08.917013 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/070fed0c-aefb-4bc3-8855-8e159dd1164d-proxy-tls\") pod \"switch-graph-8e251-5896fcfc4c-jr8mf\" (UID: \"070fed0c-aefb-4bc3-8855-8e159dd1164d\") " pod="kserve-ci-e2e-test/switch-graph-8e251-5896fcfc4c-jr8mf" Apr 16 17:51:09.018092 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:09.018059 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/070fed0c-aefb-4bc3-8855-8e159dd1164d-openshift-service-ca-bundle\") pod \"switch-graph-8e251-5896fcfc4c-jr8mf\" (UID: \"070fed0c-aefb-4bc3-8855-8e159dd1164d\") " pod="kserve-ci-e2e-test/switch-graph-8e251-5896fcfc4c-jr8mf" Apr 16 17:51:09.018271 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:09.018165 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/070fed0c-aefb-4bc3-8855-8e159dd1164d-proxy-tls\") pod \"switch-graph-8e251-5896fcfc4c-jr8mf\" (UID: \"070fed0c-aefb-4bc3-8855-8e159dd1164d\") " pod="kserve-ci-e2e-test/switch-graph-8e251-5896fcfc4c-jr8mf" Apr 16 17:51:09.018361 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:51:09.018335 2560 secret.go:189] Couldn't get secret kserve-ci-e2e-test/switch-graph-8e251-serving-cert: secret "switch-graph-8e251-serving-cert" not found Apr 16 17:51:09.018435 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:51:09.018424 2560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/070fed0c-aefb-4bc3-8855-8e159dd1164d-proxy-tls podName:070fed0c-aefb-4bc3-8855-8e159dd1164d nodeName:}" failed. No retries permitted until 2026-04-16 17:51:09.518399705 +0000 UTC m=+637.881608575 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/070fed0c-aefb-4bc3-8855-8e159dd1164d-proxy-tls") pod "switch-graph-8e251-5896fcfc4c-jr8mf" (UID: "070fed0c-aefb-4bc3-8855-8e159dd1164d") : secret "switch-graph-8e251-serving-cert" not found Apr 16 17:51:09.018692 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:09.018671 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/070fed0c-aefb-4bc3-8855-8e159dd1164d-openshift-service-ca-bundle\") pod \"switch-graph-8e251-5896fcfc4c-jr8mf\" (UID: \"070fed0c-aefb-4bc3-8855-8e159dd1164d\") " pod="kserve-ci-e2e-test/switch-graph-8e251-5896fcfc4c-jr8mf" Apr 16 17:51:09.523031 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:09.522999 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/070fed0c-aefb-4bc3-8855-8e159dd1164d-proxy-tls\") pod \"switch-graph-8e251-5896fcfc4c-jr8mf\" (UID: \"070fed0c-aefb-4bc3-8855-8e159dd1164d\") " pod="kserve-ci-e2e-test/switch-graph-8e251-5896fcfc4c-jr8mf" Apr 16 17:51:09.523223 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:51:09.523142 2560 secret.go:189] Couldn't get secret kserve-ci-e2e-test/switch-graph-8e251-serving-cert: secret "switch-graph-8e251-serving-cert" not found Apr 16 17:51:09.523223 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:51:09.523203 2560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/070fed0c-aefb-4bc3-8855-8e159dd1164d-proxy-tls podName:070fed0c-aefb-4bc3-8855-8e159dd1164d nodeName:}" failed. No retries permitted until 2026-04-16 17:51:10.523188509 +0000 UTC m=+638.886397361 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/070fed0c-aefb-4bc3-8855-8e159dd1164d-proxy-tls") pod "switch-graph-8e251-5896fcfc4c-jr8mf" (UID: "070fed0c-aefb-4bc3-8855-8e159dd1164d") : secret "switch-graph-8e251-serving-cert" not found Apr 16 17:51:10.145022 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:10.144980 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-f6449-predictor-67b49d9f68-jvnsv" podUID="b9295a82-3f8c-40a9-aacc-33edd8f703e3" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.33:8080: connect: connection refused" Apr 16 17:51:10.145437 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:10.144982 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-f6449-predictor-5db958dfd5-jktbg" podUID="098054f0-d166-4d20-89b1-6e7fefa70d0a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.34:8080: connect: connection refused" Apr 16 17:51:10.530800 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:10.530766 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/070fed0c-aefb-4bc3-8855-8e159dd1164d-proxy-tls\") pod \"switch-graph-8e251-5896fcfc4c-jr8mf\" (UID: \"070fed0c-aefb-4bc3-8855-8e159dd1164d\") " pod="kserve-ci-e2e-test/switch-graph-8e251-5896fcfc4c-jr8mf" Apr 16 17:51:10.533109 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:10.533088 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/070fed0c-aefb-4bc3-8855-8e159dd1164d-proxy-tls\") pod \"switch-graph-8e251-5896fcfc4c-jr8mf\" (UID: 
\"070fed0c-aefb-4bc3-8855-8e159dd1164d\") " pod="kserve-ci-e2e-test/switch-graph-8e251-5896fcfc4c-jr8mf" Apr 16 17:51:10.641102 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:10.641071 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-8e251-5896fcfc4c-jr8mf" Apr 16 17:51:10.766364 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:10.766338 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-8e251-5896fcfc4c-jr8mf"] Apr 16 17:51:10.768399 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:51:10.768366 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod070fed0c_aefb_4bc3_8855_8e159dd1164d.slice/crio-a91ebe161bec4d7fae09405bbc6c75af21ca81bc8c8bd41b5cfb4d635a570e3b WatchSource:0}: Error finding container a91ebe161bec4d7fae09405bbc6c75af21ca81bc8c8bd41b5cfb4d635a570e3b: Status 404 returned error can't find the container with id a91ebe161bec4d7fae09405bbc6c75af21ca81bc8c8bd41b5cfb4d635a570e3b Apr 16 17:51:11.118718 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:11.118628 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-884b78d8f-rz4zw" podUID="5da22b6a-8c07-4695-80ec-445d2e8217c5" containerName="model-chainer" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:51:11.207827 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:11.207792 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-8e251-5896fcfc4c-jr8mf" event={"ID":"070fed0c-aefb-4bc3-8855-8e159dd1164d","Type":"ContainerStarted","Data":"dcdb2ae7ac1bf319e90280ac80021052b6ddf8fd8af61fff133263bbe9af625d"} Apr 16 17:51:11.207827 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:11.207827 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-8e251-5896fcfc4c-jr8mf" event={"ID":"070fed0c-aefb-4bc3-8855-8e159dd1164d","Type":"ContainerStarted","Data":"a91ebe161bec4d7fae09405bbc6c75af21ca81bc8c8bd41b5cfb4d635a570e3b"} Apr 16 17:51:11.208337 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:11.207888 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/switch-graph-8e251-5896fcfc4c-jr8mf" Apr 16 17:51:11.228550 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:11.228502 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/switch-graph-8e251-5896fcfc4c-jr8mf" podStartSLOduration=3.228487542 podStartE2EDuration="3.228487542s" podCreationTimestamp="2026-04-16 17:51:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:51:11.226555568 +0000 UTC m=+639.589764441" watchObservedRunningTime="2026-04-16 17:51:11.228487542 +0000 UTC m=+639.591696416" Apr 16 17:51:16.118647 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:16.118607 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-884b78d8f-rz4zw" podUID="5da22b6a-8c07-4695-80ec-445d2e8217c5" containerName="model-chainer" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:51:17.220275 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:17.220248 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/switch-graph-8e251-5896fcfc4c-jr8mf" Apr 16 17:51:18.501742 ip-10-0-134-233 
kubenswrapper[2560]: I0416 17:51:18.501716 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-884b78d8f-rz4zw" Apr 16 17:51:18.603753 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:18.603724 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5da22b6a-8c07-4695-80ec-445d2e8217c5-proxy-tls\") pod \"5da22b6a-8c07-4695-80ec-445d2e8217c5\" (UID: \"5da22b6a-8c07-4695-80ec-445d2e8217c5\") " Apr 16 17:51:18.603918 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:18.603784 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5da22b6a-8c07-4695-80ec-445d2e8217c5-openshift-service-ca-bundle\") pod \"5da22b6a-8c07-4695-80ec-445d2e8217c5\" (UID: \"5da22b6a-8c07-4695-80ec-445d2e8217c5\") " Apr 16 17:51:18.604114 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:18.604087 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5da22b6a-8c07-4695-80ec-445d2e8217c5-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "5da22b6a-8c07-4695-80ec-445d2e8217c5" (UID: "5da22b6a-8c07-4695-80ec-445d2e8217c5"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 17:51:18.605824 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:18.605801 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5da22b6a-8c07-4695-80ec-445d2e8217c5-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "5da22b6a-8c07-4695-80ec-445d2e8217c5" (UID: "5da22b6a-8c07-4695-80ec-445d2e8217c5"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 17:51:18.705333 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:18.705257 2560 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5da22b6a-8c07-4695-80ec-445d2e8217c5-openshift-service-ca-bundle\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:51:18.705333 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:18.705284 2560 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5da22b6a-8c07-4695-80ec-445d2e8217c5-proxy-tls\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:51:19.236933 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:19.236897 2560 generic.go:358] "Generic (PLEG): container finished" podID="5da22b6a-8c07-4695-80ec-445d2e8217c5" containerID="29ae348b89916a1484c9e91f3b98245113d29b4dc1505f2425f43f03789b0173" exitCode=0 Apr 16 17:51:19.237125 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:19.236965 2560 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-884b78d8f-rz4zw" Apr 16 17:51:19.237125 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:19.236964 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-884b78d8f-rz4zw" event={"ID":"5da22b6a-8c07-4695-80ec-445d2e8217c5","Type":"ContainerDied","Data":"29ae348b89916a1484c9e91f3b98245113d29b4dc1505f2425f43f03789b0173"} Apr 16 17:51:19.237125 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:19.236999 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-884b78d8f-rz4zw" event={"ID":"5da22b6a-8c07-4695-80ec-445d2e8217c5","Type":"ContainerDied","Data":"f7b176395a54f68344cc5d69508297e2c85957c16de931fdd00672310e4ba459"} Apr 16 17:51:19.237125 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:19.237018 2560 scope.go:117] "RemoveContainer" containerID="29ae348b89916a1484c9e91f3b98245113d29b4dc1505f2425f43f03789b0173" Apr 16 17:51:19.246052 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:19.246033 2560 scope.go:117] "RemoveContainer" containerID="29ae348b89916a1484c9e91f3b98245113d29b4dc1505f2425f43f03789b0173" Apr 16 17:51:19.246342 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:51:19.246315 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29ae348b89916a1484c9e91f3b98245113d29b4dc1505f2425f43f03789b0173\": container with ID starting with 29ae348b89916a1484c9e91f3b98245113d29b4dc1505f2425f43f03789b0173 not found: ID does not exist" containerID="29ae348b89916a1484c9e91f3b98245113d29b4dc1505f2425f43f03789b0173" Apr 16 17:51:19.246425 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:19.246355 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29ae348b89916a1484c9e91f3b98245113d29b4dc1505f2425f43f03789b0173"} err="failed to get container status \"29ae348b89916a1484c9e91f3b98245113d29b4dc1505f2425f43f03789b0173\": rpc error: code = NotFound desc = could not find container \"29ae348b89916a1484c9e91f3b98245113d29b4dc1505f2425f43f03789b0173\": container with ID starting with 29ae348b89916a1484c9e91f3b98245113d29b4dc1505f2425f43f03789b0173 not found: ID does not exist" Apr 16 17:51:19.261705 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:19.261677 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/model-chainer-884b78d8f-rz4zw"] Apr 16 17:51:19.266545 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:19.266522 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/model-chainer-884b78d8f-rz4zw"] Apr 16 17:51:20.144201 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:20.144159 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-f6449-predictor-5db958dfd5-jktbg" podUID="098054f0-d166-4d20-89b1-6e7fefa70d0a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.34:8080: connect: connection refused" Apr 16 17:51:20.144585 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:20.144159 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-f6449-predictor-67b49d9f68-jvnsv" podUID="b9295a82-3f8c-40a9-aacc-33edd8f703e3" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.33:8080: connect: connection refused" Apr 16 17:51:20.161286 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:20.161256 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5da22b6a-8c07-4695-80ec-445d2e8217c5" path="/var/lib/kubelet/pods/5da22b6a-8c07-4695-80ec-445d2e8217c5/volumes" Apr 16 17:51:30.144353 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:30.144300 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-f6449-predictor-5db958dfd5-jktbg" podUID="098054f0-d166-4d20-89b1-6e7fefa70d0a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.34:8080: connect: connection refused" Apr 16 17:51:30.144353 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:30.144300 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-f6449-predictor-67b49d9f68-jvnsv" podUID="b9295a82-3f8c-40a9-aacc-33edd8f703e3" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.33:8080: connect: connection refused" Apr 16 17:51:40.145030 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:40.144995 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-f6449-predictor-67b49d9f68-jvnsv" Apr 16 17:51:40.145481 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:40.145059 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-f6449-predictor-5db958dfd5-jktbg" Apr 16 17:51:58.602623 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:58.602591 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/sequence-graph-f6449-6ff9b5b46f-jl9sk"] Apr 16 17:51:58.603093 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:58.603006 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5da22b6a-8c07-4695-80ec-445d2e8217c5" containerName="model-chainer" Apr 16 17:51:58.603093 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:58.603019 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="5da22b6a-8c07-4695-80ec-445d2e8217c5" containerName="model-chainer" Apr 16 17:51:58.603093 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:58.603092 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="5da22b6a-8c07-4695-80ec-445d2e8217c5" containerName="model-chainer" Apr 16 17:51:58.606080 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:58.606064 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-f6449-6ff9b5b46f-jl9sk" Apr 16 17:51:58.608850 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:58.608814 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-f6449-serving-cert\"" Apr 16 17:51:58.608978 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:58.608819 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-f6449-kube-rbac-proxy-sar-config\"" Apr 16 17:51:58.618058 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:58.618035 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-f6449-6ff9b5b46f-jl9sk"] Apr 16 17:51:58.754953 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:58.754919 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70430b09-ce04-415c-a040-3fab5b7e4673-openshift-service-ca-bundle\") pod \"sequence-graph-f6449-6ff9b5b46f-jl9sk\" (UID: \"70430b09-ce04-415c-a040-3fab5b7e4673\") " pod="kserve-ci-e2e-test/sequence-graph-f6449-6ff9b5b46f-jl9sk" Apr 16 17:51:58.755140 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:58.754971 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/70430b09-ce04-415c-a040-3fab5b7e4673-proxy-tls\") pod \"sequence-graph-f6449-6ff9b5b46f-jl9sk\" (UID: \"70430b09-ce04-415c-a040-3fab5b7e4673\") " pod="kserve-ci-e2e-test/sequence-graph-f6449-6ff9b5b46f-jl9sk" Apr 16 17:51:58.856050 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:58.855963 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70430b09-ce04-415c-a040-3fab5b7e4673-openshift-service-ca-bundle\") pod \"sequence-graph-f6449-6ff9b5b46f-jl9sk\" (UID: \"70430b09-ce04-415c-a040-3fab5b7e4673\") " pod="kserve-ci-e2e-test/sequence-graph-f6449-6ff9b5b46f-jl9sk" Apr 16 17:51:58.856050 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:58.856025 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/70430b09-ce04-415c-a040-3fab5b7e4673-proxy-tls\") pod \"sequence-graph-f6449-6ff9b5b46f-jl9sk\" (UID: \"70430b09-ce04-415c-a040-3fab5b7e4673\") " pod="kserve-ci-e2e-test/sequence-graph-f6449-6ff9b5b46f-jl9sk" Apr 16 17:51:58.856721 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:58.856701 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70430b09-ce04-415c-a040-3fab5b7e4673-openshift-service-ca-bundle\") pod \"sequence-graph-f6449-6ff9b5b46f-jl9sk\" (UID: \"70430b09-ce04-415c-a040-3fab5b7e4673\") " pod="kserve-ci-e2e-test/sequence-graph-f6449-6ff9b5b46f-jl9sk" Apr 16 17:51:58.858360 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:58.858341 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/70430b09-ce04-415c-a040-3fab5b7e4673-proxy-tls\") pod \"sequence-graph-f6449-6ff9b5b46f-jl9sk\" (UID: \"70430b09-ce04-415c-a040-3fab5b7e4673\") " pod="kserve-ci-e2e-test/sequence-graph-f6449-6ff9b5b46f-jl9sk" Apr 16 17:51:58.917109 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:58.917082 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-f6449-6ff9b5b46f-jl9sk" Apr 16 17:51:59.043478 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:59.043358 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-f6449-6ff9b5b46f-jl9sk"] Apr 16 17:51:59.046262 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:51:59.046240 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70430b09_ce04_415c_a040_3fab5b7e4673.slice/crio-130a82b3e348c329417d30cee7ea62166cbb149886e03b010ba7a1e38cedd118 WatchSource:0}: Error finding container 130a82b3e348c329417d30cee7ea62166cbb149886e03b010ba7a1e38cedd118: Status 404 returned error can't find the container with id 130a82b3e348c329417d30cee7ea62166cbb149886e03b010ba7a1e38cedd118 Apr 16 17:51:59.358018 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:59.357980 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-f6449-6ff9b5b46f-jl9sk" event={"ID":"70430b09-ce04-415c-a040-3fab5b7e4673","Type":"ContainerStarted","Data":"72f888f5a69f9ac9db6a46ec63987c8f3ec1111cfbaef1ecafb88740c7e58fad"} Apr 16 17:51:59.358018 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:59.358016 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-f6449-6ff9b5b46f-jl9sk" event={"ID":"70430b09-ce04-415c-a040-3fab5b7e4673","Type":"ContainerStarted","Data":"130a82b3e348c329417d30cee7ea62166cbb149886e03b010ba7a1e38cedd118"} Apr 16 17:51:59.358210 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:59.358097 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-f6449-6ff9b5b46f-jl9sk" Apr 16 17:51:59.377673 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:51:59.377588 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/sequence-graph-f6449-6ff9b5b46f-jl9sk" podStartSLOduration=1.377576642 podStartE2EDuration="1.377576642s" podCreationTimestamp="2026-04-16 17:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:51:59.377035935 +0000 UTC m=+687.740244806" watchObservedRunningTime="2026-04-16 17:51:59.377576642 +0000 UTC m=+687.740785514" Apr 16 17:52:05.367195 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:52:05.367167 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/sequence-graph-f6449-6ff9b5b46f-jl9sk" Apr 16 17:59:23.635195 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:23.635162 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-8e251-5896fcfc4c-jr8mf"] Apr 16 17:59:23.635671 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:23.635396 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/switch-graph-8e251-5896fcfc4c-jr8mf" podUID="070fed0c-aefb-4bc3-8855-8e159dd1164d" containerName="switch-graph-8e251" containerID="cri-o://dcdb2ae7ac1bf319e90280ac80021052b6ddf8fd8af61fff133263bbe9af625d" gracePeriod=30 Apr 16 17:59:23.817562 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:23.817529 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-8e251-predictor-59cb876bdd-dx2p4"] Apr 16 17:59:23.817799 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:23.817754 2560 kuberuntime_container.go:864] "Killing container with a 
grace period" pod="kserve-ci-e2e-test/success-200-isvc-8e251-predictor-59cb876bdd-dx2p4" podUID="52d14f4e-61a3-47b0-a0b7-ec356e3847f7" containerName="kserve-container" containerID="cri-o://40ec8a17ceac5aa666bea54c4af0ee133e654f759762371d0f8fbc86276ff1a4" gracePeriod=30 Apr 16 17:59:23.977763 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:23.977684 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-8e251-predictor-64c47d4b4f-xj88b"] Apr 16 17:59:23.977998 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:23.977952 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-8e251-predictor-64c47d4b4f-xj88b" podUID="b1fc29ed-9974-4a56-a378-2f8018da4ee1" containerName="kserve-container" containerID="cri-o://b32b37290fa7f481a6164acacfad3c40defac49ca99a8dd9b8fdc727d3c606c9" gracePeriod=30 Apr 16 17:59:24.056068 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:24.056038 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-8493d-predictor-77b4959486-7d762"] Apr 16 17:59:24.059820 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:24.059799 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-8493d-predictor-77b4959486-7d762" Apr 16 17:59:24.069566 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:24.069549 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-8493d-predictor-77b4959486-7d762" Apr 16 17:59:24.097399 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:24.097370 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-8493d-predictor-77b4959486-7d762"] Apr 16 17:59:24.157861 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:24.157497 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-8493d-predictor-788f857f9f-wlsz5"] Apr 16 17:59:24.161908 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:24.161878 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-8493d-predictor-788f857f9f-wlsz5" Apr 16 17:59:24.173783 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:24.173449 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-8493d-predictor-788f857f9f-wlsz5" Apr 16 17:59:24.216229 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:24.216191 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-8493d-predictor-77b4959486-7d762"] Apr 16 17:59:24.222191 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:59:24.222162 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fd58d8c_86f2_440a_bb41_f68159eed96b.slice/crio-7afa11cb4e7bb0a9dc17652f80d87d5b30a09ae53bb0078d1d4805556503de80 WatchSource:0}: Error finding container 7afa11cb4e7bb0a9dc17652f80d87d5b30a09ae53bb0078d1d4805556503de80: Status 404 returned error can't find the container with id 7afa11cb4e7bb0a9dc17652f80d87d5b30a09ae53bb0078d1d4805556503de80 Apr 16 17:59:24.224337 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:24.224319 2560 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 17:59:24.270975 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:24.270950 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-8493d-predictor-788f857f9f-wlsz5"] Apr 16 17:59:24.514236 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:24.514149 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-8493d-predictor-788f857f9f-wlsz5"] Apr 16 17:59:24.517774 ip-10-0-134-233 kubenswrapper[2560]: W0416 17:59:24.517741 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13a91386_926e_494b_8b84_b5f1685c0ef7.slice/crio-1ce9d76d10ae98548a41a6408d7fcae5966368c7c5871a784f9f74fb3eae8375 WatchSource:0}: Error finding container 1ce9d76d10ae98548a41a6408d7fcae5966368c7c5871a784f9f74fb3eae8375: Status 404 returned error can't find the container with id 1ce9d76d10ae98548a41a6408d7fcae5966368c7c5871a784f9f74fb3eae8375 Apr 16 17:59:24.805169 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:24.805136 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-8493d-predictor-788f857f9f-wlsz5" event={"ID":"13a91386-926e-494b-8b84-b5f1685c0ef7","Type":"ContainerStarted","Data":"0551083636c03a7d3511d979ddecba7fd5469c251b815c4f49be1fae42ce1fe3"} Apr 16 17:59:24.805169 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:24.805173 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-8493d-predictor-788f857f9f-wlsz5" event={"ID":"13a91386-926e-494b-8b84-b5f1685c0ef7","Type":"ContainerStarted","Data":"1ce9d76d10ae98548a41a6408d7fcae5966368c7c5871a784f9f74fb3eae8375"} Apr 16 17:59:24.805672 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:24.805328 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-8493d-predictor-788f857f9f-wlsz5" Apr 16 17:59:24.806504 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:24.806481 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-8493d-predictor-77b4959486-7d762" event={"ID":"8fd58d8c-86f2-440a-bb41-f68159eed96b","Type":"ContainerStarted","Data":"2d40130c557ae82db18eb6efebabb9facc3be242941bb9c8ceb20496409837dd"} Apr 16 17:59:24.806504 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:24.806506 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="kserve-ci-e2e-test/success-200-isvc-8493d-predictor-77b4959486-7d762" event={"ID":"8fd58d8c-86f2-440a-bb41-f68159eed96b","Type":"ContainerStarted","Data":"7afa11cb4e7bb0a9dc17652f80d87d5b30a09ae53bb0078d1d4805556503de80"} Apr 16 17:59:24.806690 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:24.806662 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-8493d-predictor-788f857f9f-wlsz5" podUID="13a91386-926e-494b-8b84-b5f1685c0ef7" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.38:8080: connect: connection refused" Apr 16 17:59:24.806742 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:24.806688 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-8493d-predictor-77b4959486-7d762" Apr 16 17:59:24.807870 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:24.807812 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-8493d-predictor-77b4959486-7d762" podUID="8fd58d8c-86f2-440a-bb41-f68159eed96b" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.37:8080: connect: connection refused" Apr 16 17:59:24.837290 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:24.837234 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/error-404-isvc-8493d-predictor-788f857f9f-wlsz5" podStartSLOduration=0.837216681 podStartE2EDuration="837.216681ms" podCreationTimestamp="2026-04-16 17:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:59:24.836744949 +0000 UTC m=+1133.199953823" watchObservedRunningTime="2026-04-16 17:59:24.837216681 +0000 UTC m=+1133.200425556" Apr 16 17:59:24.873947 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:24.873906 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-8493d-predictor-77b4959486-7d762" podStartSLOduration=0.873891994 podStartE2EDuration="873.891994ms" podCreationTimestamp="2026-04-16 17:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:59:24.873120024 +0000 UTC m=+1133.236328899" watchObservedRunningTime="2026-04-16 17:59:24.873891994 +0000 UTC m=+1133.237100867" Apr 16 17:59:25.816125 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:25.816077 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-8493d-predictor-77b4959486-7d762" podUID="8fd58d8c-86f2-440a-bb41-f68159eed96b" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.37:8080: connect: connection refused" Apr 16 17:59:25.816573 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:25.816473 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-8493d-predictor-788f857f9f-wlsz5" podUID="13a91386-926e-494b-8b84-b5f1685c0ef7" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.38:8080: connect: connection refused" Apr 16 17:59:27.089376 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:27.089356 2560 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-8e251-predictor-59cb876bdd-dx2p4" Apr 16 17:59:27.138260 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:27.138240 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-8e251-predictor-64c47d4b4f-xj88b" Apr 16 17:59:27.218879 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:27.218784 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-8e251-5896fcfc4c-jr8mf" podUID="070fed0c-aefb-4bc3-8855-8e159dd1164d" containerName="switch-graph-8e251" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:59:27.823460 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:27.823422 2560 generic.go:358] "Generic (PLEG): container finished" podID="52d14f4e-61a3-47b0-a0b7-ec356e3847f7" containerID="40ec8a17ceac5aa666bea54c4af0ee133e654f759762371d0f8fbc86276ff1a4" exitCode=0 Apr 16 17:59:27.823610 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:27.823484 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-8e251-predictor-59cb876bdd-dx2p4" Apr 16 17:59:27.823610 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:27.823503 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-8e251-predictor-59cb876bdd-dx2p4" event={"ID":"52d14f4e-61a3-47b0-a0b7-ec356e3847f7","Type":"ContainerDied","Data":"40ec8a17ceac5aa666bea54c4af0ee133e654f759762371d0f8fbc86276ff1a4"} Apr 16 17:59:27.823610 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:27.823538 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-8e251-predictor-59cb876bdd-dx2p4" event={"ID":"52d14f4e-61a3-47b0-a0b7-ec356e3847f7","Type":"ContainerDied","Data":"fda824ee328cf721f6a6027a8adf4463367dde0a978c868f535a865a3fceb139"} Apr 16 17:59:27.823610 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:27.823557 2560 scope.go:117] "RemoveContainer" containerID="40ec8a17ceac5aa666bea54c4af0ee133e654f759762371d0f8fbc86276ff1a4" Apr 16 17:59:27.824650 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:27.824629 2560 generic.go:358] "Generic (PLEG): container finished" podID="b1fc29ed-9974-4a56-a378-2f8018da4ee1" containerID="b32b37290fa7f481a6164acacfad3c40defac49ca99a8dd9b8fdc727d3c606c9" exitCode=0 Apr 16 17:59:27.824730 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:27.824686 2560 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-8e251-predictor-64c47d4b4f-xj88b" Apr 16 17:59:27.824730 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:27.824698 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-8e251-predictor-64c47d4b4f-xj88b" event={"ID":"b1fc29ed-9974-4a56-a378-2f8018da4ee1","Type":"ContainerDied","Data":"b32b37290fa7f481a6164acacfad3c40defac49ca99a8dd9b8fdc727d3c606c9"} Apr 16 17:59:27.824730 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:27.824720 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-8e251-predictor-64c47d4b4f-xj88b" event={"ID":"b1fc29ed-9974-4a56-a378-2f8018da4ee1","Type":"ContainerDied","Data":"24af753457977b49e10a30e2f8e079d3095518f7f36331851c5772c56d31e1fa"} Apr 16 17:59:27.831208 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:27.831190 2560 scope.go:117] "RemoveContainer" containerID="40ec8a17ceac5aa666bea54c4af0ee133e654f759762371d0f8fbc86276ff1a4" Apr 16 17:59:27.831457 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:59:27.831433 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40ec8a17ceac5aa666bea54c4af0ee133e654f759762371d0f8fbc86276ff1a4\": container with ID starting with 40ec8a17ceac5aa666bea54c4af0ee133e654f759762371d0f8fbc86276ff1a4 not found: ID does not exist" containerID="40ec8a17ceac5aa666bea54c4af0ee133e654f759762371d0f8fbc86276ff1a4" Apr 16 17:59:27.831533 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:27.831462 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40ec8a17ceac5aa666bea54c4af0ee133e654f759762371d0f8fbc86276ff1a4"} err="failed to get container status \"40ec8a17ceac5aa666bea54c4af0ee133e654f759762371d0f8fbc86276ff1a4\": rpc error: code = NotFound desc = could not find container \"40ec8a17ceac5aa666bea54c4af0ee133e654f759762371d0f8fbc86276ff1a4\": container with ID starting with 40ec8a17ceac5aa666bea54c4af0ee133e654f759762371d0f8fbc86276ff1a4 not found: ID does not exist" Apr 16 17:59:27.831533 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:27.831478 2560 scope.go:117] "RemoveContainer" containerID="b32b37290fa7f481a6164acacfad3c40defac49ca99a8dd9b8fdc727d3c606c9" Apr 16 17:59:27.839291 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:27.839276 2560 scope.go:117] "RemoveContainer" containerID="b32b37290fa7f481a6164acacfad3c40defac49ca99a8dd9b8fdc727d3c606c9" Apr 16 17:59:27.839496 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:59:27.839479 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b32b37290fa7f481a6164acacfad3c40defac49ca99a8dd9b8fdc727d3c606c9\": container with ID starting with b32b37290fa7f481a6164acacfad3c40defac49ca99a8dd9b8fdc727d3c606c9 not found: ID does not exist" containerID="b32b37290fa7f481a6164acacfad3c40defac49ca99a8dd9b8fdc727d3c606c9" Apr 16 17:59:27.839544 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:27.839503 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b32b37290fa7f481a6164acacfad3c40defac49ca99a8dd9b8fdc727d3c606c9"} err="failed to get container status \"b32b37290fa7f481a6164acacfad3c40defac49ca99a8dd9b8fdc727d3c606c9\": rpc error: code = NotFound desc = could not find container \"b32b37290fa7f481a6164acacfad3c40defac49ca99a8dd9b8fdc727d3c606c9\": container with ID starting with 
b32b37290fa7f481a6164acacfad3c40defac49ca99a8dd9b8fdc727d3c606c9 not found: ID does not exist" Apr 16 17:59:27.855265 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:27.855239 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-8e251-predictor-59cb876bdd-dx2p4"] Apr 16 17:59:27.862593 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:27.862575 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-8e251-predictor-59cb876bdd-dx2p4"] Apr 16 17:59:27.883191 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:27.883165 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-8e251-predictor-64c47d4b4f-xj88b"] Apr 16 17:59:27.895617 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:27.895594 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-8e251-predictor-64c47d4b4f-xj88b"] Apr 16 17:59:28.162754 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:28.162673 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52d14f4e-61a3-47b0-a0b7-ec356e3847f7" path="/var/lib/kubelet/pods/52d14f4e-61a3-47b0-a0b7-ec356e3847f7/volumes" Apr 16 17:59:28.163631 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:28.163610 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1fc29ed-9974-4a56-a378-2f8018da4ee1" path="/var/lib/kubelet/pods/b1fc29ed-9974-4a56-a378-2f8018da4ee1/volumes" Apr 16 17:59:32.218365 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:32.218325 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-8e251-5896fcfc4c-jr8mf" podUID="070fed0c-aefb-4bc3-8855-8e159dd1164d" containerName="switch-graph-8e251" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:59:35.816035 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:35.815982 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-8493d-predictor-77b4959486-7d762" podUID="8fd58d8c-86f2-440a-bb41-f68159eed96b" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.37:8080: connect: connection refused" Apr 16 17:59:35.816456 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:35.816181 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-8493d-predictor-788f857f9f-wlsz5" podUID="13a91386-926e-494b-8b84-b5f1685c0ef7" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.38:8080: connect: connection refused" Apr 16 17:59:37.219230 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:37.219195 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-8e251-5896fcfc4c-jr8mf" podUID="070fed0c-aefb-4bc3-8855-8e159dd1164d" containerName="switch-graph-8e251" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:59:37.219675 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:37.219304 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/switch-graph-8e251-5896fcfc4c-jr8mf" Apr 16 17:59:42.219032 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:42.218984 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-8e251-5896fcfc4c-jr8mf" podUID="070fed0c-aefb-4bc3-8855-8e159dd1164d" containerName="switch-graph-8e251" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:59:45.817088 ip-10-0-134-233 
kubenswrapper[2560]: I0416 17:59:45.817044 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-8493d-predictor-788f857f9f-wlsz5" podUID="13a91386-926e-494b-8b84-b5f1685c0ef7" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.38:8080: connect: connection refused" Apr 16 17:59:45.817536 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:45.817044 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-8493d-predictor-77b4959486-7d762" podUID="8fd58d8c-86f2-440a-bb41-f68159eed96b" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.37:8080: connect: connection refused" Apr 16 17:59:47.219399 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:47.219357 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-8e251-5896fcfc4c-jr8mf" podUID="070fed0c-aefb-4bc3-8855-8e159dd1164d" containerName="switch-graph-8e251" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:59:52.219522 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:52.219442 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-8e251-5896fcfc4c-jr8mf" podUID="070fed0c-aefb-4bc3-8855-8e159dd1164d" containerName="switch-graph-8e251" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:59:53.780311 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:53.780289 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-8e251-5896fcfc4c-jr8mf" Apr 16 17:59:53.845933 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:53.845903 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/070fed0c-aefb-4bc3-8855-8e159dd1164d-openshift-service-ca-bundle\") pod \"070fed0c-aefb-4bc3-8855-8e159dd1164d\" (UID: \"070fed0c-aefb-4bc3-8855-8e159dd1164d\") " Apr 16 17:59:53.846076 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:53.845941 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/070fed0c-aefb-4bc3-8855-8e159dd1164d-proxy-tls\") pod \"070fed0c-aefb-4bc3-8855-8e159dd1164d\" (UID: \"070fed0c-aefb-4bc3-8855-8e159dd1164d\") " Apr 16 17:59:53.846254 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:53.846216 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/070fed0c-aefb-4bc3-8855-8e159dd1164d-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "070fed0c-aefb-4bc3-8855-8e159dd1164d" (UID: "070fed0c-aefb-4bc3-8855-8e159dd1164d"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 17:59:53.847897 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:53.847875 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/070fed0c-aefb-4bc3-8855-8e159dd1164d-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "070fed0c-aefb-4bc3-8855-8e159dd1164d" (UID: "070fed0c-aefb-4bc3-8855-8e159dd1164d"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 17:59:53.916403 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:53.916335 2560 generic.go:358] "Generic (PLEG): container finished" podID="070fed0c-aefb-4bc3-8855-8e159dd1164d" containerID="dcdb2ae7ac1bf319e90280ac80021052b6ddf8fd8af61fff133263bbe9af625d" exitCode=0 Apr 16 17:59:53.916403 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:53.916386 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-8e251-5896fcfc4c-jr8mf" event={"ID":"070fed0c-aefb-4bc3-8855-8e159dd1164d","Type":"ContainerDied","Data":"dcdb2ae7ac1bf319e90280ac80021052b6ddf8fd8af61fff133263bbe9af625d"} Apr 16 17:59:53.916403 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:53.916394 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-8e251-5896fcfc4c-jr8mf" Apr 16 17:59:53.916634 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:53.916415 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-8e251-5896fcfc4c-jr8mf" event={"ID":"070fed0c-aefb-4bc3-8855-8e159dd1164d","Type":"ContainerDied","Data":"a91ebe161bec4d7fae09405bbc6c75af21ca81bc8c8bd41b5cfb4d635a570e3b"} Apr 16 17:59:53.916634 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:53.916436 2560 scope.go:117] "RemoveContainer" containerID="dcdb2ae7ac1bf319e90280ac80021052b6ddf8fd8af61fff133263bbe9af625d" Apr 16 17:59:53.924227 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:53.924205 2560 scope.go:117] "RemoveContainer" containerID="dcdb2ae7ac1bf319e90280ac80021052b6ddf8fd8af61fff133263bbe9af625d" Apr 16 17:59:53.924477 ip-10-0-134-233 kubenswrapper[2560]: E0416 17:59:53.924459 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcdb2ae7ac1bf319e90280ac80021052b6ddf8fd8af61fff133263bbe9af625d\": container with ID starting with dcdb2ae7ac1bf319e90280ac80021052b6ddf8fd8af61fff133263bbe9af625d not found: ID does not exist" containerID="dcdb2ae7ac1bf319e90280ac80021052b6ddf8fd8af61fff133263bbe9af625d" Apr 16 17:59:53.924529 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:53.924491 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcdb2ae7ac1bf319e90280ac80021052b6ddf8fd8af61fff133263bbe9af625d"} err="failed to get container status \"dcdb2ae7ac1bf319e90280ac80021052b6ddf8fd8af61fff133263bbe9af625d\": rpc error: code = NotFound desc = could not find container \"dcdb2ae7ac1bf319e90280ac80021052b6ddf8fd8af61fff133263bbe9af625d\": container with ID starting with dcdb2ae7ac1bf319e90280ac80021052b6ddf8fd8af61fff133263bbe9af625d not found: ID does not exist" Apr 16 17:59:53.941575 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:53.941546 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-8e251-5896fcfc4c-jr8mf"] Apr 16 17:59:53.946953 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:53.946935 2560 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/070fed0c-aefb-4bc3-8855-8e159dd1164d-openshift-service-ca-bundle\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:59:53.947048 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:53.946956 2560 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/070fed0c-aefb-4bc3-8855-8e159dd1164d-proxy-tls\") on node 
\"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 17:59:53.949045 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:53.949026 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/switch-graph-8e251-5896fcfc4c-jr8mf"] Apr 16 17:59:54.162169 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:54.162142 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="070fed0c-aefb-4bc3-8855-8e159dd1164d" path="/var/lib/kubelet/pods/070fed0c-aefb-4bc3-8855-8e159dd1164d/volumes" Apr 16 17:59:55.816700 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:55.816653 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-8493d-predictor-77b4959486-7d762" podUID="8fd58d8c-86f2-440a-bb41-f68159eed96b" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.37:8080: connect: connection refused" Apr 16 17:59:55.817186 ip-10-0-134-233 kubenswrapper[2560]: I0416 17:59:55.816650 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-8493d-predictor-788f857f9f-wlsz5" podUID="13a91386-926e-494b-8b84-b5f1685c0ef7" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.38:8080: connect: connection refused" Apr 16 18:00:05.817097 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:05.817044 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-8493d-predictor-77b4959486-7d762" podUID="8fd58d8c-86f2-440a-bb41-f68159eed96b" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.37:8080: connect: connection refused" Apr 16 18:00:05.817568 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:05.817048 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-8493d-predictor-788f857f9f-wlsz5" podUID="13a91386-926e-494b-8b84-b5f1685c0ef7" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.38:8080: connect: connection refused" Apr 16 18:00:13.258610 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:13.258577 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-f6449-6ff9b5b46f-jl9sk"] Apr 16 18:00:13.259119 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:13.258856 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/sequence-graph-f6449-6ff9b5b46f-jl9sk" podUID="70430b09-ce04-415c-a040-3fab5b7e4673" containerName="sequence-graph-f6449" containerID="cri-o://72f888f5a69f9ac9db6a46ec63987c8f3ec1111cfbaef1ecafb88740c7e58fad" gracePeriod=30 Apr 16 18:00:13.404293 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:13.404259 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-f6449-predictor-67b49d9f68-jvnsv"] Apr 16 18:00:13.404545 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:13.404494 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-f6449-predictor-67b49d9f68-jvnsv" podUID="b9295a82-3f8c-40a9-aacc-33edd8f703e3" containerName="kserve-container" containerID="cri-o://4977539e1e0c8c6220e7220d6c60bb5360947bed7b7f31657e74993f3cfb7274" gracePeriod=30 Apr 16 18:00:13.452769 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:13.452742 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-5e299-predictor-7d5dc4956b-d27ll"] Apr 16 18:00:13.453132 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:13.453115 
2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b1fc29ed-9974-4a56-a378-2f8018da4ee1" containerName="kserve-container" Apr 16 18:00:13.453207 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:13.453133 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1fc29ed-9974-4a56-a378-2f8018da4ee1" containerName="kserve-container" Apr 16 18:00:13.453207 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:13.453147 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="52d14f4e-61a3-47b0-a0b7-ec356e3847f7" containerName="kserve-container" Apr 16 18:00:13.453207 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:13.453153 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="52d14f4e-61a3-47b0-a0b7-ec356e3847f7" containerName="kserve-container" Apr 16 18:00:13.453207 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:13.453166 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="070fed0c-aefb-4bc3-8855-8e159dd1164d" containerName="switch-graph-8e251" Apr 16 18:00:13.453207 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:13.453172 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="070fed0c-aefb-4bc3-8855-8e159dd1164d" containerName="switch-graph-8e251" Apr 16 18:00:13.453418 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:13.453239 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="52d14f4e-61a3-47b0-a0b7-ec356e3847f7" containerName="kserve-container" Apr 16 18:00:13.453418 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:13.453250 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="b1fc29ed-9974-4a56-a378-2f8018da4ee1" containerName="kserve-container" Apr 16 18:00:13.453418 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:13.453258 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="070fed0c-aefb-4bc3-8855-8e159dd1164d" containerName="switch-graph-8e251" Apr 16 18:00:13.456327 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:13.456309 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-5e299-predictor-7d5dc4956b-d27ll" Apr 16 18:00:13.465646 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:13.465626 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-5e299-predictor-7d5dc4956b-d27ll" Apr 16 18:00:13.473045 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:13.473021 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-5e299-predictor-7d5dc4956b-d27ll"] Apr 16 18:00:13.505854 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:13.505808 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-f6449-predictor-5db958dfd5-jktbg"] Apr 16 18:00:13.506139 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:13.506111 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-f6449-predictor-5db958dfd5-jktbg" podUID="098054f0-d166-4d20-89b1-6e7fefa70d0a" containerName="kserve-container" containerID="cri-o://e98b2abb0d2ad34f429a645146ef50170f26acc1a117b6bee0bbe64a9b5b2714" gracePeriod=30 Apr 16 18:00:13.566472 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:13.566438 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-5e299-predictor-75c4868959-6fvvq"] Apr 16 18:00:13.571326 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:13.571303 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-5e299-predictor-75c4868959-6fvvq" Apr 16 18:00:13.581776 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:13.581748 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-5e299-predictor-75c4868959-6fvvq"] Apr 16 18:00:13.583702 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:13.583684 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-5e299-predictor-75c4868959-6fvvq" Apr 16 18:00:13.618785 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:13.618762 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-5e299-predictor-7d5dc4956b-d27ll"] Apr 16 18:00:13.621612 ip-10-0-134-233 kubenswrapper[2560]: W0416 18:00:13.621549 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod832af96b_ab13_4a5e_a183_893d2434cdc1.slice/crio-027e03f3238a76f0f9ff637e97d5be8f6f41278f4671baaae3fc5311cdbdf049 WatchSource:0}: Error finding container 027e03f3238a76f0f9ff637e97d5be8f6f41278f4671baaae3fc5311cdbdf049: Status 404 returned error can't find the container with id 027e03f3238a76f0f9ff637e97d5be8f6f41278f4671baaae3fc5311cdbdf049 Apr 16 18:00:13.712714 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:13.712668 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-5e299-predictor-75c4868959-6fvvq"] Apr 16 18:00:13.716493 ip-10-0-134-233 kubenswrapper[2560]: W0416 18:00:13.716463 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb206248_a3bd_489f_802e_7e9f42dc6dec.slice/crio-07be189a36531229014a9612a124d5d43415ba39ad355bb3055de7414880d398 WatchSource:0}: Error finding container 07be189a36531229014a9612a124d5d43415ba39ad355bb3055de7414880d398: Status 404 returned error can't find the container with id 07be189a36531229014a9612a124d5d43415ba39ad355bb3055de7414880d398 Apr 16 18:00:13.978764 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:13.978731 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="kserve-ci-e2e-test/error-404-isvc-5e299-predictor-75c4868959-6fvvq" event={"ID":"cb206248-a3bd-489f-802e-7e9f42dc6dec","Type":"ContainerStarted","Data":"df75d9871fb3603e0cd868d0b6c3e541b4e828732fb07be904ed0b425e47b62b"} Apr 16 18:00:13.978974 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:13.978771 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-5e299-predictor-75c4868959-6fvvq" event={"ID":"cb206248-a3bd-489f-802e-7e9f42dc6dec","Type":"ContainerStarted","Data":"07be189a36531229014a9612a124d5d43415ba39ad355bb3055de7414880d398"} Apr 16 18:00:13.979056 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:13.978969 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-5e299-predictor-75c4868959-6fvvq" Apr 16 18:00:13.980261 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:13.980241 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-5e299-predictor-7d5dc4956b-d27ll" event={"ID":"832af96b-ab13-4a5e-a183-893d2434cdc1","Type":"ContainerStarted","Data":"80f3eb959d7ca6ee619bf4ff8fbc552b79d88040da11af8c0d4189706efb8de4"} Apr 16 18:00:13.980361 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:13.980266 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-5e299-predictor-7d5dc4956b-d27ll" event={"ID":"832af96b-ab13-4a5e-a183-893d2434cdc1","Type":"ContainerStarted","Data":"027e03f3238a76f0f9ff637e97d5be8f6f41278f4671baaae3fc5311cdbdf049"} Apr 16 18:00:13.980361 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:13.980242 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-5e299-predictor-75c4868959-6fvvq" podUID="cb206248-a3bd-489f-802e-7e9f42dc6dec" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.40:8080: connect: connection refused" Apr 16 18:00:13.980457 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:13.980445 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-5e299-predictor-7d5dc4956b-d27ll" Apr 16 18:00:13.981506 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:13.981486 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-5e299-predictor-7d5dc4956b-d27ll" podUID="832af96b-ab13-4a5e-a183-893d2434cdc1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.39:8080: connect: connection refused" Apr 16 18:00:13.999814 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:13.999764 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/error-404-isvc-5e299-predictor-75c4868959-6fvvq" podStartSLOduration=0.999730177 podStartE2EDuration="999.730177ms" podCreationTimestamp="2026-04-16 18:00:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:00:13.999323125 +0000 UTC m=+1182.362531992" watchObservedRunningTime="2026-04-16 18:00:13.999730177 +0000 UTC m=+1182.362939051" Apr 16 18:00:14.019968 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:14.019925 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-5e299-predictor-7d5dc4956b-d27ll" podStartSLOduration=1.019911935 podStartE2EDuration="1.019911935s" podCreationTimestamp="2026-04-16 18:00:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:00:14.018027126 +0000 UTC m=+1182.381235998" watchObservedRunningTime="2026-04-16 18:00:14.019911935 +0000 UTC m=+1182.383120806" Apr 16 18:00:14.984075 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:14.984031 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-5e299-predictor-7d5dc4956b-d27ll" podUID="832af96b-ab13-4a5e-a183-893d2434cdc1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.39:8080: connect: connection refused" Apr 16 18:00:14.984522 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:14.984035 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-5e299-predictor-75c4868959-6fvvq" podUID="cb206248-a3bd-489f-802e-7e9f42dc6dec" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.40:8080: connect: connection refused" Apr 16 18:00:15.365577 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:15.365536 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-f6449-6ff9b5b46f-jl9sk" podUID="70430b09-ce04-415c-a040-3fab5b7e4673" containerName="sequence-graph-f6449" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:00:15.817316 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:15.817282 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-8493d-predictor-788f857f9f-wlsz5" Apr 16 18:00:15.817790 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:15.817770 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-8493d-predictor-77b4959486-7d762" Apr 16 18:00:16.778449 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:16.778427 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-f6449-predictor-67b49d9f68-jvnsv" Apr 16 18:00:16.781417 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:16.781398 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-f6449-predictor-5db958dfd5-jktbg" Apr 16 18:00:16.990699 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:16.990660 2560 generic.go:358] "Generic (PLEG): container finished" podID="098054f0-d166-4d20-89b1-6e7fefa70d0a" containerID="e98b2abb0d2ad34f429a645146ef50170f26acc1a117b6bee0bbe64a9b5b2714" exitCode=0 Apr 16 18:00:16.990857 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:16.990725 2560 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-f6449-predictor-5db958dfd5-jktbg" Apr 16 18:00:16.990857 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:16.990749 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-f6449-predictor-5db958dfd5-jktbg" event={"ID":"098054f0-d166-4d20-89b1-6e7fefa70d0a","Type":"ContainerDied","Data":"e98b2abb0d2ad34f429a645146ef50170f26acc1a117b6bee0bbe64a9b5b2714"} Apr 16 18:00:16.990857 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:16.990787 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-f6449-predictor-5db958dfd5-jktbg" event={"ID":"098054f0-d166-4d20-89b1-6e7fefa70d0a","Type":"ContainerDied","Data":"4599fcb653a2c53840d4775ffeb20fa82927828fe8f50e07c68a20276f1934da"} Apr 16 18:00:16.990857 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:16.990802 2560 scope.go:117] "RemoveContainer" containerID="e98b2abb0d2ad34f429a645146ef50170f26acc1a117b6bee0bbe64a9b5b2714" Apr 16 18:00:16.991937 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:16.991911 2560 generic.go:358] "Generic (PLEG): container finished" podID="b9295a82-3f8c-40a9-aacc-33edd8f703e3" containerID="4977539e1e0c8c6220e7220d6c60bb5360947bed7b7f31657e74993f3cfb7274" exitCode=0 Apr 16 18:00:16.992041 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:16.991946 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-f6449-predictor-67b49d9f68-jvnsv" event={"ID":"b9295a82-3f8c-40a9-aacc-33edd8f703e3","Type":"ContainerDied","Data":"4977539e1e0c8c6220e7220d6c60bb5360947bed7b7f31657e74993f3cfb7274"} Apr 16 18:00:16.992041 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:16.991969 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-f6449-predictor-67b49d9f68-jvnsv" event={"ID":"b9295a82-3f8c-40a9-aacc-33edd8f703e3","Type":"ContainerDied","Data":"ec9b8e8203942708886c1c08f1b82b1d9f0ae2ab3f2fe2ca6d6c28487231d5c1"} Apr 16 18:00:16.992041 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:16.991974 2560 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-f6449-predictor-67b49d9f68-jvnsv" Apr 16 18:00:16.999351 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:16.999265 2560 scope.go:117] "RemoveContainer" containerID="e98b2abb0d2ad34f429a645146ef50170f26acc1a117b6bee0bbe64a9b5b2714" Apr 16 18:00:16.999533 ip-10-0-134-233 kubenswrapper[2560]: E0416 18:00:16.999510 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e98b2abb0d2ad34f429a645146ef50170f26acc1a117b6bee0bbe64a9b5b2714\": container with ID starting with e98b2abb0d2ad34f429a645146ef50170f26acc1a117b6bee0bbe64a9b5b2714 not found: ID does not exist" containerID="e98b2abb0d2ad34f429a645146ef50170f26acc1a117b6bee0bbe64a9b5b2714" Apr 16 18:00:16.999615 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:16.999539 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e98b2abb0d2ad34f429a645146ef50170f26acc1a117b6bee0bbe64a9b5b2714"} err="failed to get container status \"e98b2abb0d2ad34f429a645146ef50170f26acc1a117b6bee0bbe64a9b5b2714\": rpc error: code = NotFound desc = could not find container \"e98b2abb0d2ad34f429a645146ef50170f26acc1a117b6bee0bbe64a9b5b2714\": container with ID starting with e98b2abb0d2ad34f429a645146ef50170f26acc1a117b6bee0bbe64a9b5b2714 not found: ID does not exist" Apr 16 18:00:16.999615 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:16.999560 2560 scope.go:117] "RemoveContainer" containerID="4977539e1e0c8c6220e7220d6c60bb5360947bed7b7f31657e74993f3cfb7274" Apr 16 18:00:17.006579 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:17.006565 2560 scope.go:117] "RemoveContainer" containerID="4977539e1e0c8c6220e7220d6c60bb5360947bed7b7f31657e74993f3cfb7274" Apr 16 18:00:17.006795 ip-10-0-134-233 kubenswrapper[2560]: E0416 18:00:17.006775 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4977539e1e0c8c6220e7220d6c60bb5360947bed7b7f31657e74993f3cfb7274\": container with ID starting with 4977539e1e0c8c6220e7220d6c60bb5360947bed7b7f31657e74993f3cfb7274 not found: ID does not exist" containerID="4977539e1e0c8c6220e7220d6c60bb5360947bed7b7f31657e74993f3cfb7274" Apr 16 18:00:17.006893 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:17.006806 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4977539e1e0c8c6220e7220d6c60bb5360947bed7b7f31657e74993f3cfb7274"} err="failed to get container status \"4977539e1e0c8c6220e7220d6c60bb5360947bed7b7f31657e74993f3cfb7274\": rpc error: code = NotFound desc = could not find container \"4977539e1e0c8c6220e7220d6c60bb5360947bed7b7f31657e74993f3cfb7274\": container with ID starting with 4977539e1e0c8c6220e7220d6c60bb5360947bed7b7f31657e74993f3cfb7274 not found: ID does not exist" Apr 16 18:00:17.023273 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:17.023204 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-f6449-predictor-67b49d9f68-jvnsv"] Apr 16 18:00:17.033946 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:17.033925 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-f6449-predictor-67b49d9f68-jvnsv"] Apr 16 18:00:17.053251 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:17.053232 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-f6449-predictor-5db958dfd5-jktbg"] Apr 16 18:00:17.065740 
ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:17.065710 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-f6449-predictor-5db958dfd5-jktbg"] Apr 16 18:00:18.162066 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:18.162034 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="098054f0-d166-4d20-89b1-6e7fefa70d0a" path="/var/lib/kubelet/pods/098054f0-d166-4d20-89b1-6e7fefa70d0a/volumes" Apr 16 18:00:18.162423 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:18.162268 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9295a82-3f8c-40a9-aacc-33edd8f703e3" path="/var/lib/kubelet/pods/b9295a82-3f8c-40a9-aacc-33edd8f703e3/volumes" Apr 16 18:00:20.365425 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:20.365384 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-f6449-6ff9b5b46f-jl9sk" podUID="70430b09-ce04-415c-a040-3fab5b7e4673" containerName="sequence-graph-f6449" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:00:24.984176 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:24.984135 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-5e299-predictor-7d5dc4956b-d27ll" podUID="832af96b-ab13-4a5e-a183-893d2434cdc1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.39:8080: connect: connection refused" Apr 16 18:00:24.984571 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:24.984135 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-5e299-predictor-75c4868959-6fvvq" podUID="cb206248-a3bd-489f-802e-7e9f42dc6dec" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.40:8080: connect: connection refused" Apr 16 18:00:25.365350 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:25.365311 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-f6449-6ff9b5b46f-jl9sk" podUID="70430b09-ce04-415c-a040-3fab5b7e4673" containerName="sequence-graph-f6449" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:00:25.365547 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:25.365442 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-f6449-6ff9b5b46f-jl9sk" Apr 16 18:00:30.365810 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:30.365772 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-f6449-6ff9b5b46f-jl9sk" podUID="70430b09-ce04-415c-a040-3fab5b7e4673" containerName="sequence-graph-f6449" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:00:33.964294 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:33.964206 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-8493d-5fc7d65d87-6gpbl"] Apr 16 18:00:33.964660 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:33.964555 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b9295a82-3f8c-40a9-aacc-33edd8f703e3" containerName="kserve-container" Apr 16 18:00:33.964660 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:33.964567 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9295a82-3f8c-40a9-aacc-33edd8f703e3" containerName="kserve-container" Apr 16 18:00:33.964660 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:33.964581 2560 cpu_manager.go:401] "RemoveStaleState: 
containerMap: removing container" podUID="098054f0-d166-4d20-89b1-6e7fefa70d0a" containerName="kserve-container" Apr 16 18:00:33.964660 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:33.964587 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="098054f0-d166-4d20-89b1-6e7fefa70d0a" containerName="kserve-container" Apr 16 18:00:33.964660 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:33.964641 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="098054f0-d166-4d20-89b1-6e7fefa70d0a" containerName="kserve-container" Apr 16 18:00:33.964660 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:33.964653 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="b9295a82-3f8c-40a9-aacc-33edd8f703e3" containerName="kserve-container" Apr 16 18:00:33.969185 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:33.969163 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-8493d-5fc7d65d87-6gpbl" Apr 16 18:00:33.971297 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:33.971273 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"ensemble-graph-8493d-kube-rbac-proxy-sar-config\"" Apr 16 18:00:33.971297 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:33.971290 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"ensemble-graph-8493d-serving-cert\"" Apr 16 18:00:33.978052 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:33.978032 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-8493d-5fc7d65d87-6gpbl"] Apr 16 18:00:34.083565 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:34.083526 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fae30821-c57c-42d5-b7cd-9d3f302091bc-proxy-tls\") pod \"ensemble-graph-8493d-5fc7d65d87-6gpbl\" (UID: \"fae30821-c57c-42d5-b7cd-9d3f302091bc\") " pod="kserve-ci-e2e-test/ensemble-graph-8493d-5fc7d65d87-6gpbl" Apr 16 18:00:34.083750 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:34.083576 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fae30821-c57c-42d5-b7cd-9d3f302091bc-openshift-service-ca-bundle\") pod \"ensemble-graph-8493d-5fc7d65d87-6gpbl\" (UID: \"fae30821-c57c-42d5-b7cd-9d3f302091bc\") " pod="kserve-ci-e2e-test/ensemble-graph-8493d-5fc7d65d87-6gpbl" Apr 16 18:00:34.184638 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:34.184595 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fae30821-c57c-42d5-b7cd-9d3f302091bc-proxy-tls\") pod \"ensemble-graph-8493d-5fc7d65d87-6gpbl\" (UID: \"fae30821-c57c-42d5-b7cd-9d3f302091bc\") " pod="kserve-ci-e2e-test/ensemble-graph-8493d-5fc7d65d87-6gpbl" Apr 16 18:00:34.184823 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:34.184652 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fae30821-c57c-42d5-b7cd-9d3f302091bc-openshift-service-ca-bundle\") pod \"ensemble-graph-8493d-5fc7d65d87-6gpbl\" (UID: \"fae30821-c57c-42d5-b7cd-9d3f302091bc\") " pod="kserve-ci-e2e-test/ensemble-graph-8493d-5fc7d65d87-6gpbl" Apr 16 18:00:34.185332 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:34.185302 2560 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fae30821-c57c-42d5-b7cd-9d3f302091bc-openshift-service-ca-bundle\") pod \"ensemble-graph-8493d-5fc7d65d87-6gpbl\" (UID: \"fae30821-c57c-42d5-b7cd-9d3f302091bc\") " pod="kserve-ci-e2e-test/ensemble-graph-8493d-5fc7d65d87-6gpbl" Apr 16 18:00:34.186930 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:34.186903 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fae30821-c57c-42d5-b7cd-9d3f302091bc-proxy-tls\") pod \"ensemble-graph-8493d-5fc7d65d87-6gpbl\" (UID: \"fae30821-c57c-42d5-b7cd-9d3f302091bc\") " pod="kserve-ci-e2e-test/ensemble-graph-8493d-5fc7d65d87-6gpbl" Apr 16 18:00:34.280306 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:34.280277 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-8493d-5fc7d65d87-6gpbl" Apr 16 18:00:34.405995 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:34.405967 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-8493d-5fc7d65d87-6gpbl"] Apr 16 18:00:34.408510 ip-10-0-134-233 kubenswrapper[2560]: W0416 18:00:34.408481 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfae30821_c57c_42d5_b7cd_9d3f302091bc.slice/crio-c5ca20453d2a313dabc5d47d726c4e91f2ee97c9d58ef91a3f25dfa50d831351 WatchSource:0}: Error finding container c5ca20453d2a313dabc5d47d726c4e91f2ee97c9d58ef91a3f25dfa50d831351: Status 404 returned error can't find the container with id c5ca20453d2a313dabc5d47d726c4e91f2ee97c9d58ef91a3f25dfa50d831351 Apr 16 18:00:34.984812 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:34.984771 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-5e299-predictor-7d5dc4956b-d27ll" podUID="832af96b-ab13-4a5e-a183-893d2434cdc1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.39:8080: connect: connection refused" Apr 16 18:00:34.985241 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:34.984771 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-5e299-predictor-75c4868959-6fvvq" podUID="cb206248-a3bd-489f-802e-7e9f42dc6dec" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.40:8080: connect: connection refused" Apr 16 18:00:35.053425 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:35.053381 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-8493d-5fc7d65d87-6gpbl" event={"ID":"fae30821-c57c-42d5-b7cd-9d3f302091bc","Type":"ContainerStarted","Data":"12e7a651fdc4fb9e6485107e7fe64d4d8fab63e856edee16ffac07db86752a00"} Apr 16 18:00:35.053425 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:35.053421 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-8493d-5fc7d65d87-6gpbl" event={"ID":"fae30821-c57c-42d5-b7cd-9d3f302091bc","Type":"ContainerStarted","Data":"c5ca20453d2a313dabc5d47d726c4e91f2ee97c9d58ef91a3f25dfa50d831351"} Apr 16 18:00:35.053636 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:35.053509 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/ensemble-graph-8493d-5fc7d65d87-6gpbl" Apr 16 18:00:35.082585 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:35.082539 2560 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/ensemble-graph-8493d-5fc7d65d87-6gpbl" podStartSLOduration=2.082526855 podStartE2EDuration="2.082526855s" podCreationTimestamp="2026-04-16 18:00:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:00:35.081482811 +0000 UTC m=+1203.444691682" watchObservedRunningTime="2026-04-16 18:00:35.082526855 +0000 UTC m=+1203.445735777" Apr 16 18:00:35.365213 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:35.365180 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-f6449-6ff9b5b46f-jl9sk" podUID="70430b09-ce04-415c-a040-3fab5b7e4673" containerName="sequence-graph-f6449" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:00:40.365754 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:40.365714 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-f6449-6ff9b5b46f-jl9sk" podUID="70430b09-ce04-415c-a040-3fab5b7e4673" containerName="sequence-graph-f6449" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:00:41.062816 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:41.062786 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/ensemble-graph-8493d-5fc7d65d87-6gpbl" Apr 16 18:00:43.407711 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:43.407690 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-f6449-6ff9b5b46f-jl9sk" Apr 16 18:00:43.467083 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:43.467047 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70430b09-ce04-415c-a040-3fab5b7e4673-openshift-service-ca-bundle\") pod \"70430b09-ce04-415c-a040-3fab5b7e4673\" (UID: \"70430b09-ce04-415c-a040-3fab5b7e4673\") " Apr 16 18:00:43.467267 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:43.467131 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/70430b09-ce04-415c-a040-3fab5b7e4673-proxy-tls\") pod \"70430b09-ce04-415c-a040-3fab5b7e4673\" (UID: \"70430b09-ce04-415c-a040-3fab5b7e4673\") " Apr 16 18:00:43.467875 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:43.467467 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70430b09-ce04-415c-a040-3fab5b7e4673-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "70430b09-ce04-415c-a040-3fab5b7e4673" (UID: "70430b09-ce04-415c-a040-3fab5b7e4673"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 18:00:43.469592 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:43.469559 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70430b09-ce04-415c-a040-3fab5b7e4673-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "70430b09-ce04-415c-a040-3fab5b7e4673" (UID: "70430b09-ce04-415c-a040-3fab5b7e4673"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 18:00:43.568198 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:43.568166 2560 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70430b09-ce04-415c-a040-3fab5b7e4673-openshift-service-ca-bundle\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 18:00:43.568198 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:43.568196 2560 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/70430b09-ce04-415c-a040-3fab5b7e4673-proxy-tls\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 18:00:44.051189 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:44.051156 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-8493d-5fc7d65d87-6gpbl"] Apr 16 18:00:44.051495 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:44.051444 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/ensemble-graph-8493d-5fc7d65d87-6gpbl" podUID="fae30821-c57c-42d5-b7cd-9d3f302091bc" containerName="ensemble-graph-8493d" containerID="cri-o://12e7a651fdc4fb9e6485107e7fe64d4d8fab63e856edee16ffac07db86752a00" gracePeriod=30 Apr 16 18:00:44.084997 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:44.084968 2560 generic.go:358] "Generic (PLEG): container finished" podID="70430b09-ce04-415c-a040-3fab5b7e4673" containerID="72f888f5a69f9ac9db6a46ec63987c8f3ec1111cfbaef1ecafb88740c7e58fad" exitCode=0 Apr 16 18:00:44.085130 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:44.085046 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-f6449-6ff9b5b46f-jl9sk" event={"ID":"70430b09-ce04-415c-a040-3fab5b7e4673","Type":"ContainerDied","Data":"72f888f5a69f9ac9db6a46ec63987c8f3ec1111cfbaef1ecafb88740c7e58fad"} Apr 16 18:00:44.085130 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:44.085059 2560 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-f6449-6ff9b5b46f-jl9sk" Apr 16 18:00:44.085130 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:44.085083 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-f6449-6ff9b5b46f-jl9sk" event={"ID":"70430b09-ce04-415c-a040-3fab5b7e4673","Type":"ContainerDied","Data":"130a82b3e348c329417d30cee7ea62166cbb149886e03b010ba7a1e38cedd118"} Apr 16 18:00:44.085130 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:44.085109 2560 scope.go:117] "RemoveContainer" containerID="72f888f5a69f9ac9db6a46ec63987c8f3ec1111cfbaef1ecafb88740c7e58fad" Apr 16 18:00:44.093465 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:44.093438 2560 scope.go:117] "RemoveContainer" containerID="72f888f5a69f9ac9db6a46ec63987c8f3ec1111cfbaef1ecafb88740c7e58fad" Apr 16 18:00:44.093744 ip-10-0-134-233 kubenswrapper[2560]: E0416 18:00:44.093719 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72f888f5a69f9ac9db6a46ec63987c8f3ec1111cfbaef1ecafb88740c7e58fad\": container with ID starting with 72f888f5a69f9ac9db6a46ec63987c8f3ec1111cfbaef1ecafb88740c7e58fad not found: ID does not exist" containerID="72f888f5a69f9ac9db6a46ec63987c8f3ec1111cfbaef1ecafb88740c7e58fad" Apr 16 18:00:44.093818 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:44.093752 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72f888f5a69f9ac9db6a46ec63987c8f3ec1111cfbaef1ecafb88740c7e58fad"} err="failed to get container status \"72f888f5a69f9ac9db6a46ec63987c8f3ec1111cfbaef1ecafb88740c7e58fad\": rpc error: code = NotFound desc = could not find container \"72f888f5a69f9ac9db6a46ec63987c8f3ec1111cfbaef1ecafb88740c7e58fad\": container with ID starting with 72f888f5a69f9ac9db6a46ec63987c8f3ec1111cfbaef1ecafb88740c7e58fad not found: ID does not exist" Apr 16 18:00:44.113179 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:44.113157 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-f6449-6ff9b5b46f-jl9sk"] Apr 16 18:00:44.130540 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:44.130516 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-f6449-6ff9b5b46f-jl9sk"] Apr 16 18:00:44.163455 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:44.163425 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70430b09-ce04-415c-a040-3fab5b7e4673" path="/var/lib/kubelet/pods/70430b09-ce04-415c-a040-3fab5b7e4673/volumes" Apr 16 18:00:44.201663 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:44.201635 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-8493d-predictor-77b4959486-7d762"] Apr 16 18:00:44.201876 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:44.201856 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-8493d-predictor-77b4959486-7d762" podUID="8fd58d8c-86f2-440a-bb41-f68159eed96b" containerName="kserve-container" containerID="cri-o://2d40130c557ae82db18eb6efebabb9facc3be242941bb9c8ceb20496409837dd" gracePeriod=30 Apr 16 18:00:44.256474 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:44.256445 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-ed576-predictor-78dc55c9fb-zr27l"] Apr 16 18:00:44.256902 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:44.256887 2560 cpu_manager.go:401] 
"RemoveStaleState: containerMap: removing container" podUID="70430b09-ce04-415c-a040-3fab5b7e4673" containerName="sequence-graph-f6449" Apr 16 18:00:44.256954 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:44.256903 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="70430b09-ce04-415c-a040-3fab5b7e4673" containerName="sequence-graph-f6449" Apr 16 18:00:44.256990 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:44.256986 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="70430b09-ce04-415c-a040-3fab5b7e4673" containerName="sequence-graph-f6449" Apr 16 18:00:44.261233 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:44.261207 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-ed576-predictor-78dc55c9fb-zr27l" Apr 16 18:00:44.271370 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:44.271354 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-ed576-predictor-78dc55c9fb-zr27l" Apr 16 18:00:44.281143 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:44.281120 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-ed576-predictor-78dc55c9fb-zr27l"] Apr 16 18:00:44.328279 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:44.328247 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-8493d-predictor-788f857f9f-wlsz5"] Apr 16 18:00:44.328599 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:44.328548 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-8493d-predictor-788f857f9f-wlsz5" podUID="13a91386-926e-494b-8b84-b5f1685c0ef7" containerName="kserve-container" containerID="cri-o://0551083636c03a7d3511d979ddecba7fd5469c251b815c4f49be1fae42ce1fe3" gracePeriod=30 Apr 16 18:00:44.411498 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:44.411465 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-ed576-predictor-65c7fd45bf-jfsc8"] Apr 16 18:00:44.416000 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:44.415975 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-ed576-predictor-65c7fd45bf-jfsc8" Apr 16 18:00:44.423418 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:44.423387 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-ed576-predictor-78dc55c9fb-zr27l"] Apr 16 18:00:44.427252 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:44.427209 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-ed576-predictor-65c7fd45bf-jfsc8"] Apr 16 18:00:44.427440 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:44.427421 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-ed576-predictor-65c7fd45bf-jfsc8" Apr 16 18:00:44.427505 ip-10-0-134-233 kubenswrapper[2560]: W0416 18:00:44.427460 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc85f1a3e_ffb7_4774_9193_1518e14433dc.slice/crio-6616707ba632cdec0c9273625db5b025341d5ce823d91fa36cf1fd29d911d2fb WatchSource:0}: Error finding container 6616707ba632cdec0c9273625db5b025341d5ce823d91fa36cf1fd29d911d2fb: Status 404 returned error can't find the container with id 6616707ba632cdec0c9273625db5b025341d5ce823d91fa36cf1fd29d911d2fb Apr 16 18:00:44.569514 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:44.569491 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-ed576-predictor-65c7fd45bf-jfsc8"] Apr 16 18:00:44.571714 ip-10-0-134-233 kubenswrapper[2560]: W0416 18:00:44.571682 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6854379_2a41_4136_8e6f_2efba510f34f.slice/crio-7168e2c1b512fb8a81412d57638ddeadd998c94dc357cc8ec9677d90ad99ecdc WatchSource:0}: Error finding container 7168e2c1b512fb8a81412d57638ddeadd998c94dc357cc8ec9677d90ad99ecdc: Status 404 returned error can't find the container with id 7168e2c1b512fb8a81412d57638ddeadd998c94dc357cc8ec9677d90ad99ecdc Apr 16 18:00:44.984199 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:44.984099 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-5e299-predictor-75c4868959-6fvvq" podUID="cb206248-a3bd-489f-802e-7e9f42dc6dec" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.40:8080: connect: connection refused" Apr 16 18:00:44.984366 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:44.984099 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-5e299-predictor-7d5dc4956b-d27ll" podUID="832af96b-ab13-4a5e-a183-893d2434cdc1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.39:8080: connect: connection refused" Apr 16 18:00:45.091009 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:45.090972 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-ed576-predictor-78dc55c9fb-zr27l" event={"ID":"c85f1a3e-ffb7-4774-9193-1518e14433dc","Type":"ContainerStarted","Data":"2c1fab621892e7575e2dba53ed85d3e8482024f36c3838842430c580146bcbaa"} Apr 16 18:00:45.091009 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:45.091014 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-ed576-predictor-78dc55c9fb-zr27l" event={"ID":"c85f1a3e-ffb7-4774-9193-1518e14433dc","Type":"ContainerStarted","Data":"6616707ba632cdec0c9273625db5b025341d5ce823d91fa36cf1fd29d911d2fb"} Apr 16 18:00:45.091281 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:45.091257 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-ed576-predictor-78dc55c9fb-zr27l" Apr 16 18:00:45.092433 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:45.092402 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-ed576-predictor-78dc55c9fb-zr27l" podUID="c85f1a3e-ffb7-4774-9193-1518e14433dc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.42:8080: connect: connection refused" Apr 16 18:00:45.092541 
ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:45.092505 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-ed576-predictor-65c7fd45bf-jfsc8" event={"ID":"a6854379-2a41-4136-8e6f-2efba510f34f","Type":"ContainerStarted","Data":"ca09b668eba72c83eaf6f184f33b2df710632d980ef67eeadca174815f4cd0a9"} Apr 16 18:00:45.092541 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:45.092531 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-ed576-predictor-65c7fd45bf-jfsc8" event={"ID":"a6854379-2a41-4136-8e6f-2efba510f34f","Type":"ContainerStarted","Data":"7168e2c1b512fb8a81412d57638ddeadd998c94dc357cc8ec9677d90ad99ecdc"} Apr 16 18:00:45.092759 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:45.092744 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-ed576-predictor-65c7fd45bf-jfsc8" Apr 16 18:00:45.093641 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:45.093616 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-ed576-predictor-65c7fd45bf-jfsc8" podUID="a6854379-2a41-4136-8e6f-2efba510f34f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.43:8080: connect: connection refused" Apr 16 18:00:45.116862 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:45.113508 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-ed576-predictor-78dc55c9fb-zr27l" podStartSLOduration=1.113490361 podStartE2EDuration="1.113490361s" podCreationTimestamp="2026-04-16 18:00:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:00:45.112805318 +0000 UTC m=+1213.476014193" watchObservedRunningTime="2026-04-16 18:00:45.113490361 +0000 UTC m=+1213.476699248" Apr 16 18:00:45.131911 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:45.131870 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/error-404-isvc-ed576-predictor-65c7fd45bf-jfsc8" podStartSLOduration=1.131857183 podStartE2EDuration="1.131857183s" podCreationTimestamp="2026-04-16 18:00:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:00:45.129867971 +0000 UTC m=+1213.493076844" watchObservedRunningTime="2026-04-16 18:00:45.131857183 +0000 UTC m=+1213.495066050" Apr 16 18:00:45.817036 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:45.816992 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-8493d-predictor-788f857f9f-wlsz5" podUID="13a91386-926e-494b-8b84-b5f1685c0ef7" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.38:8080: connect: connection refused" Apr 16 18:00:45.817417 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:45.816990 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-8493d-predictor-77b4959486-7d762" podUID="8fd58d8c-86f2-440a-bb41-f68159eed96b" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.37:8080: connect: connection refused" Apr 16 18:00:46.060439 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:46.060403 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-8493d-5fc7d65d87-6gpbl" podUID="fae30821-c57c-42d5-b7cd-9d3f302091bc" 
containerName="ensemble-graph-8493d" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:00:46.096328 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:46.096236 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-ed576-predictor-78dc55c9fb-zr27l" podUID="c85f1a3e-ffb7-4774-9193-1518e14433dc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.42:8080: connect: connection refused" Apr 16 18:00:46.096328 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:46.096295 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-ed576-predictor-65c7fd45bf-jfsc8" podUID="a6854379-2a41-4136-8e6f-2efba510f34f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.43:8080: connect: connection refused" Apr 16 18:00:47.580954 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:47.580927 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-8493d-predictor-788f857f9f-wlsz5" Apr 16 18:00:48.105496 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:48.105461 2560 generic.go:358] "Generic (PLEG): container finished" podID="13a91386-926e-494b-8b84-b5f1685c0ef7" containerID="0551083636c03a7d3511d979ddecba7fd5469c251b815c4f49be1fae42ce1fe3" exitCode=0 Apr 16 18:00:48.105687 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:48.105523 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-8493d-predictor-788f857f9f-wlsz5" Apr 16 18:00:48.105687 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:48.105547 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-8493d-predictor-788f857f9f-wlsz5" event={"ID":"13a91386-926e-494b-8b84-b5f1685c0ef7","Type":"ContainerDied","Data":"0551083636c03a7d3511d979ddecba7fd5469c251b815c4f49be1fae42ce1fe3"} Apr 16 18:00:48.105687 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:48.105591 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-8493d-predictor-788f857f9f-wlsz5" event={"ID":"13a91386-926e-494b-8b84-b5f1685c0ef7","Type":"ContainerDied","Data":"1ce9d76d10ae98548a41a6408d7fcae5966368c7c5871a784f9f74fb3eae8375"} Apr 16 18:00:48.105687 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:48.105611 2560 scope.go:117] "RemoveContainer" containerID="0551083636c03a7d3511d979ddecba7fd5469c251b815c4f49be1fae42ce1fe3" Apr 16 18:00:48.120359 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:48.120335 2560 scope.go:117] "RemoveContainer" containerID="0551083636c03a7d3511d979ddecba7fd5469c251b815c4f49be1fae42ce1fe3" Apr 16 18:00:48.120688 ip-10-0-134-233 kubenswrapper[2560]: E0416 18:00:48.120664 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0551083636c03a7d3511d979ddecba7fd5469c251b815c4f49be1fae42ce1fe3\": container with ID starting with 0551083636c03a7d3511d979ddecba7fd5469c251b815c4f49be1fae42ce1fe3 not found: ID does not exist" containerID="0551083636c03a7d3511d979ddecba7fd5469c251b815c4f49be1fae42ce1fe3" Apr 16 18:00:48.120783 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:48.120697 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0551083636c03a7d3511d979ddecba7fd5469c251b815c4f49be1fae42ce1fe3"} err="failed to get container status 
\"0551083636c03a7d3511d979ddecba7fd5469c251b815c4f49be1fae42ce1fe3\": rpc error: code = NotFound desc = could not find container \"0551083636c03a7d3511d979ddecba7fd5469c251b815c4f49be1fae42ce1fe3\": container with ID starting with 0551083636c03a7d3511d979ddecba7fd5469c251b815c4f49be1fae42ce1fe3 not found: ID does not exist" Apr 16 18:00:48.163686 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:48.163627 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-8493d-predictor-788f857f9f-wlsz5"] Apr 16 18:00:48.169338 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:48.169315 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-8493d-predictor-788f857f9f-wlsz5"] Apr 16 18:00:48.540323 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:48.540300 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-8493d-predictor-77b4959486-7d762" Apr 16 18:00:49.111278 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:49.111241 2560 generic.go:358] "Generic (PLEG): container finished" podID="8fd58d8c-86f2-440a-bb41-f68159eed96b" containerID="2d40130c557ae82db18eb6efebabb9facc3be242941bb9c8ceb20496409837dd" exitCode=0 Apr 16 18:00:49.111742 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:49.111313 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-8493d-predictor-77b4959486-7d762" Apr 16 18:00:49.111742 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:49.111324 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-8493d-predictor-77b4959486-7d762" event={"ID":"8fd58d8c-86f2-440a-bb41-f68159eed96b","Type":"ContainerDied","Data":"2d40130c557ae82db18eb6efebabb9facc3be242941bb9c8ceb20496409837dd"} Apr 16 18:00:49.111742 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:49.111366 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-8493d-predictor-77b4959486-7d762" event={"ID":"8fd58d8c-86f2-440a-bb41-f68159eed96b","Type":"ContainerDied","Data":"7afa11cb4e7bb0a9dc17652f80d87d5b30a09ae53bb0078d1d4805556503de80"} Apr 16 18:00:49.111742 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:49.111387 2560 scope.go:117] "RemoveContainer" containerID="2d40130c557ae82db18eb6efebabb9facc3be242941bb9c8ceb20496409837dd" Apr 16 18:00:49.121060 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:49.121041 2560 scope.go:117] "RemoveContainer" containerID="2d40130c557ae82db18eb6efebabb9facc3be242941bb9c8ceb20496409837dd" Apr 16 18:00:49.121332 ip-10-0-134-233 kubenswrapper[2560]: E0416 18:00:49.121307 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d40130c557ae82db18eb6efebabb9facc3be242941bb9c8ceb20496409837dd\": container with ID starting with 2d40130c557ae82db18eb6efebabb9facc3be242941bb9c8ceb20496409837dd not found: ID does not exist" containerID="2d40130c557ae82db18eb6efebabb9facc3be242941bb9c8ceb20496409837dd" Apr 16 18:00:49.121382 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:49.121334 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d40130c557ae82db18eb6efebabb9facc3be242941bb9c8ceb20496409837dd"} err="failed to get container status \"2d40130c557ae82db18eb6efebabb9facc3be242941bb9c8ceb20496409837dd\": rpc error: code = NotFound desc = could not find container 
\"2d40130c557ae82db18eb6efebabb9facc3be242941bb9c8ceb20496409837dd\": container with ID starting with 2d40130c557ae82db18eb6efebabb9facc3be242941bb9c8ceb20496409837dd not found: ID does not exist" Apr 16 18:00:49.140349 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:49.140323 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-8493d-predictor-77b4959486-7d762"] Apr 16 18:00:49.144690 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:49.144668 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-8493d-predictor-77b4959486-7d762"] Apr 16 18:00:50.162479 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:50.162443 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13a91386-926e-494b-8b84-b5f1685c0ef7" path="/var/lib/kubelet/pods/13a91386-926e-494b-8b84-b5f1685c0ef7/volumes" Apr 16 18:00:50.162945 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:50.162719 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fd58d8c-86f2-440a-bb41-f68159eed96b" path="/var/lib/kubelet/pods/8fd58d8c-86f2-440a-bb41-f68159eed96b/volumes" Apr 16 18:00:51.060544 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:51.060510 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-8493d-5fc7d65d87-6gpbl" podUID="fae30821-c57c-42d5-b7cd-9d3f302091bc" containerName="ensemble-graph-8493d" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:00:54.985103 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:54.985056 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-5e299-predictor-75c4868959-6fvvq" podUID="cb206248-a3bd-489f-802e-7e9f42dc6dec" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.40:8080: connect: connection refused" Apr 16 18:00:54.985513 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:54.985056 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-5e299-predictor-7d5dc4956b-d27ll" podUID="832af96b-ab13-4a5e-a183-893d2434cdc1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.39:8080: connect: connection refused" Apr 16 18:00:56.061328 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:56.061289 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-8493d-5fc7d65d87-6gpbl" podUID="fae30821-c57c-42d5-b7cd-9d3f302091bc" containerName="ensemble-graph-8493d" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:00:56.061727 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:56.061412 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/ensemble-graph-8493d-5fc7d65d87-6gpbl" Apr 16 18:00:56.096325 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:56.096287 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-ed576-predictor-78dc55c9fb-zr27l" podUID="c85f1a3e-ffb7-4774-9193-1518e14433dc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.42:8080: connect: connection refused" Apr 16 18:00:56.096492 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:00:56.096286 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-ed576-predictor-65c7fd45bf-jfsc8" podUID="a6854379-2a41-4136-8e6f-2efba510f34f" containerName="kserve-container" probeResult="failure" output="dial 
tcp 10.133.0.43:8080: connect: connection refused" Apr 16 18:01:01.060967 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:01.060924 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-8493d-5fc7d65d87-6gpbl" podUID="fae30821-c57c-42d5-b7cd-9d3f302091bc" containerName="ensemble-graph-8493d" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:01:04.985217 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:04.985183 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-5e299-predictor-75c4868959-6fvvq" Apr 16 18:01:04.985758 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:04.985631 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-5e299-predictor-7d5dc4956b-d27ll" Apr 16 18:01:06.061077 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:06.061039 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-8493d-5fc7d65d87-6gpbl" podUID="fae30821-c57c-42d5-b7cd-9d3f302091bc" containerName="ensemble-graph-8493d" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:01:06.096806 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:06.096772 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-ed576-predictor-65c7fd45bf-jfsc8" podUID="a6854379-2a41-4136-8e6f-2efba510f34f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.43:8080: connect: connection refused" Apr 16 18:01:06.096968 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:06.096772 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-ed576-predictor-78dc55c9fb-zr27l" podUID="c85f1a3e-ffb7-4774-9193-1518e14433dc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.42:8080: connect: connection refused" Apr 16 18:01:11.060327 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:11.060292 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-8493d-5fc7d65d87-6gpbl" podUID="fae30821-c57c-42d5-b7cd-9d3f302091bc" containerName="ensemble-graph-8493d" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:01:14.193163 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:14.193132 2560 generic.go:358] "Generic (PLEG): container finished" podID="fae30821-c57c-42d5-b7cd-9d3f302091bc" containerID="12e7a651fdc4fb9e6485107e7fe64d4d8fab63e856edee16ffac07db86752a00" exitCode=0 Apr 16 18:01:14.193461 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:14.193200 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-8493d-5fc7d65d87-6gpbl" event={"ID":"fae30821-c57c-42d5-b7cd-9d3f302091bc","Type":"ContainerDied","Data":"12e7a651fdc4fb9e6485107e7fe64d4d8fab63e856edee16ffac07db86752a00"} Apr 16 18:01:14.193461 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:14.193238 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-8493d-5fc7d65d87-6gpbl" event={"ID":"fae30821-c57c-42d5-b7cd-9d3f302091bc","Type":"ContainerDied","Data":"c5ca20453d2a313dabc5d47d726c4e91f2ee97c9d58ef91a3f25dfa50d831351"} Apr 16 18:01:14.193461 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:14.193248 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5ca20453d2a313dabc5d47d726c4e91f2ee97c9d58ef91a3f25dfa50d831351" Apr 16 
18:01:14.199747 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:14.199731 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-8493d-5fc7d65d87-6gpbl" Apr 16 18:01:14.231707 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:14.231687 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fae30821-c57c-42d5-b7cd-9d3f302091bc-proxy-tls\") pod \"fae30821-c57c-42d5-b7cd-9d3f302091bc\" (UID: \"fae30821-c57c-42d5-b7cd-9d3f302091bc\") " Apr 16 18:01:14.231818 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:14.231741 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fae30821-c57c-42d5-b7cd-9d3f302091bc-openshift-service-ca-bundle\") pod \"fae30821-c57c-42d5-b7cd-9d3f302091bc\" (UID: \"fae30821-c57c-42d5-b7cd-9d3f302091bc\") " Apr 16 18:01:14.232109 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:14.232089 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fae30821-c57c-42d5-b7cd-9d3f302091bc-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "fae30821-c57c-42d5-b7cd-9d3f302091bc" (UID: "fae30821-c57c-42d5-b7cd-9d3f302091bc"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 18:01:14.233763 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:14.233736 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fae30821-c57c-42d5-b7cd-9d3f302091bc-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fae30821-c57c-42d5-b7cd-9d3f302091bc" (UID: "fae30821-c57c-42d5-b7cd-9d3f302091bc"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 18:01:14.333249 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:14.333178 2560 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fae30821-c57c-42d5-b7cd-9d3f302091bc-proxy-tls\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 18:01:14.333249 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:14.333204 2560 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fae30821-c57c-42d5-b7cd-9d3f302091bc-openshift-service-ca-bundle\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 18:01:15.197069 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:15.197037 2560 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-8493d-5fc7d65d87-6gpbl" Apr 16 18:01:15.218479 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:15.218452 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-8493d-5fc7d65d87-6gpbl"] Apr 16 18:01:15.235104 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:15.235081 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-8493d-5fc7d65d87-6gpbl"] Apr 16 18:01:16.096757 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:16.096721 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-ed576-predictor-65c7fd45bf-jfsc8" podUID="a6854379-2a41-4136-8e6f-2efba510f34f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.43:8080: connect: connection refused" Apr 16 18:01:16.096951 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:16.096720 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-ed576-predictor-78dc55c9fb-zr27l" podUID="c85f1a3e-ffb7-4774-9193-1518e14433dc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.42:8080: connect: connection refused" Apr 16 18:01:16.162468 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:16.162430 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fae30821-c57c-42d5-b7cd-9d3f302091bc" path="/var/lib/kubelet/pods/fae30821-c57c-42d5-b7cd-9d3f302091bc/volumes" Apr 16 18:01:23.510848 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:23.510754 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/sequence-graph-5e299-6f5777484f-5l654"] Apr 16 18:01:23.511222 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:23.511145 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="13a91386-926e-494b-8b84-b5f1685c0ef7" containerName="kserve-container" Apr 16 18:01:23.511222 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:23.511158 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="13a91386-926e-494b-8b84-b5f1685c0ef7" containerName="kserve-container" Apr 16 18:01:23.511222 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:23.511173 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8fd58d8c-86f2-440a-bb41-f68159eed96b" containerName="kserve-container" Apr 16 18:01:23.511222 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:23.511180 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd58d8c-86f2-440a-bb41-f68159eed96b" containerName="kserve-container" Apr 16 18:01:23.511222 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:23.511195 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fae30821-c57c-42d5-b7cd-9d3f302091bc" containerName="ensemble-graph-8493d" Apr 16 18:01:23.511222 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:23.511201 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="fae30821-c57c-42d5-b7cd-9d3f302091bc" containerName="ensemble-graph-8493d" Apr 16 18:01:23.511406 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:23.511253 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="8fd58d8c-86f2-440a-bb41-f68159eed96b" containerName="kserve-container" Apr 16 18:01:23.511406 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:23.511268 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="fae30821-c57c-42d5-b7cd-9d3f302091bc" containerName="ensemble-graph-8493d" Apr 16 18:01:23.511406 
ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:23.511277 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="13a91386-926e-494b-8b84-b5f1685c0ef7" containerName="kserve-container" Apr 16 18:01:23.515494 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:23.515472 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-5e299-6f5777484f-5l654" Apr 16 18:01:23.518888 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:23.518865 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-5e299-serving-cert\"" Apr 16 18:01:23.519183 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:23.519169 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-5e299-kube-rbac-proxy-sar-config\"" Apr 16 18:01:23.519474 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:23.519457 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"openshift-service-ca.crt\"" Apr 16 18:01:23.524814 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:23.524787 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-5e299-6f5777484f-5l654"] Apr 16 18:01:23.612435 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:23.612391 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4bf988b8-2466-4ade-85d7-ee68867a3582-openshift-service-ca-bundle\") pod \"sequence-graph-5e299-6f5777484f-5l654\" (UID: \"4bf988b8-2466-4ade-85d7-ee68867a3582\") " pod="kserve-ci-e2e-test/sequence-graph-5e299-6f5777484f-5l654" Apr 16 18:01:23.612613 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:23.612449 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4bf988b8-2466-4ade-85d7-ee68867a3582-proxy-tls\") pod \"sequence-graph-5e299-6f5777484f-5l654\" (UID: \"4bf988b8-2466-4ade-85d7-ee68867a3582\") " pod="kserve-ci-e2e-test/sequence-graph-5e299-6f5777484f-5l654" Apr 16 18:01:23.713173 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:23.713139 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4bf988b8-2466-4ade-85d7-ee68867a3582-openshift-service-ca-bundle\") pod \"sequence-graph-5e299-6f5777484f-5l654\" (UID: \"4bf988b8-2466-4ade-85d7-ee68867a3582\") " pod="kserve-ci-e2e-test/sequence-graph-5e299-6f5777484f-5l654" Apr 16 18:01:23.713342 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:23.713179 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4bf988b8-2466-4ade-85d7-ee68867a3582-proxy-tls\") pod \"sequence-graph-5e299-6f5777484f-5l654\" (UID: \"4bf988b8-2466-4ade-85d7-ee68867a3582\") " pod="kserve-ci-e2e-test/sequence-graph-5e299-6f5777484f-5l654" Apr 16 18:01:23.713342 ip-10-0-134-233 kubenswrapper[2560]: E0416 18:01:23.713335 2560 secret.go:189] Couldn't get secret kserve-ci-e2e-test/sequence-graph-5e299-serving-cert: secret "sequence-graph-5e299-serving-cert" not found Apr 16 18:01:23.713432 ip-10-0-134-233 kubenswrapper[2560]: E0416 18:01:23.713390 2560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bf988b8-2466-4ade-85d7-ee68867a3582-proxy-tls 
podName:4bf988b8-2466-4ade-85d7-ee68867a3582 nodeName:}" failed. No retries permitted until 2026-04-16 18:01:24.213373733 +0000 UTC m=+1252.576582585 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/4bf988b8-2466-4ade-85d7-ee68867a3582-proxy-tls") pod "sequence-graph-5e299-6f5777484f-5l654" (UID: "4bf988b8-2466-4ade-85d7-ee68867a3582") : secret "sequence-graph-5e299-serving-cert" not found Apr 16 18:01:23.714017 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:23.713992 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4bf988b8-2466-4ade-85d7-ee68867a3582-openshift-service-ca-bundle\") pod \"sequence-graph-5e299-6f5777484f-5l654\" (UID: \"4bf988b8-2466-4ade-85d7-ee68867a3582\") " pod="kserve-ci-e2e-test/sequence-graph-5e299-6f5777484f-5l654" Apr 16 18:01:24.218920 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:24.218883 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4bf988b8-2466-4ade-85d7-ee68867a3582-proxy-tls\") pod \"sequence-graph-5e299-6f5777484f-5l654\" (UID: \"4bf988b8-2466-4ade-85d7-ee68867a3582\") " pod="kserve-ci-e2e-test/sequence-graph-5e299-6f5777484f-5l654" Apr 16 18:01:24.219138 ip-10-0-134-233 kubenswrapper[2560]: E0416 18:01:24.219053 2560 secret.go:189] Couldn't get secret kserve-ci-e2e-test/sequence-graph-5e299-serving-cert: secret "sequence-graph-5e299-serving-cert" not found Apr 16 18:01:24.219138 ip-10-0-134-233 kubenswrapper[2560]: E0416 18:01:24.219129 2560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bf988b8-2466-4ade-85d7-ee68867a3582-proxy-tls podName:4bf988b8-2466-4ade-85d7-ee68867a3582 nodeName:}" failed. No retries permitted until 2026-04-16 18:01:25.219106939 +0000 UTC m=+1253.582315797 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/4bf988b8-2466-4ade-85d7-ee68867a3582-proxy-tls") pod "sequence-graph-5e299-6f5777484f-5l654" (UID: "4bf988b8-2466-4ade-85d7-ee68867a3582") : secret "sequence-graph-5e299-serving-cert" not found Apr 16 18:01:25.228336 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:25.228294 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4bf988b8-2466-4ade-85d7-ee68867a3582-proxy-tls\") pod \"sequence-graph-5e299-6f5777484f-5l654\" (UID: \"4bf988b8-2466-4ade-85d7-ee68867a3582\") " pod="kserve-ci-e2e-test/sequence-graph-5e299-6f5777484f-5l654" Apr 16 18:01:25.230820 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:25.230795 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4bf988b8-2466-4ade-85d7-ee68867a3582-proxy-tls\") pod \"sequence-graph-5e299-6f5777484f-5l654\" (UID: \"4bf988b8-2466-4ade-85d7-ee68867a3582\") " pod="kserve-ci-e2e-test/sequence-graph-5e299-6f5777484f-5l654" Apr 16 18:01:25.325914 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:25.325880 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-5e299-6f5777484f-5l654" Apr 16 18:01:25.446349 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:25.446317 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-5e299-6f5777484f-5l654"] Apr 16 18:01:25.450578 ip-10-0-134-233 kubenswrapper[2560]: W0416 18:01:25.450549 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bf988b8_2466_4ade_85d7_ee68867a3582.slice/crio-e383f5d59a0dc4b48f5324f0728f6a5af0e273b3a427c9af1fe6b97df39d42a2 WatchSource:0}: Error finding container e383f5d59a0dc4b48f5324f0728f6a5af0e273b3a427c9af1fe6b97df39d42a2: Status 404 returned error can't find the container with id e383f5d59a0dc4b48f5324f0728f6a5af0e273b3a427c9af1fe6b97df39d42a2 Apr 16 18:01:26.096633 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:26.096584 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-ed576-predictor-78dc55c9fb-zr27l" podUID="c85f1a3e-ffb7-4774-9193-1518e14433dc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.42:8080: connect: connection refused" Apr 16 18:01:26.096807 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:26.096584 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-ed576-predictor-65c7fd45bf-jfsc8" podUID="a6854379-2a41-4136-8e6f-2efba510f34f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.43:8080: connect: connection refused" Apr 16 18:01:26.236208 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:26.236165 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-5e299-6f5777484f-5l654" event={"ID":"4bf988b8-2466-4ade-85d7-ee68867a3582","Type":"ContainerStarted","Data":"3f87e7fdc7c527f3b29b1d1ad8fb319debeba55bd38b73e566fc81c6ce21bf9f"} Apr 16 18:01:26.236208 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:26.236212 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-5e299-6f5777484f-5l654" event={"ID":"4bf988b8-2466-4ade-85d7-ee68867a3582","Type":"ContainerStarted","Data":"e383f5d59a0dc4b48f5324f0728f6a5af0e273b3a427c9af1fe6b97df39d42a2"} Apr 16 18:01:26.236680 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:26.236298 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-5e299-6f5777484f-5l654" Apr 16 18:01:26.252880 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:26.252816 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/sequence-graph-5e299-6f5777484f-5l654" podStartSLOduration=3.252802921 podStartE2EDuration="3.252802921s" podCreationTimestamp="2026-04-16 18:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:01:26.252101949 +0000 UTC m=+1254.615310816" watchObservedRunningTime="2026-04-16 18:01:26.252802921 +0000 UTC m=+1254.616011802" Apr 16 18:01:32.244489 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:32.244466 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/sequence-graph-5e299-6f5777484f-5l654" Apr 16 18:01:33.557608 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:33.557569 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["kserve-ci-e2e-test/sequence-graph-5e299-6f5777484f-5l654"] Apr 16 18:01:33.558071 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:33.557891 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/sequence-graph-5e299-6f5777484f-5l654" podUID="4bf988b8-2466-4ade-85d7-ee68867a3582" containerName="sequence-graph-5e299" containerID="cri-o://3f87e7fdc7c527f3b29b1d1ad8fb319debeba55bd38b73e566fc81c6ce21bf9f" gracePeriod=30 Apr 16 18:01:33.678674 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:33.678641 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-5e299-predictor-7d5dc4956b-d27ll"] Apr 16 18:01:33.678928 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:33.678903 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-5e299-predictor-7d5dc4956b-d27ll" podUID="832af96b-ab13-4a5e-a183-893d2434cdc1" containerName="kserve-container" containerID="cri-o://80f3eb959d7ca6ee619bf4ff8fbc552b79d88040da11af8c0d4189706efb8de4" gracePeriod=30 Apr 16 18:01:33.690987 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:33.690950 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-54da1-predictor-59846b8c76-rhmxv"] Apr 16 18:01:33.693889 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:33.693869 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-54da1-predictor-59846b8c76-rhmxv" Apr 16 18:01:33.704090 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:33.704057 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-54da1-predictor-59846b8c76-rhmxv"] Apr 16 18:01:33.705297 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:33.705280 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-54da1-predictor-59846b8c76-rhmxv" Apr 16 18:01:33.783576 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:33.783544 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-5e299-predictor-75c4868959-6fvvq"] Apr 16 18:01:33.783800 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:33.783776 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-5e299-predictor-75c4868959-6fvvq" podUID="cb206248-a3bd-489f-802e-7e9f42dc6dec" containerName="kserve-container" containerID="cri-o://df75d9871fb3603e0cd868d0b6c3e541b4e828732fb07be904ed0b425e47b62b" gracePeriod=30 Apr 16 18:01:33.819034 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:33.818952 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-54da1-predictor-bf96fd7c-p48j9"] Apr 16 18:01:33.821822 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:33.821803 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-54da1-predictor-bf96fd7c-p48j9" Apr 16 18:01:33.833197 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:33.833174 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-54da1-predictor-bf96fd7c-p48j9" Apr 16 18:01:33.834482 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:33.834463 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-54da1-predictor-bf96fd7c-p48j9"] Apr 16 18:01:33.854995 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:33.854965 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-54da1-predictor-59846b8c76-rhmxv"] Apr 16 18:01:33.858553 ip-10-0-134-233 kubenswrapper[2560]: W0416 18:01:33.858514 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50869d0c_4050_4d56_9347_967234a5d825.slice/crio-fad8b210527bda92d9d415c4776710cafc6c6658351a09f033ca8cad59b8865e WatchSource:0}: Error finding container fad8b210527bda92d9d415c4776710cafc6c6658351a09f033ca8cad59b8865e: Status 404 returned error can't find the container with id fad8b210527bda92d9d415c4776710cafc6c6658351a09f033ca8cad59b8865e Apr 16 18:01:33.965476 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:33.965382 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-54da1-predictor-bf96fd7c-p48j9"] Apr 16 18:01:33.968130 ip-10-0-134-233 kubenswrapper[2560]: W0416 18:01:33.968103 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24139345_90c9_4244_a6e4_c3b54534534b.slice/crio-127930ea7314b0fe1f8cbc49a4494287ef2d0009599a5e65683fe0d91490e4c3 WatchSource:0}: Error finding container 127930ea7314b0fe1f8cbc49a4494287ef2d0009599a5e65683fe0d91490e4c3: Status 404 returned error can't find the container with id 127930ea7314b0fe1f8cbc49a4494287ef2d0009599a5e65683fe0d91490e4c3 Apr 16 18:01:34.265504 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:34.265408 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-54da1-predictor-bf96fd7c-p48j9" event={"ID":"24139345-90c9-4244-a6e4-c3b54534534b","Type":"ContainerStarted","Data":"c2eb3adc2fc76599c06dfe076894db79b8929d1dc94edf31a610a95ded43a6e4"} Apr 16 18:01:34.265504 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:34.265446 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-54da1-predictor-bf96fd7c-p48j9" event={"ID":"24139345-90c9-4244-a6e4-c3b54534534b","Type":"ContainerStarted","Data":"127930ea7314b0fe1f8cbc49a4494287ef2d0009599a5e65683fe0d91490e4c3"} Apr 16 18:01:34.265746 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:34.265566 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-54da1-predictor-bf96fd7c-p48j9" Apr 16 18:01:34.266886 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:34.266852 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-54da1-predictor-59846b8c76-rhmxv" event={"ID":"50869d0c-4050-4d56-9347-967234a5d825","Type":"ContainerStarted","Data":"217467225bedd85aad71674beff83d6433645585fd9a6a744d57b38385bba76d"} Apr 16 18:01:34.267040 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:34.266891 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-54da1-predictor-59846b8c76-rhmxv" event={"ID":"50869d0c-4050-4d56-9347-967234a5d825","Type":"ContainerStarted","Data":"fad8b210527bda92d9d415c4776710cafc6c6658351a09f033ca8cad59b8865e"} Apr 16 18:01:34.267112 
ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:34.267049 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-54da1-predictor-59846b8c76-rhmxv" Apr 16 18:01:34.267350 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:34.267327 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-54da1-predictor-bf96fd7c-p48j9" podUID="24139345-90c9-4244-a6e4-c3b54534534b" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.46:8080: connect: connection refused" Apr 16 18:01:34.267872 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:34.267828 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-54da1-predictor-59846b8c76-rhmxv" podUID="50869d0c-4050-4d56-9347-967234a5d825" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.45:8080: connect: connection refused" Apr 16 18:01:34.283767 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:34.283728 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/error-404-isvc-54da1-predictor-bf96fd7c-p48j9" podStartSLOduration=1.283718384 podStartE2EDuration="1.283718384s" podCreationTimestamp="2026-04-16 18:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:01:34.281856374 +0000 UTC m=+1262.645065241" watchObservedRunningTime="2026-04-16 18:01:34.283718384 +0000 UTC m=+1262.646927257" Apr 16 18:01:34.298712 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:34.298670 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-54da1-predictor-59846b8c76-rhmxv" podStartSLOduration=1.298659384 podStartE2EDuration="1.298659384s" podCreationTimestamp="2026-04-16 18:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:01:34.296586028 +0000 UTC m=+1262.659794902" watchObservedRunningTime="2026-04-16 18:01:34.298659384 +0000 UTC m=+1262.661868256" Apr 16 18:01:34.984260 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:34.984215 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-5e299-predictor-7d5dc4956b-d27ll" podUID="832af96b-ab13-4a5e-a183-893d2434cdc1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.39:8080: connect: connection refused" Apr 16 18:01:34.984659 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:34.984215 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-5e299-predictor-75c4868959-6fvvq" podUID="cb206248-a3bd-489f-802e-7e9f42dc6dec" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.40:8080: connect: connection refused" Apr 16 18:01:35.270363 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:35.270264 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-54da1-predictor-bf96fd7c-p48j9" podUID="24139345-90c9-4244-a6e4-c3b54534534b" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.46:8080: connect: connection refused" Apr 16 18:01:35.270492 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:35.270370 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-54da1-predictor-59846b8c76-rhmxv" 
podUID="50869d0c-4050-4d56-9347-967234a5d825" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.45:8080: connect: connection refused" Apr 16 18:01:36.097637 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:36.097603 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-ed576-predictor-65c7fd45bf-jfsc8" Apr 16 18:01:36.098169 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:36.098082 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-ed576-predictor-78dc55c9fb-zr27l" Apr 16 18:01:37.043470 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:37.043450 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-5e299-predictor-75c4868959-6fvvq" Apr 16 18:01:37.046708 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:37.046678 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-5e299-predictor-7d5dc4956b-d27ll" Apr 16 18:01:37.243180 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:37.243095 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-5e299-6f5777484f-5l654" podUID="4bf988b8-2466-4ade-85d7-ee68867a3582" containerName="sequence-graph-5e299" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:01:37.277017 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:37.276986 2560 generic.go:358] "Generic (PLEG): container finished" podID="cb206248-a3bd-489f-802e-7e9f42dc6dec" containerID="df75d9871fb3603e0cd868d0b6c3e541b4e828732fb07be904ed0b425e47b62b" exitCode=0 Apr 16 18:01:37.277176 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:37.277051 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-5e299-predictor-75c4868959-6fvvq" Apr 16 18:01:37.277176 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:37.277074 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-5e299-predictor-75c4868959-6fvvq" event={"ID":"cb206248-a3bd-489f-802e-7e9f42dc6dec","Type":"ContainerDied","Data":"df75d9871fb3603e0cd868d0b6c3e541b4e828732fb07be904ed0b425e47b62b"} Apr 16 18:01:37.277176 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:37.277114 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-5e299-predictor-75c4868959-6fvvq" event={"ID":"cb206248-a3bd-489f-802e-7e9f42dc6dec","Type":"ContainerDied","Data":"07be189a36531229014a9612a124d5d43415ba39ad355bb3055de7414880d398"} Apr 16 18:01:37.277176 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:37.277135 2560 scope.go:117] "RemoveContainer" containerID="df75d9871fb3603e0cd868d0b6c3e541b4e828732fb07be904ed0b425e47b62b" Apr 16 18:01:37.278236 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:37.278219 2560 generic.go:358] "Generic (PLEG): container finished" podID="832af96b-ab13-4a5e-a183-893d2434cdc1" containerID="80f3eb959d7ca6ee619bf4ff8fbc552b79d88040da11af8c0d4189706efb8de4" exitCode=0 Apr 16 18:01:37.278308 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:37.278287 2560 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-5e299-predictor-7d5dc4956b-d27ll" Apr 16 18:01:37.278308 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:37.278294 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-5e299-predictor-7d5dc4956b-d27ll" event={"ID":"832af96b-ab13-4a5e-a183-893d2434cdc1","Type":"ContainerDied","Data":"80f3eb959d7ca6ee619bf4ff8fbc552b79d88040da11af8c0d4189706efb8de4"} Apr 16 18:01:37.278391 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:37.278320 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-5e299-predictor-7d5dc4956b-d27ll" event={"ID":"832af96b-ab13-4a5e-a183-893d2434cdc1","Type":"ContainerDied","Data":"027e03f3238a76f0f9ff637e97d5be8f6f41278f4671baaae3fc5311cdbdf049"} Apr 16 18:01:37.286882 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:37.286630 2560 scope.go:117] "RemoveContainer" containerID="df75d9871fb3603e0cd868d0b6c3e541b4e828732fb07be904ed0b425e47b62b" Apr 16 18:01:37.289203 ip-10-0-134-233 kubenswrapper[2560]: E0416 18:01:37.287884 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df75d9871fb3603e0cd868d0b6c3e541b4e828732fb07be904ed0b425e47b62b\": container with ID starting with df75d9871fb3603e0cd868d0b6c3e541b4e828732fb07be904ed0b425e47b62b not found: ID does not exist" containerID="df75d9871fb3603e0cd868d0b6c3e541b4e828732fb07be904ed0b425e47b62b" Apr 16 18:01:37.289203 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:37.287923 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df75d9871fb3603e0cd868d0b6c3e541b4e828732fb07be904ed0b425e47b62b"} err="failed to get container status \"df75d9871fb3603e0cd868d0b6c3e541b4e828732fb07be904ed0b425e47b62b\": rpc error: code = NotFound desc = could not find container \"df75d9871fb3603e0cd868d0b6c3e541b4e828732fb07be904ed0b425e47b62b\": container with ID starting with df75d9871fb3603e0cd868d0b6c3e541b4e828732fb07be904ed0b425e47b62b not found: ID does not exist" Apr 16 18:01:37.289203 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:37.287946 2560 scope.go:117] "RemoveContainer" containerID="80f3eb959d7ca6ee619bf4ff8fbc552b79d88040da11af8c0d4189706efb8de4" Apr 16 18:01:37.297972 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:37.297954 2560 scope.go:117] "RemoveContainer" containerID="80f3eb959d7ca6ee619bf4ff8fbc552b79d88040da11af8c0d4189706efb8de4" Apr 16 18:01:37.298270 ip-10-0-134-233 kubenswrapper[2560]: E0416 18:01:37.298249 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80f3eb959d7ca6ee619bf4ff8fbc552b79d88040da11af8c0d4189706efb8de4\": container with ID starting with 80f3eb959d7ca6ee619bf4ff8fbc552b79d88040da11af8c0d4189706efb8de4 not found: ID does not exist" containerID="80f3eb959d7ca6ee619bf4ff8fbc552b79d88040da11af8c0d4189706efb8de4" Apr 16 18:01:37.298359 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:37.298275 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80f3eb959d7ca6ee619bf4ff8fbc552b79d88040da11af8c0d4189706efb8de4"} err="failed to get container status \"80f3eb959d7ca6ee619bf4ff8fbc552b79d88040da11af8c0d4189706efb8de4\": rpc error: code = NotFound desc = could not find container \"80f3eb959d7ca6ee619bf4ff8fbc552b79d88040da11af8c0d4189706efb8de4\": container with ID starting with 
80f3eb959d7ca6ee619bf4ff8fbc552b79d88040da11af8c0d4189706efb8de4 not found: ID does not exist" Apr 16 18:01:37.309589 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:37.309568 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-5e299-predictor-7d5dc4956b-d27ll"] Apr 16 18:01:37.315417 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:37.315392 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-5e299-predictor-7d5dc4956b-d27ll"] Apr 16 18:01:37.331998 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:37.331974 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-5e299-predictor-75c4868959-6fvvq"] Apr 16 18:01:37.336798 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:37.336780 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-5e299-predictor-75c4868959-6fvvq"] Apr 16 18:01:38.162308 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:38.162278 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="832af96b-ab13-4a5e-a183-893d2434cdc1" path="/var/lib/kubelet/pods/832af96b-ab13-4a5e-a183-893d2434cdc1/volumes" Apr 16 18:01:38.162518 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:38.162507 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb206248-a3bd-489f-802e-7e9f42dc6dec" path="/var/lib/kubelet/pods/cb206248-a3bd-489f-802e-7e9f42dc6dec/volumes" Apr 16 18:01:42.243610 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:42.243567 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-5e299-6f5777484f-5l654" podUID="4bf988b8-2466-4ade-85d7-ee68867a3582" containerName="sequence-graph-5e299" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:01:45.270980 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:45.270933 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-54da1-predictor-59846b8c76-rhmxv" podUID="50869d0c-4050-4d56-9347-967234a5d825" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.45:8080: connect: connection refused" Apr 16 18:01:45.271449 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:45.270933 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-54da1-predictor-bf96fd7c-p48j9" podUID="24139345-90c9-4244-a6e4-c3b54534534b" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.46:8080: connect: connection refused" Apr 16 18:01:47.242917 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:47.242879 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-5e299-6f5777484f-5l654" podUID="4bf988b8-2466-4ade-85d7-ee68867a3582" containerName="sequence-graph-5e299" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:01:47.243318 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:47.242982 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-5e299-6f5777484f-5l654" Apr 16 18:01:52.242731 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:52.242693 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-5e299-6f5777484f-5l654" podUID="4bf988b8-2466-4ade-85d7-ee68867a3582" containerName="sequence-graph-5e299" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:01:54.288021 
ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:54.287985 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-ed576-6d6fdb76b5-86vh5"] Apr 16 18:01:54.288387 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:54.288332 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="832af96b-ab13-4a5e-a183-893d2434cdc1" containerName="kserve-container" Apr 16 18:01:54.288387 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:54.288345 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="832af96b-ab13-4a5e-a183-893d2434cdc1" containerName="kserve-container" Apr 16 18:01:54.288387 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:54.288370 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cb206248-a3bd-489f-802e-7e9f42dc6dec" containerName="kserve-container" Apr 16 18:01:54.288387 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:54.288375 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb206248-a3bd-489f-802e-7e9f42dc6dec" containerName="kserve-container" Apr 16 18:01:54.288515 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:54.288426 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="cb206248-a3bd-489f-802e-7e9f42dc6dec" containerName="kserve-container" Apr 16 18:01:54.288515 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:54.288434 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="832af96b-ab13-4a5e-a183-893d2434cdc1" containerName="kserve-container" Apr 16 18:01:54.292874 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:54.292852 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-ed576-6d6fdb76b5-86vh5" Apr 16 18:01:54.294878 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:54.294859 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"ensemble-graph-ed576-kube-rbac-proxy-sar-config\"" Apr 16 18:01:54.294963 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:54.294859 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"ensemble-graph-ed576-serving-cert\"" Apr 16 18:01:54.301545 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:54.301525 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-ed576-6d6fdb76b5-86vh5"] Apr 16 18:01:54.375352 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:54.375315 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0156a22f-11cf-4466-9977-49b892a7332a-openshift-service-ca-bundle\") pod \"ensemble-graph-ed576-6d6fdb76b5-86vh5\" (UID: \"0156a22f-11cf-4466-9977-49b892a7332a\") " pod="kserve-ci-e2e-test/ensemble-graph-ed576-6d6fdb76b5-86vh5" Apr 16 18:01:54.375516 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:54.375368 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0156a22f-11cf-4466-9977-49b892a7332a-proxy-tls\") pod \"ensemble-graph-ed576-6d6fdb76b5-86vh5\" (UID: \"0156a22f-11cf-4466-9977-49b892a7332a\") " pod="kserve-ci-e2e-test/ensemble-graph-ed576-6d6fdb76b5-86vh5" Apr 16 18:01:54.476575 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:54.476541 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/0156a22f-11cf-4466-9977-49b892a7332a-openshift-service-ca-bundle\") pod \"ensemble-graph-ed576-6d6fdb76b5-86vh5\" (UID: \"0156a22f-11cf-4466-9977-49b892a7332a\") " pod="kserve-ci-e2e-test/ensemble-graph-ed576-6d6fdb76b5-86vh5" Apr 16 18:01:54.476755 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:54.476601 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0156a22f-11cf-4466-9977-49b892a7332a-proxy-tls\") pod \"ensemble-graph-ed576-6d6fdb76b5-86vh5\" (UID: \"0156a22f-11cf-4466-9977-49b892a7332a\") " pod="kserve-ci-e2e-test/ensemble-graph-ed576-6d6fdb76b5-86vh5" Apr 16 18:01:54.477170 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:54.477149 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0156a22f-11cf-4466-9977-49b892a7332a-openshift-service-ca-bundle\") pod \"ensemble-graph-ed576-6d6fdb76b5-86vh5\" (UID: \"0156a22f-11cf-4466-9977-49b892a7332a\") " pod="kserve-ci-e2e-test/ensemble-graph-ed576-6d6fdb76b5-86vh5" Apr 16 18:01:54.479090 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:54.479071 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0156a22f-11cf-4466-9977-49b892a7332a-proxy-tls\") pod \"ensemble-graph-ed576-6d6fdb76b5-86vh5\" (UID: \"0156a22f-11cf-4466-9977-49b892a7332a\") " pod="kserve-ci-e2e-test/ensemble-graph-ed576-6d6fdb76b5-86vh5" Apr 16 18:01:54.604069 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:54.603976 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-ed576-6d6fdb76b5-86vh5" Apr 16 18:01:54.730068 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:54.730031 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-ed576-6d6fdb76b5-86vh5"] Apr 16 18:01:54.733466 ip-10-0-134-233 kubenswrapper[2560]: W0416 18:01:54.733426 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0156a22f_11cf_4466_9977_49b892a7332a.slice/crio-e920529bde852d8f1badd0425542cea9d5b25b4ce673ebfc53408c07a7a5208c WatchSource:0}: Error finding container e920529bde852d8f1badd0425542cea9d5b25b4ce673ebfc53408c07a7a5208c: Status 404 returned error can't find the container with id e920529bde852d8f1badd0425542cea9d5b25b4ce673ebfc53408c07a7a5208c Apr 16 18:01:55.270986 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:55.270938 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-54da1-predictor-59846b8c76-rhmxv" podUID="50869d0c-4050-4d56-9347-967234a5d825" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.45:8080: connect: connection refused" Apr 16 18:01:55.271216 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:55.270949 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-54da1-predictor-bf96fd7c-p48j9" podUID="24139345-90c9-4244-a6e4-c3b54534534b" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.46:8080: connect: connection refused" Apr 16 18:01:55.343978 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:55.343936 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-ed576-6d6fdb76b5-86vh5" 
event={"ID":"0156a22f-11cf-4466-9977-49b892a7332a","Type":"ContainerStarted","Data":"23d37442cbe780d557e52f9d1abb0413e63197b71bcf4b7b82b58ff5ddcc6873"} Apr 16 18:01:55.343978 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:55.343981 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-ed576-6d6fdb76b5-86vh5" event={"ID":"0156a22f-11cf-4466-9977-49b892a7332a","Type":"ContainerStarted","Data":"e920529bde852d8f1badd0425542cea9d5b25b4ce673ebfc53408c07a7a5208c"} Apr 16 18:01:55.344401 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:55.344057 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/ensemble-graph-ed576-6d6fdb76b5-86vh5" Apr 16 18:01:55.361870 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:55.361807 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/ensemble-graph-ed576-6d6fdb76b5-86vh5" podStartSLOduration=1.361791426 podStartE2EDuration="1.361791426s" podCreationTimestamp="2026-04-16 18:01:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:01:55.360887644 +0000 UTC m=+1283.724096519" watchObservedRunningTime="2026-04-16 18:01:55.361791426 +0000 UTC m=+1283.725000299" Apr 16 18:01:57.242919 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:01:57.242878 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-5e299-6f5777484f-5l654" podUID="4bf988b8-2466-4ade-85d7-ee68867a3582" containerName="sequence-graph-5e299" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:02:01.352933 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:01.352902 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/ensemble-graph-ed576-6d6fdb76b5-86vh5" Apr 16 18:02:02.243812 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:02.243771 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-5e299-6f5777484f-5l654" podUID="4bf988b8-2466-4ade-85d7-ee68867a3582" containerName="sequence-graph-5e299" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:02:03.703109 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:03.703087 2560 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-5e299-6f5777484f-5l654" Apr 16 18:02:03.742178 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:03.742147 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4bf988b8-2466-4ade-85d7-ee68867a3582-proxy-tls\") pod \"4bf988b8-2466-4ade-85d7-ee68867a3582\" (UID: \"4bf988b8-2466-4ade-85d7-ee68867a3582\") " Apr 16 18:02:03.742369 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:03.742209 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4bf988b8-2466-4ade-85d7-ee68867a3582-openshift-service-ca-bundle\") pod \"4bf988b8-2466-4ade-85d7-ee68867a3582\" (UID: \"4bf988b8-2466-4ade-85d7-ee68867a3582\") " Apr 16 18:02:03.742648 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:03.742617 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bf988b8-2466-4ade-85d7-ee68867a3582-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "4bf988b8-2466-4ade-85d7-ee68867a3582" (UID: "4bf988b8-2466-4ade-85d7-ee68867a3582"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 18:02:03.744120 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:03.744094 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf988b8-2466-4ade-85d7-ee68867a3582-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "4bf988b8-2466-4ade-85d7-ee68867a3582" (UID: "4bf988b8-2466-4ade-85d7-ee68867a3582"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 18:02:03.842859 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:03.842758 2560 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4bf988b8-2466-4ade-85d7-ee68867a3582-proxy-tls\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 18:02:03.842859 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:03.842788 2560 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4bf988b8-2466-4ade-85d7-ee68867a3582-openshift-service-ca-bundle\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 18:02:04.372962 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:04.372922 2560 generic.go:358] "Generic (PLEG): container finished" podID="4bf988b8-2466-4ade-85d7-ee68867a3582" containerID="3f87e7fdc7c527f3b29b1d1ad8fb319debeba55bd38b73e566fc81c6ce21bf9f" exitCode=0 Apr 16 18:02:04.373138 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:04.373024 2560 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-5e299-6f5777484f-5l654" Apr 16 18:02:04.373138 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:04.373015 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-5e299-6f5777484f-5l654" event={"ID":"4bf988b8-2466-4ade-85d7-ee68867a3582","Type":"ContainerDied","Data":"3f87e7fdc7c527f3b29b1d1ad8fb319debeba55bd38b73e566fc81c6ce21bf9f"} Apr 16 18:02:04.373138 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:04.373130 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-5e299-6f5777484f-5l654" event={"ID":"4bf988b8-2466-4ade-85d7-ee68867a3582","Type":"ContainerDied","Data":"e383f5d59a0dc4b48f5324f0728f6a5af0e273b3a427c9af1fe6b97df39d42a2"} Apr 16 18:02:04.373242 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:04.373146 2560 scope.go:117] "RemoveContainer" containerID="3f87e7fdc7c527f3b29b1d1ad8fb319debeba55bd38b73e566fc81c6ce21bf9f" Apr 16 18:02:04.382972 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:04.382953 2560 scope.go:117] "RemoveContainer" containerID="3f87e7fdc7c527f3b29b1d1ad8fb319debeba55bd38b73e566fc81c6ce21bf9f" Apr 16 18:02:04.383286 ip-10-0-134-233 kubenswrapper[2560]: E0416 18:02:04.383261 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f87e7fdc7c527f3b29b1d1ad8fb319debeba55bd38b73e566fc81c6ce21bf9f\": container with ID starting with 3f87e7fdc7c527f3b29b1d1ad8fb319debeba55bd38b73e566fc81c6ce21bf9f not found: ID does not exist" containerID="3f87e7fdc7c527f3b29b1d1ad8fb319debeba55bd38b73e566fc81c6ce21bf9f" Apr 16 18:02:04.383374 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:04.383294 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f87e7fdc7c527f3b29b1d1ad8fb319debeba55bd38b73e566fc81c6ce21bf9f"} err="failed to get container status \"3f87e7fdc7c527f3b29b1d1ad8fb319debeba55bd38b73e566fc81c6ce21bf9f\": rpc error: code = NotFound desc = could not find container \"3f87e7fdc7c527f3b29b1d1ad8fb319debeba55bd38b73e566fc81c6ce21bf9f\": container with ID starting with 3f87e7fdc7c527f3b29b1d1ad8fb319debeba55bd38b73e566fc81c6ce21bf9f not found: ID does not exist" Apr 16 18:02:04.402698 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:04.402656 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-5e299-6f5777484f-5l654"] Apr 16 18:02:04.411208 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:04.411176 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-5e299-6f5777484f-5l654"] Apr 16 18:02:05.271275 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:05.271228 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-54da1-predictor-59846b8c76-rhmxv" podUID="50869d0c-4050-4d56-9347-967234a5d825" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.45:8080: connect: connection refused" Apr 16 18:02:05.271275 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:05.271238 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-54da1-predictor-bf96fd7c-p48j9" podUID="24139345-90c9-4244-a6e4-c3b54534534b" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.46:8080: connect: connection refused" Apr 16 18:02:06.162168 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:06.162137 2560 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="4bf988b8-2466-4ade-85d7-ee68867a3582" path="/var/lib/kubelet/pods/4bf988b8-2466-4ade-85d7-ee68867a3582/volumes" Apr 16 18:02:15.271329 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:15.271282 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-54da1-predictor-59846b8c76-rhmxv" podUID="50869d0c-4050-4d56-9347-967234a5d825" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.45:8080: connect: connection refused" Apr 16 18:02:15.271821 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:15.271297 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-54da1-predictor-bf96fd7c-p48j9" podUID="24139345-90c9-4244-a6e4-c3b54534534b" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.46:8080: connect: connection refused" Apr 16 18:02:25.271586 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:25.271550 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-54da1-predictor-bf96fd7c-p48j9" Apr 16 18:02:25.272121 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:25.271611 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-54da1-predictor-59846b8c76-rhmxv" Apr 16 18:02:43.842520 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:43.842485 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/sequence-graph-54da1-594f6b7df8-f9cxp"] Apr 16 18:02:43.842960 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:43.842811 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4bf988b8-2466-4ade-85d7-ee68867a3582" containerName="sequence-graph-5e299" Apr 16 18:02:43.842960 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:43.842821 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf988b8-2466-4ade-85d7-ee68867a3582" containerName="sequence-graph-5e299" Apr 16 18:02:43.842960 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:43.842906 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="4bf988b8-2466-4ade-85d7-ee68867a3582" containerName="sequence-graph-5e299" Apr 16 18:02:43.845365 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:43.845344 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-54da1-594f6b7df8-f9cxp" Apr 16 18:02:43.847978 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:43.847958 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-54da1-serving-cert\"" Apr 16 18:02:43.848605 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:43.848586 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-54da1-kube-rbac-proxy-sar-config\"" Apr 16 18:02:43.859180 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:43.859158 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-54da1-594f6b7df8-f9cxp"] Apr 16 18:02:43.973547 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:43.973511 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b7d9203-7afe-43c8-93e2-356fa5ed2de3-openshift-service-ca-bundle\") pod \"sequence-graph-54da1-594f6b7df8-f9cxp\" (UID: \"5b7d9203-7afe-43c8-93e2-356fa5ed2de3\") " pod="kserve-ci-e2e-test/sequence-graph-54da1-594f6b7df8-f9cxp" Apr 16 18:02:43.973721 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:43.973557 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5b7d9203-7afe-43c8-93e2-356fa5ed2de3-proxy-tls\") pod \"sequence-graph-54da1-594f6b7df8-f9cxp\" (UID: \"5b7d9203-7afe-43c8-93e2-356fa5ed2de3\") " pod="kserve-ci-e2e-test/sequence-graph-54da1-594f6b7df8-f9cxp" Apr 16 18:02:44.074469 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:44.074435 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b7d9203-7afe-43c8-93e2-356fa5ed2de3-openshift-service-ca-bundle\") pod \"sequence-graph-54da1-594f6b7df8-f9cxp\" (UID: \"5b7d9203-7afe-43c8-93e2-356fa5ed2de3\") " pod="kserve-ci-e2e-test/sequence-graph-54da1-594f6b7df8-f9cxp" Apr 16 18:02:44.074633 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:44.074480 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5b7d9203-7afe-43c8-93e2-356fa5ed2de3-proxy-tls\") pod \"sequence-graph-54da1-594f6b7df8-f9cxp\" (UID: \"5b7d9203-7afe-43c8-93e2-356fa5ed2de3\") " pod="kserve-ci-e2e-test/sequence-graph-54da1-594f6b7df8-f9cxp" Apr 16 18:02:44.075239 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:44.075209 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b7d9203-7afe-43c8-93e2-356fa5ed2de3-openshift-service-ca-bundle\") pod \"sequence-graph-54da1-594f6b7df8-f9cxp\" (UID: \"5b7d9203-7afe-43c8-93e2-356fa5ed2de3\") " pod="kserve-ci-e2e-test/sequence-graph-54da1-594f6b7df8-f9cxp" Apr 16 18:02:44.077122 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:44.077101 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5b7d9203-7afe-43c8-93e2-356fa5ed2de3-proxy-tls\") pod \"sequence-graph-54da1-594f6b7df8-f9cxp\" (UID: \"5b7d9203-7afe-43c8-93e2-356fa5ed2de3\") " pod="kserve-ci-e2e-test/sequence-graph-54da1-594f6b7df8-f9cxp" Apr 16 18:02:44.156041 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:44.155962 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-54da1-594f6b7df8-f9cxp" Apr 16 18:02:44.280099 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:44.280068 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-54da1-594f6b7df8-f9cxp"] Apr 16 18:02:44.284411 ip-10-0-134-233 kubenswrapper[2560]: W0416 18:02:44.284382 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b7d9203_7afe_43c8_93e2_356fa5ed2de3.slice/crio-18ac5172ca765fc660fbd5aef72c1f865e99219595aeb6d4f7d002174a8f7824 WatchSource:0}: Error finding container 18ac5172ca765fc660fbd5aef72c1f865e99219595aeb6d4f7d002174a8f7824: Status 404 returned error can't find the container with id 18ac5172ca765fc660fbd5aef72c1f865e99219595aeb6d4f7d002174a8f7824 Apr 16 18:02:44.509153 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:44.509067 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-54da1-594f6b7df8-f9cxp" event={"ID":"5b7d9203-7afe-43c8-93e2-356fa5ed2de3","Type":"ContainerStarted","Data":"df583724c0b4563517f3185eeded61c3d33281297cffe8170f2c562021c31130"} Apr 16 18:02:44.509153 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:44.509104 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-54da1-594f6b7df8-f9cxp" event={"ID":"5b7d9203-7afe-43c8-93e2-356fa5ed2de3","Type":"ContainerStarted","Data":"18ac5172ca765fc660fbd5aef72c1f865e99219595aeb6d4f7d002174a8f7824"} Apr 16 18:02:44.509386 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:44.509205 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-54da1-594f6b7df8-f9cxp" Apr 16 18:02:44.532672 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:44.532626 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/sequence-graph-54da1-594f6b7df8-f9cxp" podStartSLOduration=1.5326119089999999 podStartE2EDuration="1.532611909s" podCreationTimestamp="2026-04-16 18:02:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:02:44.530519513 +0000 UTC m=+1332.893728386" watchObservedRunningTime="2026-04-16 18:02:44.532611909 +0000 UTC m=+1332.895820781" Apr 16 18:02:50.518055 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:02:50.518021 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/sequence-graph-54da1-594f6b7df8-f9cxp" Apr 16 18:07:32.246955 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:07:32.246889 2560 scope.go:117] "RemoveContainer" containerID="12e7a651fdc4fb9e6485107e7fe64d4d8fab63e856edee16ffac07db86752a00" Apr 16 18:10:09.035327 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:09.035293 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-ed576-6d6fdb76b5-86vh5"] Apr 16 18:10:09.035905 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:09.035524 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/ensemble-graph-ed576-6d6fdb76b5-86vh5" podUID="0156a22f-11cf-4466-9977-49b892a7332a" containerName="ensemble-graph-ed576" containerID="cri-o://23d37442cbe780d557e52f9d1abb0413e63197b71bcf4b7b82b58ff5ddcc6873" gracePeriod=30 Apr 16 18:10:09.183293 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:09.183257 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["kserve-ci-e2e-test/success-200-isvc-ed576-predictor-78dc55c9fb-zr27l"] Apr 16 18:10:09.183562 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:09.183528 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-ed576-predictor-78dc55c9fb-zr27l" podUID="c85f1a3e-ffb7-4774-9193-1518e14433dc" containerName="kserve-container" containerID="cri-o://2c1fab621892e7575e2dba53ed85d3e8482024f36c3838842430c580146bcbaa" gracePeriod=30 Apr 16 18:10:09.234555 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:09.234522 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-3b160-predictor-59b6c8c9d-6ksnz"] Apr 16 18:10:09.237896 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:09.237871 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-3b160-predictor-59b6c8c9d-6ksnz" Apr 16 18:10:09.242409 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:09.242383 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-ed576-predictor-65c7fd45bf-jfsc8"] Apr 16 18:10:09.242628 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:09.242592 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-ed576-predictor-65c7fd45bf-jfsc8" podUID="a6854379-2a41-4136-8e6f-2efba510f34f" containerName="kserve-container" containerID="cri-o://ca09b668eba72c83eaf6f184f33b2df710632d980ef67eeadca174815f4cd0a9" gracePeriod=30 Apr 16 18:10:09.248792 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:09.248776 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-3b160-predictor-59b6c8c9d-6ksnz" Apr 16 18:10:09.252939 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:09.252909 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-3b160-predictor-59b6c8c9d-6ksnz"] Apr 16 18:10:09.290912 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:09.290822 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-3b160-predictor-788d9b5895-7hb2d"] Apr 16 18:10:09.294248 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:09.294227 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-3b160-predictor-788d9b5895-7hb2d" Apr 16 18:10:09.305096 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:09.305062 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-3b160-predictor-788d9b5895-7hb2d"] Apr 16 18:10:09.310308 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:09.310019 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-3b160-predictor-788d9b5895-7hb2d" Apr 16 18:10:09.391138 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:09.391067 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-3b160-predictor-59b6c8c9d-6ksnz"] Apr 16 18:10:09.394553 ip-10-0-134-233 kubenswrapper[2560]: W0416 18:10:09.394485 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16fafe48_5326_4e35_b7f6_8a744d9661dc.slice/crio-994ad7db7615b05f84f221c321365fb1b4d5dcab5955c2bd378660876d045aab WatchSource:0}: Error finding container 994ad7db7615b05f84f221c321365fb1b4d5dcab5955c2bd378660876d045aab: Status 404 returned error can't find the container with id 994ad7db7615b05f84f221c321365fb1b4d5dcab5955c2bd378660876d045aab Apr 16 18:10:09.396564 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:09.396542 2560 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 18:10:09.456463 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:09.456433 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-3b160-predictor-788d9b5895-7hb2d"] Apr 16 18:10:09.459547 ip-10-0-134-233 kubenswrapper[2560]: W0416 18:10:09.459514 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3f57269_bb73_4cdd_a856_b36af239dfbc.slice/crio-f2ac44dfe306a4f4b227a6e3d14726f4b023978b0224cc0da09402a4b1c2e27b WatchSource:0}: Error finding container f2ac44dfe306a4f4b227a6e3d14726f4b023978b0224cc0da09402a4b1c2e27b: Status 404 returned error can't find the container with id f2ac44dfe306a4f4b227a6e3d14726f4b023978b0224cc0da09402a4b1c2e27b Apr 16 18:10:09.937751 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:09.937718 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-3b160-predictor-59b6c8c9d-6ksnz" event={"ID":"16fafe48-5326-4e35-b7f6-8a744d9661dc","Type":"ContainerStarted","Data":"3dff93dcc0a058863b0add8c2813f49390f713c0aa431a337746333b46071ca7"} Apr 16 18:10:09.937751 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:09.937753 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-3b160-predictor-59b6c8c9d-6ksnz" event={"ID":"16fafe48-5326-4e35-b7f6-8a744d9661dc","Type":"ContainerStarted","Data":"994ad7db7615b05f84f221c321365fb1b4d5dcab5955c2bd378660876d045aab"} Apr 16 18:10:09.938042 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:09.937939 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-3b160-predictor-59b6c8c9d-6ksnz" Apr 16 18:10:09.939233 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:09.939207 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-3b160-predictor-788d9b5895-7hb2d" event={"ID":"d3f57269-bb73-4cdd-a856-b36af239dfbc","Type":"ContainerStarted","Data":"7e47b17dcae567fb2da98e0985bf840d72c17d112d6bf9a5aaaf46e199350e53"} Apr 16 18:10:09.939370 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:09.939236 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-3b160-predictor-788d9b5895-7hb2d" event={"ID":"d3f57269-bb73-4cdd-a856-b36af239dfbc","Type":"ContainerStarted","Data":"f2ac44dfe306a4f4b227a6e3d14726f4b023978b0224cc0da09402a4b1c2e27b"} Apr 16 18:10:09.939370 ip-10-0-134-233 
kubenswrapper[2560]: I0416 18:10:09.939295 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-3b160-predictor-59b6c8c9d-6ksnz" podUID="16fafe48-5326-4e35-b7f6-8a744d9661dc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.49:8080: connect: connection refused" Apr 16 18:10:09.939491 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:09.939363 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-3b160-predictor-788d9b5895-7hb2d" Apr 16 18:10:09.940327 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:09.940303 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-3b160-predictor-788d9b5895-7hb2d" podUID="d3f57269-bb73-4cdd-a856-b36af239dfbc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.50:8080: connect: connection refused" Apr 16 18:10:09.955514 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:09.955470 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-3b160-predictor-59b6c8c9d-6ksnz" podStartSLOduration=0.955458434 podStartE2EDuration="955.458434ms" podCreationTimestamp="2026-04-16 18:10:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:10:09.954071964 +0000 UTC m=+1778.317280848" watchObservedRunningTime="2026-04-16 18:10:09.955458434 +0000 UTC m=+1778.318667307" Apr 16 18:10:09.970414 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:09.970377 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/error-404-isvc-3b160-predictor-788d9b5895-7hb2d" podStartSLOduration=0.970366915 podStartE2EDuration="970.366915ms" podCreationTimestamp="2026-04-16 18:10:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:10:09.968774207 +0000 UTC m=+1778.331983079" watchObservedRunningTime="2026-04-16 18:10:09.970366915 +0000 UTC m=+1778.333575787" Apr 16 18:10:10.942137 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:10.942094 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-3b160-predictor-788d9b5895-7hb2d" podUID="d3f57269-bb73-4cdd-a856-b36af239dfbc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.50:8080: connect: connection refused" Apr 16 18:10:10.942137 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:10.942124 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-3b160-predictor-59b6c8c9d-6ksnz" podUID="16fafe48-5326-4e35-b7f6-8a744d9661dc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.49:8080: connect: connection refused" Apr 16 18:10:11.351032 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:11.350988 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-ed576-6d6fdb76b5-86vh5" podUID="0156a22f-11cf-4466-9977-49b892a7332a" containerName="ensemble-graph-ed576" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:10:12.391412 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:12.391391 2560 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-ed576-predictor-65c7fd45bf-jfsc8" Apr 16 18:10:12.719106 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:12.719084 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-ed576-predictor-78dc55c9fb-zr27l" Apr 16 18:10:12.951013 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:12.950904 2560 generic.go:358] "Generic (PLEG): container finished" podID="c85f1a3e-ffb7-4774-9193-1518e14433dc" containerID="2c1fab621892e7575e2dba53ed85d3e8482024f36c3838842430c580146bcbaa" exitCode=0 Apr 16 18:10:12.951013 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:12.950963 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-ed576-predictor-78dc55c9fb-zr27l" Apr 16 18:10:12.951013 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:12.950985 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-ed576-predictor-78dc55c9fb-zr27l" event={"ID":"c85f1a3e-ffb7-4774-9193-1518e14433dc","Type":"ContainerDied","Data":"2c1fab621892e7575e2dba53ed85d3e8482024f36c3838842430c580146bcbaa"} Apr 16 18:10:12.951276 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:12.951026 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-ed576-predictor-78dc55c9fb-zr27l" event={"ID":"c85f1a3e-ffb7-4774-9193-1518e14433dc","Type":"ContainerDied","Data":"6616707ba632cdec0c9273625db5b025341d5ce823d91fa36cf1fd29d911d2fb"} Apr 16 18:10:12.951276 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:12.951049 2560 scope.go:117] "RemoveContainer" containerID="2c1fab621892e7575e2dba53ed85d3e8482024f36c3838842430c580146bcbaa" Apr 16 18:10:12.952312 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:12.952290 2560 generic.go:358] "Generic (PLEG): container finished" podID="a6854379-2a41-4136-8e6f-2efba510f34f" containerID="ca09b668eba72c83eaf6f184f33b2df710632d980ef67eeadca174815f4cd0a9" exitCode=0 Apr 16 18:10:12.952402 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:12.952338 2560 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-ed576-predictor-65c7fd45bf-jfsc8" Apr 16 18:10:12.952402 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:12.952349 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-ed576-predictor-65c7fd45bf-jfsc8" event={"ID":"a6854379-2a41-4136-8e6f-2efba510f34f","Type":"ContainerDied","Data":"ca09b668eba72c83eaf6f184f33b2df710632d980ef67eeadca174815f4cd0a9"} Apr 16 18:10:12.952402 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:12.952373 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-ed576-predictor-65c7fd45bf-jfsc8" event={"ID":"a6854379-2a41-4136-8e6f-2efba510f34f","Type":"ContainerDied","Data":"7168e2c1b512fb8a81412d57638ddeadd998c94dc357cc8ec9677d90ad99ecdc"} Apr 16 18:10:12.959522 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:12.959505 2560 scope.go:117] "RemoveContainer" containerID="2c1fab621892e7575e2dba53ed85d3e8482024f36c3838842430c580146bcbaa" Apr 16 18:10:12.959807 ip-10-0-134-233 kubenswrapper[2560]: E0416 18:10:12.959760 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c1fab621892e7575e2dba53ed85d3e8482024f36c3838842430c580146bcbaa\": container with ID starting with 2c1fab621892e7575e2dba53ed85d3e8482024f36c3838842430c580146bcbaa not found: ID does not exist" containerID="2c1fab621892e7575e2dba53ed85d3e8482024f36c3838842430c580146bcbaa" Apr 16 18:10:12.959807 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:12.959786 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c1fab621892e7575e2dba53ed85d3e8482024f36c3838842430c580146bcbaa"} err="failed to get container status \"2c1fab621892e7575e2dba53ed85d3e8482024f36c3838842430c580146bcbaa\": rpc error: code = NotFound desc = could not find container \"2c1fab621892e7575e2dba53ed85d3e8482024f36c3838842430c580146bcbaa\": container with ID starting with 2c1fab621892e7575e2dba53ed85d3e8482024f36c3838842430c580146bcbaa not found: ID does not exist" Apr 16 18:10:12.959807 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:12.959804 2560 scope.go:117] "RemoveContainer" containerID="ca09b668eba72c83eaf6f184f33b2df710632d980ef67eeadca174815f4cd0a9" Apr 16 18:10:12.966889 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:12.966859 2560 scope.go:117] "RemoveContainer" containerID="ca09b668eba72c83eaf6f184f33b2df710632d980ef67eeadca174815f4cd0a9" Apr 16 18:10:12.967130 ip-10-0-134-233 kubenswrapper[2560]: E0416 18:10:12.967109 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca09b668eba72c83eaf6f184f33b2df710632d980ef67eeadca174815f4cd0a9\": container with ID starting with ca09b668eba72c83eaf6f184f33b2df710632d980ef67eeadca174815f4cd0a9 not found: ID does not exist" containerID="ca09b668eba72c83eaf6f184f33b2df710632d980ef67eeadca174815f4cd0a9" Apr 16 18:10:12.967176 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:12.967138 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca09b668eba72c83eaf6f184f33b2df710632d980ef67eeadca174815f4cd0a9"} err="failed to get container status \"ca09b668eba72c83eaf6f184f33b2df710632d980ef67eeadca174815f4cd0a9\": rpc error: code = NotFound desc = could not find container \"ca09b668eba72c83eaf6f184f33b2df710632d980ef67eeadca174815f4cd0a9\": container with ID starting with 
ca09b668eba72c83eaf6f184f33b2df710632d980ef67eeadca174815f4cd0a9 not found: ID does not exist" Apr 16 18:10:12.985078 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:12.985054 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-ed576-predictor-65c7fd45bf-jfsc8"] Apr 16 18:10:12.988233 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:12.988215 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-ed576-predictor-65c7fd45bf-jfsc8"] Apr 16 18:10:13.010026 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:13.010003 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-ed576-predictor-78dc55c9fb-zr27l"] Apr 16 18:10:13.015275 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:13.015253 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-ed576-predictor-78dc55c9fb-zr27l"] Apr 16 18:10:14.166705 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:14.166669 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6854379-2a41-4136-8e6f-2efba510f34f" path="/var/lib/kubelet/pods/a6854379-2a41-4136-8e6f-2efba510f34f/volumes" Apr 16 18:10:14.167102 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:14.166934 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c85f1a3e-ffb7-4774-9193-1518e14433dc" path="/var/lib/kubelet/pods/c85f1a3e-ffb7-4774-9193-1518e14433dc/volumes" Apr 16 18:10:16.350942 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:16.350898 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-ed576-6d6fdb76b5-86vh5" podUID="0156a22f-11cf-4466-9977-49b892a7332a" containerName="ensemble-graph-ed576" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:10:20.942413 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:20.942373 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-3b160-predictor-59b6c8c9d-6ksnz" podUID="16fafe48-5326-4e35-b7f6-8a744d9661dc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.49:8080: connect: connection refused" Apr 16 18:10:20.942814 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:20.942379 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-3b160-predictor-788d9b5895-7hb2d" podUID="d3f57269-bb73-4cdd-a856-b36af239dfbc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.50:8080: connect: connection refused" Apr 16 18:10:21.351206 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:21.351161 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-ed576-6d6fdb76b5-86vh5" podUID="0156a22f-11cf-4466-9977-49b892a7332a" containerName="ensemble-graph-ed576" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:10:21.351382 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:21.351290 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/ensemble-graph-ed576-6d6fdb76b5-86vh5" Apr 16 18:10:26.351170 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:26.351131 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-ed576-6d6fdb76b5-86vh5" podUID="0156a22f-11cf-4466-9977-49b892a7332a" containerName="ensemble-graph-ed576" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:10:30.942757 
ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:30.942717 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-3b160-predictor-788d9b5895-7hb2d" podUID="d3f57269-bb73-4cdd-a856-b36af239dfbc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.50:8080: connect: connection refused" Apr 16 18:10:30.943161 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:30.942724 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-3b160-predictor-59b6c8c9d-6ksnz" podUID="16fafe48-5326-4e35-b7f6-8a744d9661dc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.49:8080: connect: connection refused" Apr 16 18:10:31.351278 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:31.351238 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-ed576-6d6fdb76b5-86vh5" podUID="0156a22f-11cf-4466-9977-49b892a7332a" containerName="ensemble-graph-ed576" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:10:36.350480 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:36.350436 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-ed576-6d6fdb76b5-86vh5" podUID="0156a22f-11cf-4466-9977-49b892a7332a" containerName="ensemble-graph-ed576" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:10:39.172510 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:39.172484 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-ed576-6d6fdb76b5-86vh5" Apr 16 18:10:39.261341 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:39.261310 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0156a22f-11cf-4466-9977-49b892a7332a-proxy-tls\") pod \"0156a22f-11cf-4466-9977-49b892a7332a\" (UID: \"0156a22f-11cf-4466-9977-49b892a7332a\") " Apr 16 18:10:39.261512 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:39.261360 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0156a22f-11cf-4466-9977-49b892a7332a-openshift-service-ca-bundle\") pod \"0156a22f-11cf-4466-9977-49b892a7332a\" (UID: \"0156a22f-11cf-4466-9977-49b892a7332a\") " Apr 16 18:10:39.261706 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:39.261684 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0156a22f-11cf-4466-9977-49b892a7332a-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "0156a22f-11cf-4466-9977-49b892a7332a" (UID: "0156a22f-11cf-4466-9977-49b892a7332a"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 18:10:39.263303 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:39.263282 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0156a22f-11cf-4466-9977-49b892a7332a-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0156a22f-11cf-4466-9977-49b892a7332a" (UID: "0156a22f-11cf-4466-9977-49b892a7332a"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 18:10:39.362686 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:39.362610 2560 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0156a22f-11cf-4466-9977-49b892a7332a-proxy-tls\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 18:10:39.362686 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:39.362635 2560 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0156a22f-11cf-4466-9977-49b892a7332a-openshift-service-ca-bundle\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 18:10:40.043538 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:40.043503 2560 generic.go:358] "Generic (PLEG): container finished" podID="0156a22f-11cf-4466-9977-49b892a7332a" containerID="23d37442cbe780d557e52f9d1abb0413e63197b71bcf4b7b82b58ff5ddcc6873" exitCode=0 Apr 16 18:10:40.043719 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:40.043573 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-ed576-6d6fdb76b5-86vh5" Apr 16 18:10:40.043719 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:40.043572 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-ed576-6d6fdb76b5-86vh5" event={"ID":"0156a22f-11cf-4466-9977-49b892a7332a","Type":"ContainerDied","Data":"23d37442cbe780d557e52f9d1abb0413e63197b71bcf4b7b82b58ff5ddcc6873"} Apr 16 18:10:40.043719 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:40.043678 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-ed576-6d6fdb76b5-86vh5" event={"ID":"0156a22f-11cf-4466-9977-49b892a7332a","Type":"ContainerDied","Data":"e920529bde852d8f1badd0425542cea9d5b25b4ce673ebfc53408c07a7a5208c"} Apr 16 18:10:40.043719 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:40.043698 2560 scope.go:117] "RemoveContainer" containerID="23d37442cbe780d557e52f9d1abb0413e63197b71bcf4b7b82b58ff5ddcc6873" Apr 16 18:10:40.052643 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:40.052623 2560 scope.go:117] "RemoveContainer" containerID="23d37442cbe780d557e52f9d1abb0413e63197b71bcf4b7b82b58ff5ddcc6873" Apr 16 18:10:40.052927 ip-10-0-134-233 kubenswrapper[2560]: E0416 18:10:40.052902 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23d37442cbe780d557e52f9d1abb0413e63197b71bcf4b7b82b58ff5ddcc6873\": container with ID starting with 23d37442cbe780d557e52f9d1abb0413e63197b71bcf4b7b82b58ff5ddcc6873 not found: ID does not exist" containerID="23d37442cbe780d557e52f9d1abb0413e63197b71bcf4b7b82b58ff5ddcc6873" Apr 16 18:10:40.052979 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:40.052941 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23d37442cbe780d557e52f9d1abb0413e63197b71bcf4b7b82b58ff5ddcc6873"} err="failed to get container status \"23d37442cbe780d557e52f9d1abb0413e63197b71bcf4b7b82b58ff5ddcc6873\": rpc error: code = NotFound desc = could not find container \"23d37442cbe780d557e52f9d1abb0413e63197b71bcf4b7b82b58ff5ddcc6873\": container with ID starting with 23d37442cbe780d557e52f9d1abb0413e63197b71bcf4b7b82b58ff5ddcc6873 not found: ID does not exist" Apr 16 18:10:40.066004 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:40.065983 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["kserve-ci-e2e-test/ensemble-graph-ed576-6d6fdb76b5-86vh5"] Apr 16 18:10:40.072206 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:40.072184 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-ed576-6d6fdb76b5-86vh5"] Apr 16 18:10:40.162396 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:40.162367 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0156a22f-11cf-4466-9977-49b892a7332a" path="/var/lib/kubelet/pods/0156a22f-11cf-4466-9977-49b892a7332a/volumes" Apr 16 18:10:40.942399 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:40.942354 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-3b160-predictor-788d9b5895-7hb2d" podUID="d3f57269-bb73-4cdd-a856-b36af239dfbc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.50:8080: connect: connection refused" Apr 16 18:10:40.942825 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:40.942353 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-3b160-predictor-59b6c8c9d-6ksnz" podUID="16fafe48-5326-4e35-b7f6-8a744d9661dc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.49:8080: connect: connection refused" Apr 16 18:10:50.942825 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:50.942771 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-3b160-predictor-59b6c8c9d-6ksnz" podUID="16fafe48-5326-4e35-b7f6-8a744d9661dc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.49:8080: connect: connection refused" Apr 16 18:10:50.943348 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:50.942771 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-3b160-predictor-788d9b5895-7hb2d" podUID="d3f57269-bb73-4cdd-a856-b36af239dfbc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.50:8080: connect: connection refused" Apr 16 18:10:58.490953 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:58.490919 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-54da1-594f6b7df8-f9cxp"] Apr 16 18:10:58.491341 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:58.491158 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/sequence-graph-54da1-594f6b7df8-f9cxp" podUID="5b7d9203-7afe-43c8-93e2-356fa5ed2de3" containerName="sequence-graph-54da1" containerID="cri-o://df583724c0b4563517f3185eeded61c3d33281297cffe8170f2c562021c31130" gracePeriod=30 Apr 16 18:10:58.666378 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:58.666338 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-d9a0e-predictor-8587d49fdd-wpj2j"] Apr 16 18:10:58.666926 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:58.666909 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c85f1a3e-ffb7-4774-9193-1518e14433dc" containerName="kserve-container" Apr 16 18:10:58.666992 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:58.666930 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="c85f1a3e-ffb7-4774-9193-1518e14433dc" containerName="kserve-container" Apr 16 18:10:58.666992 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:58.666949 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0156a22f-11cf-4466-9977-49b892a7332a" 
containerName="ensemble-graph-ed576" Apr 16 18:10:58.666992 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:58.666959 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="0156a22f-11cf-4466-9977-49b892a7332a" containerName="ensemble-graph-ed576" Apr 16 18:10:58.667120 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:58.667000 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a6854379-2a41-4136-8e6f-2efba510f34f" containerName="kserve-container" Apr 16 18:10:58.667120 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:58.667010 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6854379-2a41-4136-8e6f-2efba510f34f" containerName="kserve-container" Apr 16 18:10:58.667120 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:58.667090 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="a6854379-2a41-4136-8e6f-2efba510f34f" containerName="kserve-container" Apr 16 18:10:58.667120 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:58.667104 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="0156a22f-11cf-4466-9977-49b892a7332a" containerName="ensemble-graph-ed576" Apr 16 18:10:58.667120 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:58.667119 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="c85f1a3e-ffb7-4774-9193-1518e14433dc" containerName="kserve-container" Apr 16 18:10:58.671768 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:58.671744 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-d9a0e-predictor-8587d49fdd-wpj2j" Apr 16 18:10:58.682329 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:58.682299 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-d9a0e-predictor-8587d49fdd-wpj2j"] Apr 16 18:10:58.683684 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:58.683662 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-d9a0e-predictor-8587d49fdd-wpj2j" Apr 16 18:10:58.700413 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:58.700384 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-54da1-predictor-59846b8c76-rhmxv"] Apr 16 18:10:58.700731 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:58.700671 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-54da1-predictor-59846b8c76-rhmxv" podUID="50869d0c-4050-4d56-9347-967234a5d825" containerName="kserve-container" containerID="cri-o://217467225bedd85aad71674beff83d6433645585fd9a6a744d57b38385bba76d" gracePeriod=30 Apr 16 18:10:58.797757 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:58.797722 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-54da1-predictor-bf96fd7c-p48j9"] Apr 16 18:10:58.798187 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:58.798155 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-54da1-predictor-bf96fd7c-p48j9" podUID="24139345-90c9-4244-a6e4-c3b54534534b" containerName="kserve-container" containerID="cri-o://c2eb3adc2fc76599c06dfe076894db79b8929d1dc94edf31a610a95ded43a6e4" gracePeriod=30 Apr 16 18:10:58.819347 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:58.819318 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-d9a0e-predictor-8b7fd8fb9-nvzd7"] Apr 16 18:10:58.823739 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:58.823717 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-d9a0e-predictor-8b7fd8fb9-nvzd7" Apr 16 18:10:58.828122 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:58.828104 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-d9a0e-predictor-8587d49fdd-wpj2j"] Apr 16 18:10:58.831330 ip-10-0-134-233 kubenswrapper[2560]: W0416 18:10:58.831301 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3cafcbbe_191d_4b2c_9512_ad9e7c778f76.slice/crio-f172b827ed2bd958e6cbc08dfc234968a12cf18bebb41231d0609f12b5c77ef6 WatchSource:0}: Error finding container f172b827ed2bd958e6cbc08dfc234968a12cf18bebb41231d0609f12b5c77ef6: Status 404 returned error can't find the container with id f172b827ed2bd958e6cbc08dfc234968a12cf18bebb41231d0609f12b5c77ef6 Apr 16 18:10:58.833290 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:58.833271 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-d9a0e-predictor-8b7fd8fb9-nvzd7"] Apr 16 18:10:58.845853 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:58.845813 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-d9a0e-predictor-8b7fd8fb9-nvzd7" Apr 16 18:10:58.978859 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:58.978773 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-d9a0e-predictor-8b7fd8fb9-nvzd7"] Apr 16 18:10:58.981328 ip-10-0-134-233 kubenswrapper[2560]: W0416 18:10:58.981306 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod985ae6ae_2c9a_4dfa_a0fa_691306696ef9.slice/crio-e26951a7ec98fabaaf7b6a30c760e8628eb0e5b73bd45bad127fe205393f59f5 WatchSource:0}: Error finding container e26951a7ec98fabaaf7b6a30c760e8628eb0e5b73bd45bad127fe205393f59f5: Status 404 returned error can't find the container with id e26951a7ec98fabaaf7b6a30c760e8628eb0e5b73bd45bad127fe205393f59f5 Apr 16 18:10:59.113497 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:59.113448 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-d9a0e-predictor-8587d49fdd-wpj2j" event={"ID":"3cafcbbe-191d-4b2c-9512-ad9e7c778f76","Type":"ContainerStarted","Data":"e00b9efc13114369b6883de9bffdaf8c6423b5a4997c8e6a2c97ef6f4302c98a"} Apr 16 18:10:59.113497 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:59.113508 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-d9a0e-predictor-8587d49fdd-wpj2j" event={"ID":"3cafcbbe-191d-4b2c-9512-ad9e7c778f76","Type":"ContainerStarted","Data":"f172b827ed2bd958e6cbc08dfc234968a12cf18bebb41231d0609f12b5c77ef6"} Apr 16 18:10:59.113753 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:59.113722 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-d9a0e-predictor-8587d49fdd-wpj2j" Apr 16 18:10:59.114752 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:59.114719 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-d9a0e-predictor-8b7fd8fb9-nvzd7" event={"ID":"985ae6ae-2c9a-4dfa-a0fa-691306696ef9","Type":"ContainerStarted","Data":"e26951a7ec98fabaaf7b6a30c760e8628eb0e5b73bd45bad127fe205393f59f5"} Apr 16 18:10:59.115610 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:59.115579 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-d9a0e-predictor-8587d49fdd-wpj2j" podUID="3cafcbbe-191d-4b2c-9512-ad9e7c778f76" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.51:8080: connect: connection refused" Apr 16 18:10:59.146721 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:10:59.146672 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-d9a0e-predictor-8587d49fdd-wpj2j" podStartSLOduration=1.146657775 podStartE2EDuration="1.146657775s" podCreationTimestamp="2026-04-16 18:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:10:59.146039673 +0000 UTC m=+1827.509248546" watchObservedRunningTime="2026-04-16 18:10:59.146657775 +0000 UTC m=+1827.509866648" Apr 16 18:11:00.120694 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:00.120660 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-d9a0e-predictor-8b7fd8fb9-nvzd7" event={"ID":"985ae6ae-2c9a-4dfa-a0fa-691306696ef9","Type":"ContainerStarted","Data":"af6b67c862b5b53ad6f04556b3e983c618de4df640e63e2164d7e66f6827f96b"} Apr 16 
18:11:00.121175 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:00.121084 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-d9a0e-predictor-8587d49fdd-wpj2j" podUID="3cafcbbe-191d-4b2c-9512-ad9e7c778f76" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.51:8080: connect: connection refused" Apr 16 18:11:00.143046 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:00.143004 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/error-404-isvc-d9a0e-predictor-8b7fd8fb9-nvzd7" podStartSLOduration=2.142983227 podStartE2EDuration="2.142983227s" podCreationTimestamp="2026-04-16 18:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:11:00.142951776 +0000 UTC m=+1828.506160648" watchObservedRunningTime="2026-04-16 18:11:00.142983227 +0000 UTC m=+1828.506192099" Apr 16 18:11:00.517140 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:00.517053 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-54da1-594f6b7df8-f9cxp" podUID="5b7d9203-7afe-43c8-93e2-356fa5ed2de3" containerName="sequence-graph-54da1" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:11:00.943020 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:00.942977 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-3b160-predictor-59b6c8c9d-6ksnz" Apr 16 18:11:00.943192 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:00.943035 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-3b160-predictor-788d9b5895-7hb2d" Apr 16 18:11:01.124526 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:01.124497 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-d9a0e-predictor-8b7fd8fb9-nvzd7" Apr 16 18:11:01.125766 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:01.125733 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-d9a0e-predictor-8b7fd8fb9-nvzd7" podUID="985ae6ae-2c9a-4dfa-a0fa-691306696ef9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.52:8080: connect: connection refused" Apr 16 18:11:02.055804 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:02.055781 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-54da1-predictor-59846b8c76-rhmxv" Apr 16 18:11:02.129048 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:02.129016 2560 generic.go:358] "Generic (PLEG): container finished" podID="50869d0c-4050-4d56-9347-967234a5d825" containerID="217467225bedd85aad71674beff83d6433645585fd9a6a744d57b38385bba76d" exitCode=0 Apr 16 18:11:02.129554 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:02.129083 2560 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-54da1-predictor-59846b8c76-rhmxv" Apr 16 18:11:02.129554 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:02.129096 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-54da1-predictor-59846b8c76-rhmxv" event={"ID":"50869d0c-4050-4d56-9347-967234a5d825","Type":"ContainerDied","Data":"217467225bedd85aad71674beff83d6433645585fd9a6a744d57b38385bba76d"} Apr 16 18:11:02.129554 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:02.129134 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-54da1-predictor-59846b8c76-rhmxv" event={"ID":"50869d0c-4050-4d56-9347-967234a5d825","Type":"ContainerDied","Data":"fad8b210527bda92d9d415c4776710cafc6c6658351a09f033ca8cad59b8865e"} Apr 16 18:11:02.129554 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:02.129150 2560 scope.go:117] "RemoveContainer" containerID="217467225bedd85aad71674beff83d6433645585fd9a6a744d57b38385bba76d" Apr 16 18:11:02.130890 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:02.130865 2560 generic.go:358] "Generic (PLEG): container finished" podID="24139345-90c9-4244-a6e4-c3b54534534b" containerID="c2eb3adc2fc76599c06dfe076894db79b8929d1dc94edf31a610a95ded43a6e4" exitCode=0 Apr 16 18:11:02.131029 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:02.130950 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-54da1-predictor-bf96fd7c-p48j9" event={"ID":"24139345-90c9-4244-a6e4-c3b54534534b","Type":"ContainerDied","Data":"c2eb3adc2fc76599c06dfe076894db79b8929d1dc94edf31a610a95ded43a6e4"} Apr 16 18:11:02.131368 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:02.131346 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-d9a0e-predictor-8b7fd8fb9-nvzd7" podUID="985ae6ae-2c9a-4dfa-a0fa-691306696ef9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.52:8080: connect: connection refused" Apr 16 18:11:02.143032 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:02.143003 2560 scope.go:117] "RemoveContainer" containerID="217467225bedd85aad71674beff83d6433645585fd9a6a744d57b38385bba76d" Apr 16 18:11:02.143306 ip-10-0-134-233 kubenswrapper[2560]: E0416 18:11:02.143288 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"217467225bedd85aad71674beff83d6433645585fd9a6a744d57b38385bba76d\": container with ID starting with 217467225bedd85aad71674beff83d6433645585fd9a6a744d57b38385bba76d not found: ID does not exist" containerID="217467225bedd85aad71674beff83d6433645585fd9a6a744d57b38385bba76d" Apr 16 18:11:02.143373 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:02.143319 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"217467225bedd85aad71674beff83d6433645585fd9a6a744d57b38385bba76d"} err="failed to get container status \"217467225bedd85aad71674beff83d6433645585fd9a6a744d57b38385bba76d\": rpc error: code = NotFound desc = could not find container \"217467225bedd85aad71674beff83d6433645585fd9a6a744d57b38385bba76d\": container with ID starting with 217467225bedd85aad71674beff83d6433645585fd9a6a744d57b38385bba76d not found: ID does not exist" Apr 16 18:11:02.144655 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:02.144638 2560 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-54da1-predictor-bf96fd7c-p48j9" Apr 16 18:11:02.162126 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:02.162103 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-54da1-predictor-59846b8c76-rhmxv"] Apr 16 18:11:02.171714 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:02.170735 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-54da1-predictor-59846b8c76-rhmxv"] Apr 16 18:11:03.136401 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:03.136367 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-54da1-predictor-bf96fd7c-p48j9" event={"ID":"24139345-90c9-4244-a6e4-c3b54534534b","Type":"ContainerDied","Data":"127930ea7314b0fe1f8cbc49a4494287ef2d0009599a5e65683fe0d91490e4c3"} Apr 16 18:11:03.136401 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:03.136389 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-54da1-predictor-bf96fd7c-p48j9" Apr 16 18:11:03.136951 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:03.136412 2560 scope.go:117] "RemoveContainer" containerID="c2eb3adc2fc76599c06dfe076894db79b8929d1dc94edf31a610a95ded43a6e4" Apr 16 18:11:03.160540 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:03.160492 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-54da1-predictor-bf96fd7c-p48j9"] Apr 16 18:11:03.162002 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:03.161978 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-54da1-predictor-bf96fd7c-p48j9"] Apr 16 18:11:04.161549 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:04.161516 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24139345-90c9-4244-a6e4-c3b54534534b" path="/var/lib/kubelet/pods/24139345-90c9-4244-a6e4-c3b54534534b/volumes" Apr 16 18:11:04.161953 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:04.161765 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50869d0c-4050-4d56-9347-967234a5d825" path="/var/lib/kubelet/pods/50869d0c-4050-4d56-9347-967234a5d825/volumes" Apr 16 18:11:05.517122 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:05.517084 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-54da1-594f6b7df8-f9cxp" podUID="5b7d9203-7afe-43c8-93e2-356fa5ed2de3" containerName="sequence-graph-54da1" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:11:10.121789 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:10.121748 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-d9a0e-predictor-8587d49fdd-wpj2j" podUID="3cafcbbe-191d-4b2c-9512-ad9e7c778f76" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.51:8080: connect: connection refused" Apr 16 18:11:10.517111 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:10.517027 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-54da1-594f6b7df8-f9cxp" podUID="5b7d9203-7afe-43c8-93e2-356fa5ed2de3" containerName="sequence-graph-54da1" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:11:10.517281 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:10.517137 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="kserve-ci-e2e-test/sequence-graph-54da1-594f6b7df8-f9cxp" Apr 16 18:11:12.131890 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:12.131819 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-d9a0e-predictor-8b7fd8fb9-nvzd7" podUID="985ae6ae-2c9a-4dfa-a0fa-691306696ef9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.52:8080: connect: connection refused" Apr 16 18:11:15.516575 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:15.516537 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-54da1-594f6b7df8-f9cxp" podUID="5b7d9203-7afe-43c8-93e2-356fa5ed2de3" containerName="sequence-graph-54da1" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:11:19.327972 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:19.327940 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/splitter-graph-3b160-555ff4db4b-rhktz"] Apr 16 18:11:19.328336 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:19.328287 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="24139345-90c9-4244-a6e4-c3b54534534b" containerName="kserve-container" Apr 16 18:11:19.328336 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:19.328298 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="24139345-90c9-4244-a6e4-c3b54534534b" containerName="kserve-container" Apr 16 18:11:19.328336 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:19.328309 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="50869d0c-4050-4d56-9347-967234a5d825" containerName="kserve-container" Apr 16 18:11:19.328336 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:19.328315 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="50869d0c-4050-4d56-9347-967234a5d825" containerName="kserve-container" Apr 16 18:11:19.328472 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:19.328379 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="24139345-90c9-4244-a6e4-c3b54534534b" containerName="kserve-container" Apr 16 18:11:19.328472 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:19.328389 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="50869d0c-4050-4d56-9347-967234a5d825" containerName="kserve-container" Apr 16 18:11:19.332884 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:19.332808 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-3b160-555ff4db4b-rhktz" Apr 16 18:11:19.336216 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:19.336190 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"splitter-graph-3b160-serving-cert\"" Apr 16 18:11:19.336374 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:19.336356 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"splitter-graph-3b160-kube-rbac-proxy-sar-config\"" Apr 16 18:11:19.347543 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:19.347522 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-3b160-555ff4db4b-rhktz"] Apr 16 18:11:19.380149 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:19.380118 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76c6d475-d05d-4cf4-a311-08c9c7e4f5dc-openshift-service-ca-bundle\") pod \"splitter-graph-3b160-555ff4db4b-rhktz\" (UID: \"76c6d475-d05d-4cf4-a311-08c9c7e4f5dc\") " pod="kserve-ci-e2e-test/splitter-graph-3b160-555ff4db4b-rhktz" Apr 16 18:11:19.380332 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:19.380218 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/76c6d475-d05d-4cf4-a311-08c9c7e4f5dc-proxy-tls\") pod \"splitter-graph-3b160-555ff4db4b-rhktz\" (UID: \"76c6d475-d05d-4cf4-a311-08c9c7e4f5dc\") " pod="kserve-ci-e2e-test/splitter-graph-3b160-555ff4db4b-rhktz" Apr 16 18:11:19.480828 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:19.480790 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76c6d475-d05d-4cf4-a311-08c9c7e4f5dc-openshift-service-ca-bundle\") pod \"splitter-graph-3b160-555ff4db4b-rhktz\" (UID: \"76c6d475-d05d-4cf4-a311-08c9c7e4f5dc\") " pod="kserve-ci-e2e-test/splitter-graph-3b160-555ff4db4b-rhktz" Apr 16 18:11:19.481008 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:19.480905 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/76c6d475-d05d-4cf4-a311-08c9c7e4f5dc-proxy-tls\") pod \"splitter-graph-3b160-555ff4db4b-rhktz\" (UID: \"76c6d475-d05d-4cf4-a311-08c9c7e4f5dc\") " pod="kserve-ci-e2e-test/splitter-graph-3b160-555ff4db4b-rhktz" Apr 16 18:11:19.481083 ip-10-0-134-233 kubenswrapper[2560]: E0416 18:11:19.481059 2560 secret.go:189] Couldn't get secret kserve-ci-e2e-test/splitter-graph-3b160-serving-cert: secret "splitter-graph-3b160-serving-cert" not found Apr 16 18:11:19.481150 ip-10-0-134-233 kubenswrapper[2560]: E0416 18:11:19.481141 2560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76c6d475-d05d-4cf4-a311-08c9c7e4f5dc-proxy-tls podName:76c6d475-d05d-4cf4-a311-08c9c7e4f5dc nodeName:}" failed. No retries permitted until 2026-04-16 18:11:19.981123502 +0000 UTC m=+1848.344332353 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/76c6d475-d05d-4cf4-a311-08c9c7e4f5dc-proxy-tls") pod "splitter-graph-3b160-555ff4db4b-rhktz" (UID: "76c6d475-d05d-4cf4-a311-08c9c7e4f5dc") : secret "splitter-graph-3b160-serving-cert" not found Apr 16 18:11:19.481461 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:19.481444 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76c6d475-d05d-4cf4-a311-08c9c7e4f5dc-openshift-service-ca-bundle\") pod \"splitter-graph-3b160-555ff4db4b-rhktz\" (UID: \"76c6d475-d05d-4cf4-a311-08c9c7e4f5dc\") " pod="kserve-ci-e2e-test/splitter-graph-3b160-555ff4db4b-rhktz" Apr 16 18:11:19.985144 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:19.985102 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/76c6d475-d05d-4cf4-a311-08c9c7e4f5dc-proxy-tls\") pod \"splitter-graph-3b160-555ff4db4b-rhktz\" (UID: \"76c6d475-d05d-4cf4-a311-08c9c7e4f5dc\") " pod="kserve-ci-e2e-test/splitter-graph-3b160-555ff4db4b-rhktz" Apr 16 18:11:19.987690 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:19.987664 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/76c6d475-d05d-4cf4-a311-08c9c7e4f5dc-proxy-tls\") pod \"splitter-graph-3b160-555ff4db4b-rhktz\" (UID: \"76c6d475-d05d-4cf4-a311-08c9c7e4f5dc\") " pod="kserve-ci-e2e-test/splitter-graph-3b160-555ff4db4b-rhktz" Apr 16 18:11:20.121798 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:20.121754 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-d9a0e-predictor-8587d49fdd-wpj2j" podUID="3cafcbbe-191d-4b2c-9512-ad9e7c778f76" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.51:8080: connect: connection refused" Apr 16 18:11:20.242882 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:20.242787 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-3b160-555ff4db4b-rhktz" Apr 16 18:11:20.367286 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:20.367243 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-3b160-555ff4db4b-rhktz"] Apr 16 18:11:20.369927 ip-10-0-134-233 kubenswrapper[2560]: W0416 18:11:20.369901 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76c6d475_d05d_4cf4_a311_08c9c7e4f5dc.slice/crio-70413126eb50e3d17eb137db01823383a8a2804d9c4d1010a037cfc65e460abc WatchSource:0}: Error finding container 70413126eb50e3d17eb137db01823383a8a2804d9c4d1010a037cfc65e460abc: Status 404 returned error can't find the container with id 70413126eb50e3d17eb137db01823383a8a2804d9c4d1010a037cfc65e460abc Apr 16 18:11:20.517652 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:20.517567 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-54da1-594f6b7df8-f9cxp" podUID="5b7d9203-7afe-43c8-93e2-356fa5ed2de3" containerName="sequence-graph-54da1" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:11:21.196008 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:21.195972 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-3b160-555ff4db4b-rhktz" event={"ID":"76c6d475-d05d-4cf4-a311-08c9c7e4f5dc","Type":"ContainerStarted","Data":"53e01574519e3b223f91b11530061f9e86f168fd6b5740d8e17cb46c454d7c6e"} Apr 16 18:11:21.196008 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:21.196012 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-3b160-555ff4db4b-rhktz" event={"ID":"76c6d475-d05d-4cf4-a311-08c9c7e4f5dc","Type":"ContainerStarted","Data":"70413126eb50e3d17eb137db01823383a8a2804d9c4d1010a037cfc65e460abc"} Apr 16 18:11:21.196242 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:21.196139 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/splitter-graph-3b160-555ff4db4b-rhktz" Apr 16 18:11:21.231190 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:21.231116 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/splitter-graph-3b160-555ff4db4b-rhktz" podStartSLOduration=2.231099176 podStartE2EDuration="2.231099176s" podCreationTimestamp="2026-04-16 18:11:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:11:21.23064171 +0000 UTC m=+1849.593850582" watchObservedRunningTime="2026-04-16 18:11:21.231099176 +0000 UTC m=+1849.594308049" Apr 16 18:11:22.131755 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:22.131711 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-d9a0e-predictor-8b7fd8fb9-nvzd7" podUID="985ae6ae-2c9a-4dfa-a0fa-691306696ef9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.52:8080: connect: connection refused" Apr 16 18:11:25.516130 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:25.516084 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-54da1-594f6b7df8-f9cxp" podUID="5b7d9203-7afe-43c8-93e2-356fa5ed2de3" containerName="sequence-graph-54da1" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:11:27.204565 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:27.204537 2560 
kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/splitter-graph-3b160-555ff4db4b-rhktz" Apr 16 18:11:28.631233 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:28.631210 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-54da1-594f6b7df8-f9cxp" Apr 16 18:11:28.767252 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:28.767150 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5b7d9203-7afe-43c8-93e2-356fa5ed2de3-proxy-tls\") pod \"5b7d9203-7afe-43c8-93e2-356fa5ed2de3\" (UID: \"5b7d9203-7afe-43c8-93e2-356fa5ed2de3\") " Apr 16 18:11:28.767414 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:28.767268 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b7d9203-7afe-43c8-93e2-356fa5ed2de3-openshift-service-ca-bundle\") pod \"5b7d9203-7afe-43c8-93e2-356fa5ed2de3\" (UID: \"5b7d9203-7afe-43c8-93e2-356fa5ed2de3\") " Apr 16 18:11:28.767636 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:28.767604 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b7d9203-7afe-43c8-93e2-356fa5ed2de3-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "5b7d9203-7afe-43c8-93e2-356fa5ed2de3" (UID: "5b7d9203-7afe-43c8-93e2-356fa5ed2de3"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 18:11:28.769185 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:28.769161 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b7d9203-7afe-43c8-93e2-356fa5ed2de3-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "5b7d9203-7afe-43c8-93e2-356fa5ed2de3" (UID: "5b7d9203-7afe-43c8-93e2-356fa5ed2de3"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 18:11:28.868430 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:28.868392 2560 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b7d9203-7afe-43c8-93e2-356fa5ed2de3-openshift-service-ca-bundle\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 18:11:28.868430 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:28.868425 2560 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5b7d9203-7afe-43c8-93e2-356fa5ed2de3-proxy-tls\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 18:11:29.223777 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:29.223739 2560 generic.go:358] "Generic (PLEG): container finished" podID="5b7d9203-7afe-43c8-93e2-356fa5ed2de3" containerID="df583724c0b4563517f3185eeded61c3d33281297cffe8170f2c562021c31130" exitCode=0 Apr 16 18:11:29.223993 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:29.223794 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-54da1-594f6b7df8-f9cxp" event={"ID":"5b7d9203-7afe-43c8-93e2-356fa5ed2de3","Type":"ContainerDied","Data":"df583724c0b4563517f3185eeded61c3d33281297cffe8170f2c562021c31130"} Apr 16 18:11:29.223993 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:29.223825 2560 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-54da1-594f6b7df8-f9cxp" Apr 16 18:11:29.223993 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:29.223829 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-54da1-594f6b7df8-f9cxp" event={"ID":"5b7d9203-7afe-43c8-93e2-356fa5ed2de3","Type":"ContainerDied","Data":"18ac5172ca765fc660fbd5aef72c1f865e99219595aeb6d4f7d002174a8f7824"} Apr 16 18:11:29.223993 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:29.223870 2560 scope.go:117] "RemoveContainer" containerID="df583724c0b4563517f3185eeded61c3d33281297cffe8170f2c562021c31130" Apr 16 18:11:29.234515 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:29.234496 2560 scope.go:117] "RemoveContainer" containerID="df583724c0b4563517f3185eeded61c3d33281297cffe8170f2c562021c31130" Apr 16 18:11:29.234751 ip-10-0-134-233 kubenswrapper[2560]: E0416 18:11:29.234729 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df583724c0b4563517f3185eeded61c3d33281297cffe8170f2c562021c31130\": container with ID starting with df583724c0b4563517f3185eeded61c3d33281297cffe8170f2c562021c31130 not found: ID does not exist" containerID="df583724c0b4563517f3185eeded61c3d33281297cffe8170f2c562021c31130" Apr 16 18:11:29.234815 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:29.234760 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df583724c0b4563517f3185eeded61c3d33281297cffe8170f2c562021c31130"} err="failed to get container status \"df583724c0b4563517f3185eeded61c3d33281297cffe8170f2c562021c31130\": rpc error: code = NotFound desc = could not find container \"df583724c0b4563517f3185eeded61c3d33281297cffe8170f2c562021c31130\": container with ID starting with df583724c0b4563517f3185eeded61c3d33281297cffe8170f2c562021c31130 not found: ID does not exist" Apr 16 18:11:29.255486 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:29.255464 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-54da1-594f6b7df8-f9cxp"] Apr 16 18:11:29.257944 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:29.257923 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-54da1-594f6b7df8-f9cxp"] Apr 16 18:11:29.470853 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:29.470813 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-3b160-555ff4db4b-rhktz"] Apr 16 18:11:29.471140 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:29.471102 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/splitter-graph-3b160-555ff4db4b-rhktz" podUID="76c6d475-d05d-4cf4-a311-08c9c7e4f5dc" containerName="splitter-graph-3b160" containerID="cri-o://53e01574519e3b223f91b11530061f9e86f168fd6b5740d8e17cb46c454d7c6e" gracePeriod=30 Apr 16 18:11:29.696787 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:29.696751 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-3b160-predictor-59b6c8c9d-6ksnz"] Apr 16 18:11:29.697176 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:29.697030 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-3b160-predictor-59b6c8c9d-6ksnz" podUID="16fafe48-5326-4e35-b7f6-8a744d9661dc" containerName="kserve-container" containerID="cri-o://3dff93dcc0a058863b0add8c2813f49390f713c0aa431a337746333b46071ca7" 
gracePeriod=30 Apr 16 18:11:29.832993 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:29.832959 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-c5b1b-predictor-5854f8b788-s9zzw"] Apr 16 18:11:29.833351 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:29.833337 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5b7d9203-7afe-43c8-93e2-356fa5ed2de3" containerName="sequence-graph-54da1" Apr 16 18:11:29.833412 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:29.833354 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b7d9203-7afe-43c8-93e2-356fa5ed2de3" containerName="sequence-graph-54da1" Apr 16 18:11:29.833455 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:29.833424 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="5b7d9203-7afe-43c8-93e2-356fa5ed2de3" containerName="sequence-graph-54da1" Apr 16 18:11:29.837742 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:29.837723 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-c5b1b-predictor-5854f8b788-s9zzw" Apr 16 18:11:29.847116 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:29.847094 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-c5b1b-predictor-5854f8b788-s9zzw" Apr 16 18:11:29.856015 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:29.855991 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-c5b1b-predictor-5854f8b788-s9zzw"] Apr 16 18:11:29.945182 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:29.945140 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-3b160-predictor-788d9b5895-7hb2d"] Apr 16 18:11:29.945487 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:29.945460 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-3b160-predictor-788d9b5895-7hb2d" podUID="d3f57269-bb73-4cdd-a856-b36af239dfbc" containerName="kserve-container" containerID="cri-o://7e47b17dcae567fb2da98e0985bf840d72c17d112d6bf9a5aaaf46e199350e53" gracePeriod=30 Apr 16 18:11:29.985326 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:29.985298 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-c5b1b-predictor-78c899d8d5-r45k2"] Apr 16 18:11:29.989851 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:29.989815 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-c5b1b-predictor-78c899d8d5-r45k2" Apr 16 18:11:30.001301 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:30.001281 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-c5b1b-predictor-78c899d8d5-r45k2" Apr 16 18:11:30.005415 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:30.005381 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-c5b1b-predictor-78c899d8d5-r45k2"] Apr 16 18:11:30.033944 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:30.033916 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-c5b1b-predictor-5854f8b788-s9zzw"] Apr 16 18:11:30.039063 ip-10-0-134-233 kubenswrapper[2560]: W0416 18:11:30.039036 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7f6a8ed_ac6d_4bc2_9448_deec62a9beaa.slice/crio-b90960a8d2cc07c530335d37a0e95d546d02b57778c1d4c83cca2e6157670683 WatchSource:0}: Error finding container b90960a8d2cc07c530335d37a0e95d546d02b57778c1d4c83cca2e6157670683: Status 404 returned error can't find the container with id b90960a8d2cc07c530335d37a0e95d546d02b57778c1d4c83cca2e6157670683 Apr 16 18:11:30.121251 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:30.121217 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-d9a0e-predictor-8587d49fdd-wpj2j" podUID="3cafcbbe-191d-4b2c-9512-ad9e7c778f76" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.51:8080: connect: connection refused" Apr 16 18:11:30.149555 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:30.149531 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-c5b1b-predictor-78c899d8d5-r45k2"] Apr 16 18:11:30.152008 ip-10-0-134-233 kubenswrapper[2560]: W0416 18:11:30.151967 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf82c040f_64ab_4fbf_944e_1bc56cc84fb6.slice/crio-e89e71589952f1a3168aaef2d99fcb3d721a6f88937ed76d51832ea56b4ebaf1 WatchSource:0}: Error finding container e89e71589952f1a3168aaef2d99fcb3d721a6f88937ed76d51832ea56b4ebaf1: Status 404 returned error can't find the container with id e89e71589952f1a3168aaef2d99fcb3d721a6f88937ed76d51832ea56b4ebaf1 Apr 16 18:11:30.163444 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:30.163416 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b7d9203-7afe-43c8-93e2-356fa5ed2de3" path="/var/lib/kubelet/pods/5b7d9203-7afe-43c8-93e2-356fa5ed2de3/volumes" Apr 16 18:11:30.227999 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:30.227967 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-c5b1b-predictor-78c899d8d5-r45k2" event={"ID":"f82c040f-64ab-4fbf-944e-1bc56cc84fb6","Type":"ContainerStarted","Data":"6eb89158e0672f46504be3a857b9bda869771a6080b444cda674efb0cbefa1ee"} Apr 16 18:11:30.227999 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:30.228009 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-c5b1b-predictor-78c899d8d5-r45k2" event={"ID":"f82c040f-64ab-4fbf-944e-1bc56cc84fb6","Type":"ContainerStarted","Data":"e89e71589952f1a3168aaef2d99fcb3d721a6f88937ed76d51832ea56b4ebaf1"} Apr 16 18:11:30.229263 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:30.229239 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-c5b1b-predictor-5854f8b788-s9zzw" 
event={"ID":"a7f6a8ed-ac6d-4bc2-9448-deec62a9beaa","Type":"ContainerStarted","Data":"db305068fc5fbbabae25bbcac719300b2d4ee43028f7104148d844ac8c81cb0e"} Apr 16 18:11:30.229263 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:30.229270 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-c5b1b-predictor-5854f8b788-s9zzw" event={"ID":"a7f6a8ed-ac6d-4bc2-9448-deec62a9beaa","Type":"ContainerStarted","Data":"b90960a8d2cc07c530335d37a0e95d546d02b57778c1d4c83cca2e6157670683"} Apr 16 18:11:30.229533 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:30.229500 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-c5b1b-predictor-5854f8b788-s9zzw" Apr 16 18:11:30.230599 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:30.230575 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-c5b1b-predictor-5854f8b788-s9zzw" podUID="a7f6a8ed-ac6d-4bc2-9448-deec62a9beaa" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.54:8080: connect: connection refused" Apr 16 18:11:30.256563 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:30.256518 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-c5b1b-predictor-5854f8b788-s9zzw" podStartSLOduration=1.25650369 podStartE2EDuration="1.25650369s" podCreationTimestamp="2026-04-16 18:11:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:11:30.25549829 +0000 UTC m=+1858.618707164" watchObservedRunningTime="2026-04-16 18:11:30.25650369 +0000 UTC m=+1858.619712562" Apr 16 18:11:30.942694 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:30.942649 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-3b160-predictor-788d9b5895-7hb2d" podUID="d3f57269-bb73-4cdd-a856-b36af239dfbc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.50:8080: connect: connection refused" Apr 16 18:11:30.943100 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:30.942664 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-3b160-predictor-59b6c8c9d-6ksnz" podUID="16fafe48-5326-4e35-b7f6-8a744d9661dc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.49:8080: connect: connection refused" Apr 16 18:11:31.233081 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:31.232988 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-c5b1b-predictor-5854f8b788-s9zzw" podUID="a7f6a8ed-ac6d-4bc2-9448-deec62a9beaa" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.54:8080: connect: connection refused" Apr 16 18:11:31.260722 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:31.260671 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/error-404-isvc-c5b1b-predictor-78c899d8d5-r45k2" podStartSLOduration=2.260654246 podStartE2EDuration="2.260654246s" podCreationTimestamp="2026-04-16 18:11:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:11:31.259513926 +0000 UTC m=+1859.622722800" watchObservedRunningTime="2026-04-16 18:11:31.260654246 +0000 UTC m=+1859.623863119" Apr 16 18:11:32.131910 ip-10-0-134-233 
kubenswrapper[2560]: I0416 18:11:32.131857 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-d9a0e-predictor-8b7fd8fb9-nvzd7" podUID="985ae6ae-2c9a-4dfa-a0fa-691306696ef9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.52:8080: connect: connection refused" Apr 16 18:11:32.203194 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:32.203150 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-3b160-555ff4db4b-rhktz" podUID="76c6d475-d05d-4cf4-a311-08c9c7e4f5dc" containerName="splitter-graph-3b160" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:11:32.237197 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:32.237159 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-c5b1b-predictor-78c899d8d5-r45k2" Apr 16 18:11:32.238509 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:32.238486 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-c5b1b-predictor-78c899d8d5-r45k2" podUID="f82c040f-64ab-4fbf-944e-1bc56cc84fb6" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.55:8080: connect: connection refused" Apr 16 18:11:33.241660 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:33.241628 2560 generic.go:358] "Generic (PLEG): container finished" podID="d3f57269-bb73-4cdd-a856-b36af239dfbc" containerID="7e47b17dcae567fb2da98e0985bf840d72c17d112d6bf9a5aaaf46e199350e53" exitCode=0 Apr 16 18:11:33.242068 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:33.241766 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-3b160-predictor-788d9b5895-7hb2d" event={"ID":"d3f57269-bb73-4cdd-a856-b36af239dfbc","Type":"ContainerDied","Data":"7e47b17dcae567fb2da98e0985bf840d72c17d112d6bf9a5aaaf46e199350e53"} Apr 16 18:11:33.244322 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:33.244273 2560 generic.go:358] "Generic (PLEG): container finished" podID="16fafe48-5326-4e35-b7f6-8a744d9661dc" containerID="3dff93dcc0a058863b0add8c2813f49390f713c0aa431a337746333b46071ca7" exitCode=0 Apr 16 18:11:33.244969 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:33.244549 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-3b160-predictor-59b6c8c9d-6ksnz" event={"ID":"16fafe48-5326-4e35-b7f6-8a744d9661dc","Type":"ContainerDied","Data":"3dff93dcc0a058863b0add8c2813f49390f713c0aa431a337746333b46071ca7"} Apr 16 18:11:33.244969 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:33.244865 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-c5b1b-predictor-78c899d8d5-r45k2" podUID="f82c040f-64ab-4fbf-944e-1bc56cc84fb6" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.55:8080: connect: connection refused" Apr 16 18:11:33.255820 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:33.255798 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-3b160-predictor-59b6c8c9d-6ksnz" Apr 16 18:11:33.303273 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:33.303242 2560 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-3b160-predictor-788d9b5895-7hb2d" Apr 16 18:11:34.249617 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:34.249587 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-3b160-predictor-788d9b5895-7hb2d" Apr 16 18:11:34.249617 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:34.249607 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-3b160-predictor-788d9b5895-7hb2d" event={"ID":"d3f57269-bb73-4cdd-a856-b36af239dfbc","Type":"ContainerDied","Data":"f2ac44dfe306a4f4b227a6e3d14726f4b023978b0224cc0da09402a4b1c2e27b"} Apr 16 18:11:34.250172 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:34.249644 2560 scope.go:117] "RemoveContainer" containerID="7e47b17dcae567fb2da98e0985bf840d72c17d112d6bf9a5aaaf46e199350e53" Apr 16 18:11:34.250795 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:34.250769 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-3b160-predictor-59b6c8c9d-6ksnz" event={"ID":"16fafe48-5326-4e35-b7f6-8a744d9661dc","Type":"ContainerDied","Data":"994ad7db7615b05f84f221c321365fb1b4d5dcab5955c2bd378660876d045aab"} Apr 16 18:11:34.250795 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:34.250794 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-3b160-predictor-59b6c8c9d-6ksnz" Apr 16 18:11:34.257721 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:34.257701 2560 scope.go:117] "RemoveContainer" containerID="3dff93dcc0a058863b0add8c2813f49390f713c0aa431a337746333b46071ca7" Apr 16 18:11:34.274733 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:34.274704 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-3b160-predictor-788d9b5895-7hb2d"] Apr 16 18:11:34.279586 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:34.279557 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-3b160-predictor-788d9b5895-7hb2d"] Apr 16 18:11:34.298401 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:34.298367 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-3b160-predictor-59b6c8c9d-6ksnz"] Apr 16 18:11:34.304200 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:34.304176 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-3b160-predictor-59b6c8c9d-6ksnz"] Apr 16 18:11:36.162049 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:36.162008 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16fafe48-5326-4e35-b7f6-8a744d9661dc" path="/var/lib/kubelet/pods/16fafe48-5326-4e35-b7f6-8a744d9661dc/volumes" Apr 16 18:11:36.162417 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:36.162251 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3f57269-bb73-4cdd-a856-b36af239dfbc" path="/var/lib/kubelet/pods/d3f57269-bb73-4cdd-a856-b36af239dfbc/volumes" Apr 16 18:11:37.203772 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:37.203730 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-3b160-555ff4db4b-rhktz" podUID="76c6d475-d05d-4cf4-a311-08c9c7e4f5dc" containerName="splitter-graph-3b160" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:11:40.121388 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:40.121343 2560 prober.go:120] "Probe failed" 
probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-d9a0e-predictor-8587d49fdd-wpj2j" podUID="3cafcbbe-191d-4b2c-9512-ad9e7c778f76" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.51:8080: connect: connection refused" Apr 16 18:11:41.233173 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:41.233129 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-c5b1b-predictor-5854f8b788-s9zzw" podUID="a7f6a8ed-ac6d-4bc2-9448-deec62a9beaa" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.54:8080: connect: connection refused" Apr 16 18:11:42.132329 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:42.132280 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-d9a0e-predictor-8b7fd8fb9-nvzd7" podUID="985ae6ae-2c9a-4dfa-a0fa-691306696ef9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.52:8080: connect: connection refused" Apr 16 18:11:42.203205 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:42.203159 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-3b160-555ff4db4b-rhktz" podUID="76c6d475-d05d-4cf4-a311-08c9c7e4f5dc" containerName="splitter-graph-3b160" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:11:42.203383 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:42.203267 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/splitter-graph-3b160-555ff4db4b-rhktz" Apr 16 18:11:43.245877 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:43.245806 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-c5b1b-predictor-78c899d8d5-r45k2" podUID="f82c040f-64ab-4fbf-944e-1bc56cc84fb6" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.55:8080: connect: connection refused" Apr 16 18:11:47.203174 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:47.203124 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-3b160-555ff4db4b-rhktz" podUID="76c6d475-d05d-4cf4-a311-08c9c7e4f5dc" containerName="splitter-graph-3b160" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:11:50.122018 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:50.121984 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-d9a0e-predictor-8587d49fdd-wpj2j" Apr 16 18:11:51.233439 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:51.233349 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-c5b1b-predictor-5854f8b788-s9zzw" podUID="a7f6a8ed-ac6d-4bc2-9448-deec62a9beaa" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.54:8080: connect: connection refused" Apr 16 18:11:52.132818 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:52.132789 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-d9a0e-predictor-8b7fd8fb9-nvzd7" Apr 16 18:11:52.204998 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:52.204952 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-3b160-555ff4db4b-rhktz" podUID="76c6d475-d05d-4cf4-a311-08c9c7e4f5dc" containerName="splitter-graph-3b160" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:11:53.245439 
ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:53.245392 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-c5b1b-predictor-78c899d8d5-r45k2" podUID="f82c040f-64ab-4fbf-944e-1bc56cc84fb6" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.55:8080: connect: connection refused" Apr 16 18:11:57.203161 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:57.203119 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-3b160-555ff4db4b-rhktz" podUID="76c6d475-d05d-4cf4-a311-08c9c7e4f5dc" containerName="splitter-graph-3b160" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:11:59.492436 ip-10-0-134-233 kubenswrapper[2560]: E0416 18:11:59.492210 2560 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16fafe48_5326_4e35_b7f6_8a744d9661dc.slice/crio-994ad7db7615b05f84f221c321365fb1b4d5dcab5955c2bd378660876d045aab\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16fafe48_5326_4e35_b7f6_8a744d9661dc.slice/crio-3dff93dcc0a058863b0add8c2813f49390f713c0aa431a337746333b46071ca7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3f57269_bb73_4cdd_a856_b36af239dfbc.slice/crio-f2ac44dfe306a4f4b227a6e3d14726f4b023978b0224cc0da09402a4b1c2e27b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3f57269_bb73_4cdd_a856_b36af239dfbc.slice/crio-conmon-7e47b17dcae567fb2da98e0985bf840d72c17d112d6bf9a5aaaf46e199350e53.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3f57269_bb73_4cdd_a856_b36af239dfbc.slice/crio-7e47b17dcae567fb2da98e0985bf840d72c17d112d6bf9a5aaaf46e199350e53.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3f57269_bb73_4cdd_a856_b36af239dfbc.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16fafe48_5326_4e35_b7f6_8a744d9661dc.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16fafe48_5326_4e35_b7f6_8a744d9661dc.slice/crio-conmon-3dff93dcc0a058863b0add8c2813f49390f713c0aa431a337746333b46071ca7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76c6d475_d05d_4cf4_a311_08c9c7e4f5dc.slice/crio-53e01574519e3b223f91b11530061f9e86f168fd6b5740d8e17cb46c454d7c6e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76c6d475_d05d_4cf4_a311_08c9c7e4f5dc.slice/crio-conmon-53e01574519e3b223f91b11530061f9e86f168fd6b5740d8e17cb46c454d7c6e.scope\": RecentStats: unable to find data in memory cache]" Apr 16 18:11:59.492822 ip-10-0-134-233 kubenswrapper[2560]: E0416 18:11:59.492504 2560 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16fafe48_5326_4e35_b7f6_8a744d9661dc.slice/crio-3dff93dcc0a058863b0add8c2813f49390f713c0aa431a337746333b46071ca7.scope\": RecentStats: unable to 
find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3f57269_bb73_4cdd_a856_b36af239dfbc.slice/crio-f2ac44dfe306a4f4b227a6e3d14726f4b023978b0224cc0da09402a4b1c2e27b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16fafe48_5326_4e35_b7f6_8a744d9661dc.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76c6d475_d05d_4cf4_a311_08c9c7e4f5dc.slice/crio-53e01574519e3b223f91b11530061f9e86f168fd6b5740d8e17cb46c454d7c6e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3f57269_bb73_4cdd_a856_b36af239dfbc.slice/crio-conmon-7e47b17dcae567fb2da98e0985bf840d72c17d112d6bf9a5aaaf46e199350e53.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16fafe48_5326_4e35_b7f6_8a744d9661dc.slice/crio-conmon-3dff93dcc0a058863b0add8c2813f49390f713c0aa431a337746333b46071ca7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3f57269_bb73_4cdd_a856_b36af239dfbc.slice/crio-7e47b17dcae567fb2da98e0985bf840d72c17d112d6bf9a5aaaf46e199350e53.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16fafe48_5326_4e35_b7f6_8a744d9661dc.slice/crio-994ad7db7615b05f84f221c321365fb1b4d5dcab5955c2bd378660876d045aab\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3f57269_bb73_4cdd_a856_b36af239dfbc.slice\": RecentStats: unable to find data in memory cache]" Apr 16 18:11:59.493295 ip-10-0-134-233 kubenswrapper[2560]: E0416 18:11:59.493083 2560 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3f57269_bb73_4cdd_a856_b36af239dfbc.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16fafe48_5326_4e35_b7f6_8a744d9661dc.slice/crio-994ad7db7615b05f84f221c321365fb1b4d5dcab5955c2bd378660876d045aab\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3f57269_bb73_4cdd_a856_b36af239dfbc.slice/crio-conmon-7e47b17dcae567fb2da98e0985bf840d72c17d112d6bf9a5aaaf46e199350e53.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16fafe48_5326_4e35_b7f6_8a744d9661dc.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16fafe48_5326_4e35_b7f6_8a744d9661dc.slice/crio-conmon-3dff93dcc0a058863b0add8c2813f49390f713c0aa431a337746333b46071ca7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3f57269_bb73_4cdd_a856_b36af239dfbc.slice/crio-7e47b17dcae567fb2da98e0985bf840d72c17d112d6bf9a5aaaf46e199350e53.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16fafe48_5326_4e35_b7f6_8a744d9661dc.slice/crio-3dff93dcc0a058863b0add8c2813f49390f713c0aa431a337746333b46071ca7.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76c6d475_d05d_4cf4_a311_08c9c7e4f5dc.slice/crio-conmon-53e01574519e3b223f91b11530061f9e86f168fd6b5740d8e17cb46c454d7c6e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76c6d475_d05d_4cf4_a311_08c9c7e4f5dc.slice/crio-53e01574519e3b223f91b11530061f9e86f168fd6b5740d8e17cb46c454d7c6e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3f57269_bb73_4cdd_a856_b36af239dfbc.slice/crio-f2ac44dfe306a4f4b227a6e3d14726f4b023978b0224cc0da09402a4b1c2e27b\": RecentStats: unable to find data in memory cache]" Apr 16 18:11:59.493725 ip-10-0-134-233 kubenswrapper[2560]: E0416 18:11:59.493499 2560 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76c6d475_d05d_4cf4_a311_08c9c7e4f5dc.slice/crio-53e01574519e3b223f91b11530061f9e86f168fd6b5740d8e17cb46c454d7c6e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3f57269_bb73_4cdd_a856_b36af239dfbc.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3f57269_bb73_4cdd_a856_b36af239dfbc.slice/crio-7e47b17dcae567fb2da98e0985bf840d72c17d112d6bf9a5aaaf46e199350e53.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16fafe48_5326_4e35_b7f6_8a744d9661dc.slice/crio-conmon-3dff93dcc0a058863b0add8c2813f49390f713c0aa431a337746333b46071ca7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3f57269_bb73_4cdd_a856_b36af239dfbc.slice/crio-f2ac44dfe306a4f4b227a6e3d14726f4b023978b0224cc0da09402a4b1c2e27b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16fafe48_5326_4e35_b7f6_8a744d9661dc.slice/crio-994ad7db7615b05f84f221c321365fb1b4d5dcab5955c2bd378660876d045aab\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3f57269_bb73_4cdd_a856_b36af239dfbc.slice/crio-conmon-7e47b17dcae567fb2da98e0985bf840d72c17d112d6bf9a5aaaf46e199350e53.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76c6d475_d05d_4cf4_a311_08c9c7e4f5dc.slice/crio-conmon-53e01574519e3b223f91b11530061f9e86f168fd6b5740d8e17cb46c454d7c6e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16fafe48_5326_4e35_b7f6_8a744d9661dc.slice/crio-3dff93dcc0a058863b0add8c2813f49390f713c0aa431a337746333b46071ca7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16fafe48_5326_4e35_b7f6_8a744d9661dc.slice\": RecentStats: unable to find data in memory cache]" Apr 16 18:11:59.638893 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:59.638869 2560 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-3b160-555ff4db4b-rhktz" Apr 16 18:11:59.719391 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:59.719363 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76c6d475-d05d-4cf4-a311-08c9c7e4f5dc-openshift-service-ca-bundle\") pod \"76c6d475-d05d-4cf4-a311-08c9c7e4f5dc\" (UID: \"76c6d475-d05d-4cf4-a311-08c9c7e4f5dc\") " Apr 16 18:11:59.719586 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:59.719445 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/76c6d475-d05d-4cf4-a311-08c9c7e4f5dc-proxy-tls\") pod \"76c6d475-d05d-4cf4-a311-08c9c7e4f5dc\" (UID: \"76c6d475-d05d-4cf4-a311-08c9c7e4f5dc\") " Apr 16 18:11:59.719783 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:59.719757 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76c6d475-d05d-4cf4-a311-08c9c7e4f5dc-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "76c6d475-d05d-4cf4-a311-08c9c7e4f5dc" (UID: "76c6d475-d05d-4cf4-a311-08c9c7e4f5dc"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 18:11:59.721470 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:59.721446 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76c6d475-d05d-4cf4-a311-08c9c7e4f5dc-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "76c6d475-d05d-4cf4-a311-08c9c7e4f5dc" (UID: "76c6d475-d05d-4cf4-a311-08c9c7e4f5dc"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 18:11:59.820330 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:59.820277 2560 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76c6d475-d05d-4cf4-a311-08c9c7e4f5dc-openshift-service-ca-bundle\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 18:11:59.820330 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:11:59.820324 2560 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/76c6d475-d05d-4cf4-a311-08c9c7e4f5dc-proxy-tls\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 18:12:00.348446 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:00.348409 2560 generic.go:358] "Generic (PLEG): container finished" podID="76c6d475-d05d-4cf4-a311-08c9c7e4f5dc" containerID="53e01574519e3b223f91b11530061f9e86f168fd6b5740d8e17cb46c454d7c6e" exitCode=0 Apr 16 18:12:00.348608 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:00.348479 2560 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-3b160-555ff4db4b-rhktz" Apr 16 18:12:00.348608 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:00.348500 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-3b160-555ff4db4b-rhktz" event={"ID":"76c6d475-d05d-4cf4-a311-08c9c7e4f5dc","Type":"ContainerDied","Data":"53e01574519e3b223f91b11530061f9e86f168fd6b5740d8e17cb46c454d7c6e"} Apr 16 18:12:00.348608 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:00.348540 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-3b160-555ff4db4b-rhktz" event={"ID":"76c6d475-d05d-4cf4-a311-08c9c7e4f5dc","Type":"ContainerDied","Data":"70413126eb50e3d17eb137db01823383a8a2804d9c4d1010a037cfc65e460abc"} Apr 16 18:12:00.348608 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:00.348556 2560 scope.go:117] "RemoveContainer" containerID="53e01574519e3b223f91b11530061f9e86f168fd6b5740d8e17cb46c454d7c6e" Apr 16 18:12:00.356557 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:00.356526 2560 scope.go:117] "RemoveContainer" containerID="53e01574519e3b223f91b11530061f9e86f168fd6b5740d8e17cb46c454d7c6e" Apr 16 18:12:00.356791 ip-10-0-134-233 kubenswrapper[2560]: E0416 18:12:00.356774 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53e01574519e3b223f91b11530061f9e86f168fd6b5740d8e17cb46c454d7c6e\": container with ID starting with 53e01574519e3b223f91b11530061f9e86f168fd6b5740d8e17cb46c454d7c6e not found: ID does not exist" containerID="53e01574519e3b223f91b11530061f9e86f168fd6b5740d8e17cb46c454d7c6e" Apr 16 18:12:00.356858 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:00.356799 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53e01574519e3b223f91b11530061f9e86f168fd6b5740d8e17cb46c454d7c6e"} err="failed to get container status \"53e01574519e3b223f91b11530061f9e86f168fd6b5740d8e17cb46c454d7c6e\": rpc error: code = NotFound desc = could not find container \"53e01574519e3b223f91b11530061f9e86f168fd6b5740d8e17cb46c454d7c6e\": container with ID starting with 53e01574519e3b223f91b11530061f9e86f168fd6b5740d8e17cb46c454d7c6e not found: ID does not exist" Apr 16 18:12:00.374487 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:00.374464 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-3b160-555ff4db4b-rhktz"] Apr 16 18:12:00.375824 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:00.375799 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-3b160-555ff4db4b-rhktz"] Apr 16 18:12:01.233334 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:01.233288 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-c5b1b-predictor-5854f8b788-s9zzw" podUID="a7f6a8ed-ac6d-4bc2-9448-deec62a9beaa" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.54:8080: connect: connection refused" Apr 16 18:12:02.162061 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:02.162030 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76c6d475-d05d-4cf4-a311-08c9c7e4f5dc" path="/var/lib/kubelet/pods/76c6d475-d05d-4cf4-a311-08c9c7e4f5dc/volumes" Apr 16 18:12:03.245577 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:03.245539 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-c5b1b-predictor-78c899d8d5-r45k2" 
podUID="f82c040f-64ab-4fbf-944e-1bc56cc84fb6" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.55:8080: connect: connection refused" Apr 16 18:12:08.753256 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:08.753216 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/switch-graph-d9a0e-5c6d76d677-vlrj4"] Apr 16 18:12:08.753656 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:08.753560 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d3f57269-bb73-4cdd-a856-b36af239dfbc" containerName="kserve-container" Apr 16 18:12:08.753656 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:08.753575 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3f57269-bb73-4cdd-a856-b36af239dfbc" containerName="kserve-container" Apr 16 18:12:08.753656 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:08.753592 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="76c6d475-d05d-4cf4-a311-08c9c7e4f5dc" containerName="splitter-graph-3b160" Apr 16 18:12:08.753656 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:08.753603 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="76c6d475-d05d-4cf4-a311-08c9c7e4f5dc" containerName="splitter-graph-3b160" Apr 16 18:12:08.753656 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:08.753628 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="16fafe48-5326-4e35-b7f6-8a744d9661dc" containerName="kserve-container" Apr 16 18:12:08.753656 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:08.753637 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="16fafe48-5326-4e35-b7f6-8a744d9661dc" containerName="kserve-container" Apr 16 18:12:08.753917 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:08.753704 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="16fafe48-5326-4e35-b7f6-8a744d9661dc" containerName="kserve-container" Apr 16 18:12:08.753917 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:08.753715 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="d3f57269-bb73-4cdd-a856-b36af239dfbc" containerName="kserve-container" Apr 16 18:12:08.753917 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:08.753721 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="76c6d475-d05d-4cf4-a311-08c9c7e4f5dc" containerName="splitter-graph-3b160" Apr 16 18:12:08.756764 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:08.756748 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-d9a0e-5c6d76d677-vlrj4" Apr 16 18:12:08.758648 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:08.758614 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-d9a0e-serving-cert\"" Apr 16 18:12:08.758766 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:08.758658 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-d9a0e-kube-rbac-proxy-sar-config\"" Apr 16 18:12:08.758766 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:08.758753 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"openshift-service-ca.crt\"" Apr 16 18:12:08.765774 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:08.765754 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-d9a0e-5c6d76d677-vlrj4"] Apr 16 18:12:08.796043 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:08.796014 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb-openshift-service-ca-bundle\") pod \"switch-graph-d9a0e-5c6d76d677-vlrj4\" (UID: \"e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb\") " pod="kserve-ci-e2e-test/switch-graph-d9a0e-5c6d76d677-vlrj4" Apr 16 18:12:08.796175 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:08.796056 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb-proxy-tls\") pod \"switch-graph-d9a0e-5c6d76d677-vlrj4\" (UID: \"e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb\") " pod="kserve-ci-e2e-test/switch-graph-d9a0e-5c6d76d677-vlrj4" Apr 16 18:12:08.897125 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:08.897092 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb-openshift-service-ca-bundle\") pod \"switch-graph-d9a0e-5c6d76d677-vlrj4\" (UID: \"e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb\") " pod="kserve-ci-e2e-test/switch-graph-d9a0e-5c6d76d677-vlrj4" Apr 16 18:12:08.897284 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:08.897136 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb-proxy-tls\") pod \"switch-graph-d9a0e-5c6d76d677-vlrj4\" (UID: \"e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb\") " pod="kserve-ci-e2e-test/switch-graph-d9a0e-5c6d76d677-vlrj4" Apr 16 18:12:08.897853 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:08.897811 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb-openshift-service-ca-bundle\") pod \"switch-graph-d9a0e-5c6d76d677-vlrj4\" (UID: \"e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb\") " pod="kserve-ci-e2e-test/switch-graph-d9a0e-5c6d76d677-vlrj4" Apr 16 18:12:08.899533 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:08.899515 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb-proxy-tls\") pod \"switch-graph-d9a0e-5c6d76d677-vlrj4\" (UID: \"e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb\") " 
pod="kserve-ci-e2e-test/switch-graph-d9a0e-5c6d76d677-vlrj4" Apr 16 18:12:09.066806 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:09.066770 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-d9a0e-5c6d76d677-vlrj4" Apr 16 18:12:09.194170 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:09.194117 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-d9a0e-5c6d76d677-vlrj4"] Apr 16 18:12:09.198007 ip-10-0-134-233 kubenswrapper[2560]: W0416 18:12:09.197975 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2f219bb_bcd5_4424_b5e0_46b8ac5c4dfb.slice/crio-59ba68eaa0b0dc1e112e3e4905c0aeee208e2244209f02c1b7fca0d064bad2d1 WatchSource:0}: Error finding container 59ba68eaa0b0dc1e112e3e4905c0aeee208e2244209f02c1b7fca0d064bad2d1: Status 404 returned error can't find the container with id 59ba68eaa0b0dc1e112e3e4905c0aeee208e2244209f02c1b7fca0d064bad2d1 Apr 16 18:12:09.381241 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:09.381138 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-d9a0e-5c6d76d677-vlrj4" event={"ID":"e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb","Type":"ContainerStarted","Data":"ca1253de78d5d85a4bef38cc89557ec3e5e63d67624c67d16b17ac5405745d87"} Apr 16 18:12:09.381241 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:09.381178 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-d9a0e-5c6d76d677-vlrj4" event={"ID":"e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb","Type":"ContainerStarted","Data":"59ba68eaa0b0dc1e112e3e4905c0aeee208e2244209f02c1b7fca0d064bad2d1"} Apr 16 18:12:09.381241 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:09.381209 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/switch-graph-d9a0e-5c6d76d677-vlrj4" Apr 16 18:12:09.401382 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:09.401332 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/switch-graph-d9a0e-5c6d76d677-vlrj4" podStartSLOduration=1.401319033 podStartE2EDuration="1.401319033s" podCreationTimestamp="2026-04-16 18:12:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:12:09.398986985 +0000 UTC m=+1897.762195855" watchObservedRunningTime="2026-04-16 18:12:09.401319033 +0000 UTC m=+1897.764527902" Apr 16 18:12:11.233983 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:11.233943 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-c5b1b-predictor-5854f8b788-s9zzw" podUID="a7f6a8ed-ac6d-4bc2-9448-deec62a9beaa" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.54:8080: connect: connection refused" Apr 16 18:12:13.245693 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:13.245649 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-c5b1b-predictor-78c899d8d5-r45k2" podUID="f82c040f-64ab-4fbf-944e-1bc56cc84fb6" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.55:8080: connect: connection refused" Apr 16 18:12:15.389612 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:15.389582 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/switch-graph-d9a0e-5c6d76d677-vlrj4" Apr 16 
18:12:21.235036 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:21.235003 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-c5b1b-predictor-5854f8b788-s9zzw" Apr 16 18:12:23.247003 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:23.246967 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-c5b1b-predictor-78c899d8d5-r45k2" Apr 16 18:12:39.764629 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:39.764594 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/splitter-graph-c5b1b-59bd77c885-m9xp8"] Apr 16 18:12:39.769411 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:39.769388 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-c5b1b-59bd77c885-m9xp8" Apr 16 18:12:39.772102 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:39.772078 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"splitter-graph-c5b1b-kube-rbac-proxy-sar-config\"" Apr 16 18:12:39.772220 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:39.772110 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"splitter-graph-c5b1b-serving-cert\"" Apr 16 18:12:39.782339 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:39.782314 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-c5b1b-59bd77c885-m9xp8"] Apr 16 18:12:39.874131 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:39.874087 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/29d84e39-e788-45f0-aeca-0504297a705a-proxy-tls\") pod \"splitter-graph-c5b1b-59bd77c885-m9xp8\" (UID: \"29d84e39-e788-45f0-aeca-0504297a705a\") " pod="kserve-ci-e2e-test/splitter-graph-c5b1b-59bd77c885-m9xp8" Apr 16 18:12:39.874347 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:39.874253 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29d84e39-e788-45f0-aeca-0504297a705a-openshift-service-ca-bundle\") pod \"splitter-graph-c5b1b-59bd77c885-m9xp8\" (UID: \"29d84e39-e788-45f0-aeca-0504297a705a\") " pod="kserve-ci-e2e-test/splitter-graph-c5b1b-59bd77c885-m9xp8" Apr 16 18:12:39.975539 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:39.975500 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29d84e39-e788-45f0-aeca-0504297a705a-openshift-service-ca-bundle\") pod \"splitter-graph-c5b1b-59bd77c885-m9xp8\" (UID: \"29d84e39-e788-45f0-aeca-0504297a705a\") " pod="kserve-ci-e2e-test/splitter-graph-c5b1b-59bd77c885-m9xp8" Apr 16 18:12:39.975718 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:39.975575 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/29d84e39-e788-45f0-aeca-0504297a705a-proxy-tls\") pod \"splitter-graph-c5b1b-59bd77c885-m9xp8\" (UID: \"29d84e39-e788-45f0-aeca-0504297a705a\") " pod="kserve-ci-e2e-test/splitter-graph-c5b1b-59bd77c885-m9xp8" Apr 16 18:12:39.975718 ip-10-0-134-233 kubenswrapper[2560]: E0416 18:12:39.975677 2560 secret.go:189] Couldn't get secret kserve-ci-e2e-test/splitter-graph-c5b1b-serving-cert: secret 
"splitter-graph-c5b1b-serving-cert" not found Apr 16 18:12:39.975803 ip-10-0-134-233 kubenswrapper[2560]: E0416 18:12:39.975743 2560 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29d84e39-e788-45f0-aeca-0504297a705a-proxy-tls podName:29d84e39-e788-45f0-aeca-0504297a705a nodeName:}" failed. No retries permitted until 2026-04-16 18:12:40.475726204 +0000 UTC m=+1928.838935059 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/29d84e39-e788-45f0-aeca-0504297a705a-proxy-tls") pod "splitter-graph-c5b1b-59bd77c885-m9xp8" (UID: "29d84e39-e788-45f0-aeca-0504297a705a") : secret "splitter-graph-c5b1b-serving-cert" not found Apr 16 18:12:39.976219 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:39.976196 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29d84e39-e788-45f0-aeca-0504297a705a-openshift-service-ca-bundle\") pod \"splitter-graph-c5b1b-59bd77c885-m9xp8\" (UID: \"29d84e39-e788-45f0-aeca-0504297a705a\") " pod="kserve-ci-e2e-test/splitter-graph-c5b1b-59bd77c885-m9xp8" Apr 16 18:12:40.481076 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:40.481044 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/29d84e39-e788-45f0-aeca-0504297a705a-proxy-tls\") pod \"splitter-graph-c5b1b-59bd77c885-m9xp8\" (UID: \"29d84e39-e788-45f0-aeca-0504297a705a\") " pod="kserve-ci-e2e-test/splitter-graph-c5b1b-59bd77c885-m9xp8" Apr 16 18:12:40.483498 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:40.483468 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/29d84e39-e788-45f0-aeca-0504297a705a-proxy-tls\") pod \"splitter-graph-c5b1b-59bd77c885-m9xp8\" (UID: \"29d84e39-e788-45f0-aeca-0504297a705a\") " pod="kserve-ci-e2e-test/splitter-graph-c5b1b-59bd77c885-m9xp8" Apr 16 18:12:40.680394 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:40.680354 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-c5b1b-59bd77c885-m9xp8" Apr 16 18:12:40.804522 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:40.804491 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-c5b1b-59bd77c885-m9xp8"] Apr 16 18:12:40.807762 ip-10-0-134-233 kubenswrapper[2560]: W0416 18:12:40.807727 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29d84e39_e788_45f0_aeca_0504297a705a.slice/crio-1ab50012c04d74ae0ffe746485205acce1bb660c67b6e9e34852ca09b360d448 WatchSource:0}: Error finding container 1ab50012c04d74ae0ffe746485205acce1bb660c67b6e9e34852ca09b360d448: Status 404 returned error can't find the container with id 1ab50012c04d74ae0ffe746485205acce1bb660c67b6e9e34852ca09b360d448 Apr 16 18:12:41.483191 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:41.483148 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-c5b1b-59bd77c885-m9xp8" event={"ID":"29d84e39-e788-45f0-aeca-0504297a705a","Type":"ContainerStarted","Data":"5ce37634f397454d5bb4b26c3978c6b0cfe7e8d784dc227319b87b8484db4c15"} Apr 16 18:12:41.483191 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:41.483191 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-c5b1b-59bd77c885-m9xp8" event={"ID":"29d84e39-e788-45f0-aeca-0504297a705a","Type":"ContainerStarted","Data":"1ab50012c04d74ae0ffe746485205acce1bb660c67b6e9e34852ca09b360d448"} Apr 16 18:12:41.483442 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:41.483208 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/splitter-graph-c5b1b-59bd77c885-m9xp8" Apr 16 18:12:41.501311 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:41.501254 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/splitter-graph-c5b1b-59bd77c885-m9xp8" podStartSLOduration=2.501237454 podStartE2EDuration="2.501237454s" podCreationTimestamp="2026-04-16 18:12:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:12:41.500345321 +0000 UTC m=+1929.863554197" watchObservedRunningTime="2026-04-16 18:12:41.501237454 +0000 UTC m=+1929.864446327" Apr 16 18:12:47.492261 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:12:47.492231 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/splitter-graph-c5b1b-59bd77c885-m9xp8" Apr 16 18:20:54.470877 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:20:54.470778 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-c5b1b-59bd77c885-m9xp8"] Apr 16 18:20:54.471388 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:20:54.471032 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/splitter-graph-c5b1b-59bd77c885-m9xp8" podUID="29d84e39-e788-45f0-aeca-0504297a705a" containerName="splitter-graph-c5b1b" containerID="cri-o://5ce37634f397454d5bb4b26c3978c6b0cfe7e8d784dc227319b87b8484db4c15" gracePeriod=30 Apr 16 18:20:54.566025 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:20:54.565995 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-c5b1b-predictor-5854f8b788-s9zzw"] Apr 16 18:20:54.566237 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:20:54.566215 2560 kuberuntime_container.go:864] "Killing container 
with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-c5b1b-predictor-5854f8b788-s9zzw" podUID="a7f6a8ed-ac6d-4bc2-9448-deec62a9beaa" containerName="kserve-container" containerID="cri-o://db305068fc5fbbabae25bbcac719300b2d4ee43028f7104148d844ac8c81cb0e" gracePeriod=30 Apr 16 18:20:54.604904 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:20:54.604866 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-c5b1b-predictor-78c899d8d5-r45k2"] Apr 16 18:20:54.605141 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:20:54.605103 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-c5b1b-predictor-78c899d8d5-r45k2" podUID="f82c040f-64ab-4fbf-944e-1bc56cc84fb6" containerName="kserve-container" containerID="cri-o://6eb89158e0672f46504be3a857b9bda869771a6080b444cda674efb0cbefa1ee" gracePeriod=30 Apr 16 18:20:57.490909 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:20:57.490864 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-c5b1b-59bd77c885-m9xp8" podUID="29d84e39-e788-45f0-aeca-0504297a705a" containerName="splitter-graph-c5b1b" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:20:57.971371 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:20:57.971344 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-c5b1b-predictor-5854f8b788-s9zzw" Apr 16 18:20:57.974683 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:20:57.974659 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-c5b1b-predictor-78c899d8d5-r45k2" Apr 16 18:20:58.085336 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:20:58.085302 2560 generic.go:358] "Generic (PLEG): container finished" podID="f82c040f-64ab-4fbf-944e-1bc56cc84fb6" containerID="6eb89158e0672f46504be3a857b9bda869771a6080b444cda674efb0cbefa1ee" exitCode=0 Apr 16 18:20:58.085526 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:20:58.085362 2560 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-c5b1b-predictor-78c899d8d5-r45k2" Apr 16 18:20:58.085526 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:20:58.085384 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-c5b1b-predictor-78c899d8d5-r45k2" event={"ID":"f82c040f-64ab-4fbf-944e-1bc56cc84fb6","Type":"ContainerDied","Data":"6eb89158e0672f46504be3a857b9bda869771a6080b444cda674efb0cbefa1ee"} Apr 16 18:20:58.085526 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:20:58.085421 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-c5b1b-predictor-78c899d8d5-r45k2" event={"ID":"f82c040f-64ab-4fbf-944e-1bc56cc84fb6","Type":"ContainerDied","Data":"e89e71589952f1a3168aaef2d99fcb3d721a6f88937ed76d51832ea56b4ebaf1"} Apr 16 18:20:58.085526 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:20:58.085440 2560 scope.go:117] "RemoveContainer" containerID="6eb89158e0672f46504be3a857b9bda869771a6080b444cda674efb0cbefa1ee" Apr 16 18:20:58.086546 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:20:58.086522 2560 generic.go:358] "Generic (PLEG): container finished" podID="a7f6a8ed-ac6d-4bc2-9448-deec62a9beaa" containerID="db305068fc5fbbabae25bbcac719300b2d4ee43028f7104148d844ac8c81cb0e" exitCode=0 Apr 16 18:20:58.086652 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:20:58.086551 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-c5b1b-predictor-5854f8b788-s9zzw" event={"ID":"a7f6a8ed-ac6d-4bc2-9448-deec62a9beaa","Type":"ContainerDied","Data":"db305068fc5fbbabae25bbcac719300b2d4ee43028f7104148d844ac8c81cb0e"} Apr 16 18:20:58.086652 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:20:58.086578 2560 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-c5b1b-predictor-5854f8b788-s9zzw" Apr 16 18:20:58.086652 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:20:58.086593 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-c5b1b-predictor-5854f8b788-s9zzw" event={"ID":"a7f6a8ed-ac6d-4bc2-9448-deec62a9beaa","Type":"ContainerDied","Data":"b90960a8d2cc07c530335d37a0e95d546d02b57778c1d4c83cca2e6157670683"} Apr 16 18:20:58.093722 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:20:58.093707 2560 scope.go:117] "RemoveContainer" containerID="6eb89158e0672f46504be3a857b9bda869771a6080b444cda674efb0cbefa1ee" Apr 16 18:20:58.094096 ip-10-0-134-233 kubenswrapper[2560]: E0416 18:20:58.094068 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6eb89158e0672f46504be3a857b9bda869771a6080b444cda674efb0cbefa1ee\": container with ID starting with 6eb89158e0672f46504be3a857b9bda869771a6080b444cda674efb0cbefa1ee not found: ID does not exist" containerID="6eb89158e0672f46504be3a857b9bda869771a6080b444cda674efb0cbefa1ee" Apr 16 18:20:58.094217 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:20:58.094104 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6eb89158e0672f46504be3a857b9bda869771a6080b444cda674efb0cbefa1ee"} err="failed to get container status \"6eb89158e0672f46504be3a857b9bda869771a6080b444cda674efb0cbefa1ee\": rpc error: code = NotFound desc = could not find container \"6eb89158e0672f46504be3a857b9bda869771a6080b444cda674efb0cbefa1ee\": container with ID starting with 6eb89158e0672f46504be3a857b9bda869771a6080b444cda674efb0cbefa1ee not found: ID does not exist" Apr 16 18:20:58.094217 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:20:58.094123 2560 scope.go:117] "RemoveContainer" containerID="db305068fc5fbbabae25bbcac719300b2d4ee43028f7104148d844ac8c81cb0e" Apr 16 18:20:58.101496 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:20:58.101472 2560 scope.go:117] "RemoveContainer" containerID="db305068fc5fbbabae25bbcac719300b2d4ee43028f7104148d844ac8c81cb0e" Apr 16 18:20:58.101758 ip-10-0-134-233 kubenswrapper[2560]: E0416 18:20:58.101739 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db305068fc5fbbabae25bbcac719300b2d4ee43028f7104148d844ac8c81cb0e\": container with ID starting with db305068fc5fbbabae25bbcac719300b2d4ee43028f7104148d844ac8c81cb0e not found: ID does not exist" containerID="db305068fc5fbbabae25bbcac719300b2d4ee43028f7104148d844ac8c81cb0e" Apr 16 18:20:58.101847 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:20:58.101766 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db305068fc5fbbabae25bbcac719300b2d4ee43028f7104148d844ac8c81cb0e"} err="failed to get container status \"db305068fc5fbbabae25bbcac719300b2d4ee43028f7104148d844ac8c81cb0e\": rpc error: code = NotFound desc = could not find container \"db305068fc5fbbabae25bbcac719300b2d4ee43028f7104148d844ac8c81cb0e\": container with ID starting with db305068fc5fbbabae25bbcac719300b2d4ee43028f7104148d844ac8c81cb0e not found: ID does not exist" Apr 16 18:20:58.112959 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:20:58.112930 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-c5b1b-predictor-5854f8b788-s9zzw"] Apr 16 18:20:58.121210 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:20:58.121188 2560 
kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-c5b1b-predictor-5854f8b788-s9zzw"] Apr 16 18:20:58.132515 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:20:58.132487 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-c5b1b-predictor-78c899d8d5-r45k2"] Apr 16 18:20:58.137487 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:20:58.137462 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-c5b1b-predictor-78c899d8d5-r45k2"] Apr 16 18:20:58.162127 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:20:58.162089 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7f6a8ed-ac6d-4bc2-9448-deec62a9beaa" path="/var/lib/kubelet/pods/a7f6a8ed-ac6d-4bc2-9448-deec62a9beaa/volumes" Apr 16 18:20:58.162339 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:20:58.162326 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f82c040f-64ab-4fbf-944e-1bc56cc84fb6" path="/var/lib/kubelet/pods/f82c040f-64ab-4fbf-944e-1bc56cc84fb6/volumes" Apr 16 18:21:02.490515 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:21:02.490473 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-c5b1b-59bd77c885-m9xp8" podUID="29d84e39-e788-45f0-aeca-0504297a705a" containerName="splitter-graph-c5b1b" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:21:07.490459 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:21:07.490413 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-c5b1b-59bd77c885-m9xp8" podUID="29d84e39-e788-45f0-aeca-0504297a705a" containerName="splitter-graph-c5b1b" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:21:07.490886 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:21:07.490519 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/splitter-graph-c5b1b-59bd77c885-m9xp8" Apr 16 18:21:12.491258 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:21:12.491212 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-c5b1b-59bd77c885-m9xp8" podUID="29d84e39-e788-45f0-aeca-0504297a705a" containerName="splitter-graph-c5b1b" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:21:17.490996 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:21:17.490951 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-c5b1b-59bd77c885-m9xp8" podUID="29d84e39-e788-45f0-aeca-0504297a705a" containerName="splitter-graph-c5b1b" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:21:22.490980 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:21:22.490941 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-c5b1b-59bd77c885-m9xp8" podUID="29d84e39-e788-45f0-aeca-0504297a705a" containerName="splitter-graph-c5b1b" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:21:24.621044 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:21:24.621015 2560 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-c5b1b-59bd77c885-m9xp8" Apr 16 18:21:24.675266 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:21:24.675232 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29d84e39-e788-45f0-aeca-0504297a705a-openshift-service-ca-bundle\") pod \"29d84e39-e788-45f0-aeca-0504297a705a\" (UID: \"29d84e39-e788-45f0-aeca-0504297a705a\") " Apr 16 18:21:24.675437 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:21:24.675322 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/29d84e39-e788-45f0-aeca-0504297a705a-proxy-tls\") pod \"29d84e39-e788-45f0-aeca-0504297a705a\" (UID: \"29d84e39-e788-45f0-aeca-0504297a705a\") " Apr 16 18:21:24.675614 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:21:24.675589 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29d84e39-e788-45f0-aeca-0504297a705a-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "29d84e39-e788-45f0-aeca-0504297a705a" (UID: "29d84e39-e788-45f0-aeca-0504297a705a"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 18:21:24.677460 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:21:24.677437 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29d84e39-e788-45f0-aeca-0504297a705a-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "29d84e39-e788-45f0-aeca-0504297a705a" (UID: "29d84e39-e788-45f0-aeca-0504297a705a"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 18:21:24.776380 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:21:24.776349 2560 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/29d84e39-e788-45f0-aeca-0504297a705a-proxy-tls\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 18:21:24.776380 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:21:24.776381 2560 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29d84e39-e788-45f0-aeca-0504297a705a-openshift-service-ca-bundle\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 18:21:25.175021 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:21:25.174914 2560 generic.go:358] "Generic (PLEG): container finished" podID="29d84e39-e788-45f0-aeca-0504297a705a" containerID="5ce37634f397454d5bb4b26c3978c6b0cfe7e8d784dc227319b87b8484db4c15" exitCode=0 Apr 16 18:21:25.175021 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:21:25.174974 2560 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-c5b1b-59bd77c885-m9xp8" Apr 16 18:21:25.175021 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:21:25.175003 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-c5b1b-59bd77c885-m9xp8" event={"ID":"29d84e39-e788-45f0-aeca-0504297a705a","Type":"ContainerDied","Data":"5ce37634f397454d5bb4b26c3978c6b0cfe7e8d784dc227319b87b8484db4c15"} Apr 16 18:21:25.175310 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:21:25.175042 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-c5b1b-59bd77c885-m9xp8" event={"ID":"29d84e39-e788-45f0-aeca-0504297a705a","Type":"ContainerDied","Data":"1ab50012c04d74ae0ffe746485205acce1bb660c67b6e9e34852ca09b360d448"} Apr 16 18:21:25.175310 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:21:25.175057 2560 scope.go:117] "RemoveContainer" containerID="5ce37634f397454d5bb4b26c3978c6b0cfe7e8d784dc227319b87b8484db4c15" Apr 16 18:21:25.183805 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:21:25.183786 2560 scope.go:117] "RemoveContainer" containerID="5ce37634f397454d5bb4b26c3978c6b0cfe7e8d784dc227319b87b8484db4c15" Apr 16 18:21:25.184117 ip-10-0-134-233 kubenswrapper[2560]: E0416 18:21:25.184098 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ce37634f397454d5bb4b26c3978c6b0cfe7e8d784dc227319b87b8484db4c15\": container with ID starting with 5ce37634f397454d5bb4b26c3978c6b0cfe7e8d784dc227319b87b8484db4c15 not found: ID does not exist" containerID="5ce37634f397454d5bb4b26c3978c6b0cfe7e8d784dc227319b87b8484db4c15" Apr 16 18:21:25.184201 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:21:25.184127 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ce37634f397454d5bb4b26c3978c6b0cfe7e8d784dc227319b87b8484db4c15"} err="failed to get container status \"5ce37634f397454d5bb4b26c3978c6b0cfe7e8d784dc227319b87b8484db4c15\": rpc error: code = NotFound desc = could not find container \"5ce37634f397454d5bb4b26c3978c6b0cfe7e8d784dc227319b87b8484db4c15\": container with ID starting with 5ce37634f397454d5bb4b26c3978c6b0cfe7e8d784dc227319b87b8484db4c15 not found: ID does not exist" Apr 16 18:21:25.198634 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:21:25.198599 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-c5b1b-59bd77c885-m9xp8"] Apr 16 18:21:25.203929 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:21:25.203904 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-c5b1b-59bd77c885-m9xp8"] Apr 16 18:21:26.161958 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:21:26.161925 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29d84e39-e788-45f0-aeca-0504297a705a" path="/var/lib/kubelet/pods/29d84e39-e788-45f0-aeca-0504297a705a/volumes" Apr 16 18:28:28.170339 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:28.170258 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-d9a0e-5c6d76d677-vlrj4"] Apr 16 18:28:28.170875 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:28.170505 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/switch-graph-d9a0e-5c6d76d677-vlrj4" podUID="e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb" containerName="switch-graph-d9a0e" containerID="cri-o://ca1253de78d5d85a4bef38cc89557ec3e5e63d67624c67d16b17ac5405745d87" gracePeriod=30 Apr 
16 18:28:28.255800 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:28.255763 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-d9a0e-predictor-8b7fd8fb9-nvzd7"] Apr 16 18:28:28.256114 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:28.256081 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-d9a0e-predictor-8b7fd8fb9-nvzd7" podUID="985ae6ae-2c9a-4dfa-a0fa-691306696ef9" containerName="kserve-container" containerID="cri-o://af6b67c862b5b53ad6f04556b3e983c618de4df640e63e2164d7e66f6827f96b" gracePeriod=30 Apr 16 18:28:28.308071 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:28.308038 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-d9a0e-predictor-8587d49fdd-wpj2j"] Apr 16 18:28:28.308311 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:28.308272 2560 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-d9a0e-predictor-8587d49fdd-wpj2j" podUID="3cafcbbe-191d-4b2c-9512-ad9e7c778f76" containerName="kserve-container" containerID="cri-o://e00b9efc13114369b6883de9bffdaf8c6423b5a4997c8e6a2c97ef6f4302c98a" gracePeriod=30 Apr 16 18:28:30.121295 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:30.121253 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-d9a0e-predictor-8587d49fdd-wpj2j" podUID="3cafcbbe-191d-4b2c-9512-ad9e7c778f76" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.51:8080: connect: connection refused" Apr 16 18:28:30.387992 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:30.387905 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-d9a0e-5c6d76d677-vlrj4" podUID="e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb" containerName="switch-graph-d9a0e" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:28:31.388097 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:31.388073 2560 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-d9a0e-predictor-8587d49fdd-wpj2j" Apr 16 18:28:31.437034 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:31.436996 2560 generic.go:358] "Generic (PLEG): container finished" podID="985ae6ae-2c9a-4dfa-a0fa-691306696ef9" containerID="af6b67c862b5b53ad6f04556b3e983c618de4df640e63e2164d7e66f6827f96b" exitCode=0 Apr 16 18:28:31.437204 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:31.437077 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-d9a0e-predictor-8b7fd8fb9-nvzd7" event={"ID":"985ae6ae-2c9a-4dfa-a0fa-691306696ef9","Type":"ContainerDied","Data":"af6b67c862b5b53ad6f04556b3e983c618de4df640e63e2164d7e66f6827f96b"} Apr 16 18:28:31.438250 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:31.438196 2560 generic.go:358] "Generic (PLEG): container finished" podID="3cafcbbe-191d-4b2c-9512-ad9e7c778f76" containerID="e00b9efc13114369b6883de9bffdaf8c6423b5a4997c8e6a2c97ef6f4302c98a" exitCode=0 Apr 16 18:28:31.438377 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:31.438252 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-d9a0e-predictor-8587d49fdd-wpj2j" event={"ID":"3cafcbbe-191d-4b2c-9512-ad9e7c778f76","Type":"ContainerDied","Data":"e00b9efc13114369b6883de9bffdaf8c6423b5a4997c8e6a2c97ef6f4302c98a"} Apr 16 18:28:31.438377 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:31.438273 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-d9a0e-predictor-8587d49fdd-wpj2j" Apr 16 18:28:31.438377 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:31.438291 2560 scope.go:117] "RemoveContainer" containerID="e00b9efc13114369b6883de9bffdaf8c6423b5a4997c8e6a2c97ef6f4302c98a" Apr 16 18:28:31.438540 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:31.438276 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-d9a0e-predictor-8587d49fdd-wpj2j" event={"ID":"3cafcbbe-191d-4b2c-9512-ad9e7c778f76","Type":"ContainerDied","Data":"f172b827ed2bd958e6cbc08dfc234968a12cf18bebb41231d0609f12b5c77ef6"} Apr 16 18:28:31.448136 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:31.447999 2560 scope.go:117] "RemoveContainer" containerID="e00b9efc13114369b6883de9bffdaf8c6423b5a4997c8e6a2c97ef6f4302c98a" Apr 16 18:28:31.448473 ip-10-0-134-233 kubenswrapper[2560]: E0416 18:28:31.448336 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e00b9efc13114369b6883de9bffdaf8c6423b5a4997c8e6a2c97ef6f4302c98a\": container with ID starting with e00b9efc13114369b6883de9bffdaf8c6423b5a4997c8e6a2c97ef6f4302c98a not found: ID does not exist" containerID="e00b9efc13114369b6883de9bffdaf8c6423b5a4997c8e6a2c97ef6f4302c98a" Apr 16 18:28:31.448473 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:31.448378 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e00b9efc13114369b6883de9bffdaf8c6423b5a4997c8e6a2c97ef6f4302c98a"} err="failed to get container status \"e00b9efc13114369b6883de9bffdaf8c6423b5a4997c8e6a2c97ef6f4302c98a\": rpc error: code = NotFound desc = could not find container \"e00b9efc13114369b6883de9bffdaf8c6423b5a4997c8e6a2c97ef6f4302c98a\": container with ID starting with e00b9efc13114369b6883de9bffdaf8c6423b5a4997c8e6a2c97ef6f4302c98a not found: ID does not exist" Apr 16 18:28:31.463318 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:31.463245 2560 
kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-d9a0e-predictor-8587d49fdd-wpj2j"] Apr 16 18:28:31.467089 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:31.467063 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-d9a0e-predictor-8587d49fdd-wpj2j"] Apr 16 18:28:31.502059 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:31.502036 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-d9a0e-predictor-8b7fd8fb9-nvzd7" Apr 16 18:28:32.161568 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:32.161532 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cafcbbe-191d-4b2c-9512-ad9e7c778f76" path="/var/lib/kubelet/pods/3cafcbbe-191d-4b2c-9512-ad9e7c778f76/volumes" Apr 16 18:28:32.396183 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:32.396153 2560 scope.go:117] "RemoveContainer" containerID="af6b67c862b5b53ad6f04556b3e983c618de4df640e63e2164d7e66f6827f96b" Apr 16 18:28:32.441073 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:32.440989 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-d9a0e-predictor-8b7fd8fb9-nvzd7" event={"ID":"985ae6ae-2c9a-4dfa-a0fa-691306696ef9","Type":"ContainerDied","Data":"e26951a7ec98fabaaf7b6a30c760e8628eb0e5b73bd45bad127fe205393f59f5"} Apr 16 18:28:32.441073 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:32.441012 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-d9a0e-predictor-8b7fd8fb9-nvzd7" Apr 16 18:28:32.471580 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:32.471551 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-d9a0e-predictor-8b7fd8fb9-nvzd7"] Apr 16 18:28:32.475417 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:32.475389 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-d9a0e-predictor-8b7fd8fb9-nvzd7"] Apr 16 18:28:34.162016 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:34.161979 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="985ae6ae-2c9a-4dfa-a0fa-691306696ef9" path="/var/lib/kubelet/pods/985ae6ae-2c9a-4dfa-a0fa-691306696ef9/volumes" Apr 16 18:28:35.388281 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:35.388236 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-d9a0e-5c6d76d677-vlrj4" podUID="e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb" containerName="switch-graph-d9a0e" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:28:40.388722 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:40.388680 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-d9a0e-5c6d76d677-vlrj4" podUID="e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb" containerName="switch-graph-d9a0e" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:28:40.389144 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:40.388801 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/switch-graph-d9a0e-5c6d76d677-vlrj4" Apr 16 18:28:42.776693 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:42.776664 2560 ???:1] "http: TLS handshake error from 10.0.133.244:59074: EOF" Apr 16 18:28:42.779847 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:42.779813 2560 log.go:25] "Finished parsing log file" 
path="/var/log/pods/kserve-ci-e2e-test_switch-graph-d9a0e-5c6d76d677-vlrj4_e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb/switch-graph-d9a0e/0.log" Apr 16 18:28:43.542965 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:43.542938 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-d9a0e-5c6d76d677-vlrj4_e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb/switch-graph-d9a0e/0.log" Apr 16 18:28:44.302595 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:44.302557 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-d9a0e-5c6d76d677-vlrj4_e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb/switch-graph-d9a0e/0.log" Apr 16 18:28:45.074576 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:45.074541 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-d9a0e-5c6d76d677-vlrj4_e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb/switch-graph-d9a0e/0.log" Apr 16 18:28:45.388348 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:45.388259 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-d9a0e-5c6d76d677-vlrj4" podUID="e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb" containerName="switch-graph-d9a0e" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:28:45.814982 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:45.814946 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-d9a0e-5c6d76d677-vlrj4_e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb/switch-graph-d9a0e/0.log" Apr 16 18:28:46.565489 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:46.565460 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-d9a0e-5c6d76d677-vlrj4_e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb/switch-graph-d9a0e/0.log" Apr 16 18:28:47.350222 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:47.350186 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-d9a0e-5c6d76d677-vlrj4_e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb/switch-graph-d9a0e/0.log" Apr 16 18:28:48.082467 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:48.082434 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-d9a0e-5c6d76d677-vlrj4_e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb/switch-graph-d9a0e/0.log" Apr 16 18:28:48.805733 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:48.805693 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-d9a0e-5c6d76d677-vlrj4_e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb/switch-graph-d9a0e/0.log" Apr 16 18:28:49.533530 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:49.533499 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-d9a0e-5c6d76d677-vlrj4_e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb/switch-graph-d9a0e/0.log" Apr 16 18:28:50.269018 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:50.268987 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-d9a0e-5c6d76d677-vlrj4_e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb/switch-graph-d9a0e/0.log" Apr 16 18:28:50.387632 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:50.387597 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-d9a0e-5c6d76d677-vlrj4" podUID="e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb" containerName="switch-graph-d9a0e" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:28:51.150359 ip-10-0-134-233 kubenswrapper[2560]: I0416 
18:28:51.150331 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-d9a0e-5c6d76d677-vlrj4_e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb/switch-graph-d9a0e/0.log" Apr 16 18:28:55.388425 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:55.388385 2560 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-d9a0e-5c6d76d677-vlrj4" podUID="e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb" containerName="switch-graph-d9a0e" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:28:56.365538 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:56.365502 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_global-pull-secret-syncer-hzdjn_0fb1c6cc-5a6d-4f3f-95f2-3f46be10eda5/global-pull-secret-syncer/0.log" Apr 16 18:28:56.469032 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:56.468995 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_konnectivity-agent-nppj4_739c3ada-3675-42f5-afa7-51eba65d8c7e/konnectivity-agent/0.log" Apr 16 18:28:56.628890 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:56.628769 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_kube-apiserver-proxy-ip-10-0-134-233.ec2.internal_dbf1483115d1b5ae94f569f1ec8a827f/haproxy/0.log" Apr 16 18:28:58.306222 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:58.306195 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-d9a0e-5c6d76d677-vlrj4" Apr 16 18:28:58.464802 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:58.464689 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb-openshift-service-ca-bundle\") pod \"e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb\" (UID: \"e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb\") " Apr 16 18:28:58.464802 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:58.464742 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb-proxy-tls\") pod \"e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb\" (UID: \"e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb\") " Apr 16 18:28:58.465190 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:58.465159 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb" (UID: "e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 18:28:58.466877 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:58.466855 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb" (UID: "e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 18:28:58.518585 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:58.518555 2560 generic.go:358] "Generic (PLEG): container finished" podID="e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb" containerID="ca1253de78d5d85a4bef38cc89557ec3e5e63d67624c67d16b17ac5405745d87" exitCode=0 Apr 16 18:28:58.518794 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:58.518619 2560 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-d9a0e-5c6d76d677-vlrj4" Apr 16 18:28:58.518794 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:58.518645 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-d9a0e-5c6d76d677-vlrj4" event={"ID":"e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb","Type":"ContainerDied","Data":"ca1253de78d5d85a4bef38cc89557ec3e5e63d67624c67d16b17ac5405745d87"} Apr 16 18:28:58.518794 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:58.518690 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-d9a0e-5c6d76d677-vlrj4" event={"ID":"e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb","Type":"ContainerDied","Data":"59ba68eaa0b0dc1e112e3e4905c0aeee208e2244209f02c1b7fca0d064bad2d1"} Apr 16 18:28:58.518794 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:58.518711 2560 scope.go:117] "RemoveContainer" containerID="ca1253de78d5d85a4bef38cc89557ec3e5e63d67624c67d16b17ac5405745d87" Apr 16 18:28:58.527142 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:58.527117 2560 scope.go:117] "RemoveContainer" containerID="ca1253de78d5d85a4bef38cc89557ec3e5e63d67624c67d16b17ac5405745d87" Apr 16 18:28:58.527419 ip-10-0-134-233 kubenswrapper[2560]: E0416 18:28:58.527395 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca1253de78d5d85a4bef38cc89557ec3e5e63d67624c67d16b17ac5405745d87\": container with ID starting with ca1253de78d5d85a4bef38cc89557ec3e5e63d67624c67d16b17ac5405745d87 not found: ID does not exist" containerID="ca1253de78d5d85a4bef38cc89557ec3e5e63d67624c67d16b17ac5405745d87" Apr 16 18:28:58.527477 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:58.527429 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca1253de78d5d85a4bef38cc89557ec3e5e63d67624c67d16b17ac5405745d87"} err="failed to get container status \"ca1253de78d5d85a4bef38cc89557ec3e5e63d67624c67d16b17ac5405745d87\": rpc error: code = NotFound desc = could not find container \"ca1253de78d5d85a4bef38cc89557ec3e5e63d67624c67d16b17ac5405745d87\": container with ID starting with ca1253de78d5d85a4bef38cc89557ec3e5e63d67624c67d16b17ac5405745d87 not found: ID does not exist" Apr 16 18:28:58.541469 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:58.541438 2560 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-d9a0e-5c6d76d677-vlrj4"] Apr 16 18:28:58.546483 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:58.546457 2560 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/switch-graph-d9a0e-5c6d76d677-vlrj4"] Apr 16 18:28:58.565643 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:58.565613 2560 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb-proxy-tls\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 18:28:58.565643 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:58.565645 2560 reconciler_common.go:299] 
"Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb-openshift-service-ca-bundle\") on node \"ip-10-0-134-233.ec2.internal\" DevicePath \"\"" Apr 16 18:28:59.880967 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:59.880940 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_5c6e5713-9364-457b-a7c0-83a04ea458a8/alertmanager/0.log" Apr 16 18:28:59.905755 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:59.905721 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_5c6e5713-9364-457b-a7c0-83a04ea458a8/config-reloader/0.log" Apr 16 18:28:59.936934 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:59.936902 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_5c6e5713-9364-457b-a7c0-83a04ea458a8/kube-rbac-proxy-web/0.log" Apr 16 18:28:59.961396 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:59.961366 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_5c6e5713-9364-457b-a7c0-83a04ea458a8/kube-rbac-proxy/0.log" Apr 16 18:28:59.984664 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:28:59.984635 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_5c6e5713-9364-457b-a7c0-83a04ea458a8/kube-rbac-proxy-metric/0.log" Apr 16 18:29:00.006459 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:00.006430 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_5c6e5713-9364-457b-a7c0-83a04ea458a8/prom-label-proxy/0.log" Apr 16 18:29:00.031500 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:00.031448 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_5c6e5713-9364-457b-a7c0-83a04ea458a8/init-config-reloader/0.log" Apr 16 18:29:00.110855 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:00.110813 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7479c89684-fh86k_8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408/kube-state-metrics/0.log" Apr 16 18:29:00.135553 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:00.135465 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7479c89684-fh86k_8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408/kube-rbac-proxy-main/0.log" Apr 16 18:29:00.160984 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:00.160951 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7479c89684-fh86k_8e78c1ad-aeca-46d0-8ac7-d3fb5c6b4408/kube-rbac-proxy-self/0.log" Apr 16 18:29:00.163084 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:00.163060 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb" path="/var/lib/kubelet/pods/e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb/volumes" Apr 16 18:29:00.442568 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:00.442482 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-vglhr_c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5/node-exporter/0.log" Apr 16 18:29:00.466780 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:00.466744 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-vglhr_c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5/kube-rbac-proxy/0.log" Apr 16 18:29:00.485987 
ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:00.485962 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-vglhr_c0b5d7a1-ed7b-42bb-9eaf-53320eb9c1c5/init-textfile/0.log" Apr 16 18:29:00.514736 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:00.514702 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-5669946b84-n8w5r_93b79d64-b19d-4b29-820a-e2c33293ae57/kube-rbac-proxy-main/0.log" Apr 16 18:29:00.545290 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:00.545258 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-5669946b84-n8w5r_93b79d64-b19d-4b29-820a-e2c33293ae57/kube-rbac-proxy-self/0.log" Apr 16 18:29:00.570376 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:00.570334 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-5669946b84-n8w5r_93b79d64-b19d-4b29-820a-e2c33293ae57/openshift-state-metrics/0.log" Apr 16 18:29:00.906416 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:00.906387 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-5fcf864b6-pt2gx_218610e5-804a-40fa-8abb-8b62570db501/telemeter-client/0.log" Apr 16 18:29:00.937870 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:00.937809 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-5fcf864b6-pt2gx_218610e5-804a-40fa-8abb-8b62570db501/reload/0.log" Apr 16 18:29:00.968107 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:00.968070 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-5fcf864b6-pt2gx_218610e5-804a-40fa-8abb-8b62570db501/kube-rbac-proxy/0.log" Apr 16 18:29:03.273970 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.273944 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7d49b764cf-d8tht_9f7d644b-bdf4-431e-acdc-20b326ef09bc/console/0.log" Apr 16 18:29:03.653174 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.653140 2560 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wm28s/perf-node-gather-daemonset-zqn2s"] Apr 16 18:29:03.653558 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.653539 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="29d84e39-e788-45f0-aeca-0504297a705a" containerName="splitter-graph-c5b1b" Apr 16 18:29:03.653654 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.653561 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d84e39-e788-45f0-aeca-0504297a705a" containerName="splitter-graph-c5b1b" Apr 16 18:29:03.653654 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.653588 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a7f6a8ed-ac6d-4bc2-9448-deec62a9beaa" containerName="kserve-container" Apr 16 18:29:03.653654 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.653596 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7f6a8ed-ac6d-4bc2-9448-deec62a9beaa" containerName="kserve-container" Apr 16 18:29:03.653654 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.653610 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3cafcbbe-191d-4b2c-9512-ad9e7c778f76" containerName="kserve-container" Apr 16 18:29:03.653654 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.653619 2560 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3cafcbbe-191d-4b2c-9512-ad9e7c778f76" containerName="kserve-container" Apr 16 18:29:03.653654 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.653639 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="985ae6ae-2c9a-4dfa-a0fa-691306696ef9" containerName="kserve-container" Apr 16 18:29:03.653654 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.653647 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="985ae6ae-2c9a-4dfa-a0fa-691306696ef9" containerName="kserve-container" Apr 16 18:29:03.654101 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.653667 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f82c040f-64ab-4fbf-944e-1bc56cc84fb6" containerName="kserve-container" Apr 16 18:29:03.654101 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.653676 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="f82c040f-64ab-4fbf-944e-1bc56cc84fb6" containerName="kserve-container" Apr 16 18:29:03.654101 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.653688 2560 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb" containerName="switch-graph-d9a0e" Apr 16 18:29:03.654101 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.653696 2560 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb" containerName="switch-graph-d9a0e" Apr 16 18:29:03.654101 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.653816 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="e2f219bb-bcd5-4424-b5e0-46b8ac5c4dfb" containerName="switch-graph-d9a0e" Apr 16 18:29:03.654101 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.653846 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="29d84e39-e788-45f0-aeca-0504297a705a" containerName="splitter-graph-c5b1b" Apr 16 18:29:03.654101 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.653858 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="985ae6ae-2c9a-4dfa-a0fa-691306696ef9" containerName="kserve-container" Apr 16 18:29:03.654101 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.653872 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="3cafcbbe-191d-4b2c-9512-ad9e7c778f76" containerName="kserve-container" Apr 16 18:29:03.654101 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.653885 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="a7f6a8ed-ac6d-4bc2-9448-deec62a9beaa" containerName="kserve-container" Apr 16 18:29:03.654101 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.653895 2560 memory_manager.go:356] "RemoveStaleState removing state" podUID="f82c040f-64ab-4fbf-944e-1bc56cc84fb6" containerName="kserve-container" Apr 16 18:29:03.658627 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.658602 2560 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wm28s/perf-node-gather-daemonset-zqn2s" Apr 16 18:29:03.660775 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.660753 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-wm28s\"/\"kube-root-ca.crt\"" Apr 16 18:29:03.660911 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.660790 2560 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-wm28s\"/\"default-dockercfg-ws4vp\"" Apr 16 18:29:03.660911 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.660790 2560 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-wm28s\"/\"openshift-service-ca.crt\"" Apr 16 18:29:03.663919 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.663895 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wm28s/perf-node-gather-daemonset-zqn2s"] Apr 16 18:29:03.809865 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.809794 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25b4c4db-5190-4a2a-8b25-4d3e32bac7c7-lib-modules\") pod \"perf-node-gather-daemonset-zqn2s\" (UID: \"25b4c4db-5190-4a2a-8b25-4d3e32bac7c7\") " pod="openshift-must-gather-wm28s/perf-node-gather-daemonset-zqn2s" Apr 16 18:29:03.809865 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.809862 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/25b4c4db-5190-4a2a-8b25-4d3e32bac7c7-podres\") pod \"perf-node-gather-daemonset-zqn2s\" (UID: \"25b4c4db-5190-4a2a-8b25-4d3e32bac7c7\") " pod="openshift-must-gather-wm28s/perf-node-gather-daemonset-zqn2s" Apr 16 18:29:03.810104 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.809946 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/25b4c4db-5190-4a2a-8b25-4d3e32bac7c7-sys\") pod \"perf-node-gather-daemonset-zqn2s\" (UID: \"25b4c4db-5190-4a2a-8b25-4d3e32bac7c7\") " pod="openshift-must-gather-wm28s/perf-node-gather-daemonset-zqn2s" Apr 16 18:29:03.810104 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.809988 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/25b4c4db-5190-4a2a-8b25-4d3e32bac7c7-proc\") pod \"perf-node-gather-daemonset-zqn2s\" (UID: \"25b4c4db-5190-4a2a-8b25-4d3e32bac7c7\") " pod="openshift-must-gather-wm28s/perf-node-gather-daemonset-zqn2s" Apr 16 18:29:03.810104 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.810043 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ztvw\" (UniqueName: \"kubernetes.io/projected/25b4c4db-5190-4a2a-8b25-4d3e32bac7c7-kube-api-access-6ztvw\") pod \"perf-node-gather-daemonset-zqn2s\" (UID: \"25b4c4db-5190-4a2a-8b25-4d3e32bac7c7\") " pod="openshift-must-gather-wm28s/perf-node-gather-daemonset-zqn2s" Apr 16 18:29:03.911384 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.911305 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25b4c4db-5190-4a2a-8b25-4d3e32bac7c7-lib-modules\") pod \"perf-node-gather-daemonset-zqn2s\" (UID: \"25b4c4db-5190-4a2a-8b25-4d3e32bac7c7\") " 
pod="openshift-must-gather-wm28s/perf-node-gather-daemonset-zqn2s" Apr 16 18:29:03.911384 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.911344 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/25b4c4db-5190-4a2a-8b25-4d3e32bac7c7-podres\") pod \"perf-node-gather-daemonset-zqn2s\" (UID: \"25b4c4db-5190-4a2a-8b25-4d3e32bac7c7\") " pod="openshift-must-gather-wm28s/perf-node-gather-daemonset-zqn2s" Apr 16 18:29:03.911574 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.911385 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/25b4c4db-5190-4a2a-8b25-4d3e32bac7c7-sys\") pod \"perf-node-gather-daemonset-zqn2s\" (UID: \"25b4c4db-5190-4a2a-8b25-4d3e32bac7c7\") " pod="openshift-must-gather-wm28s/perf-node-gather-daemonset-zqn2s" Apr 16 18:29:03.911574 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.911418 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/25b4c4db-5190-4a2a-8b25-4d3e32bac7c7-proc\") pod \"perf-node-gather-daemonset-zqn2s\" (UID: \"25b4c4db-5190-4a2a-8b25-4d3e32bac7c7\") " pod="openshift-must-gather-wm28s/perf-node-gather-daemonset-zqn2s" Apr 16 18:29:03.911574 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.911461 2560 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6ztvw\" (UniqueName: \"kubernetes.io/projected/25b4c4db-5190-4a2a-8b25-4d3e32bac7c7-kube-api-access-6ztvw\") pod \"perf-node-gather-daemonset-zqn2s\" (UID: \"25b4c4db-5190-4a2a-8b25-4d3e32bac7c7\") " pod="openshift-must-gather-wm28s/perf-node-gather-daemonset-zqn2s" Apr 16 18:29:03.911574 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.911501 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25b4c4db-5190-4a2a-8b25-4d3e32bac7c7-lib-modules\") pod \"perf-node-gather-daemonset-zqn2s\" (UID: \"25b4c4db-5190-4a2a-8b25-4d3e32bac7c7\") " pod="openshift-must-gather-wm28s/perf-node-gather-daemonset-zqn2s" Apr 16 18:29:03.911574 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.911513 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/25b4c4db-5190-4a2a-8b25-4d3e32bac7c7-sys\") pod \"perf-node-gather-daemonset-zqn2s\" (UID: \"25b4c4db-5190-4a2a-8b25-4d3e32bac7c7\") " pod="openshift-must-gather-wm28s/perf-node-gather-daemonset-zqn2s" Apr 16 18:29:03.911574 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.911536 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/25b4c4db-5190-4a2a-8b25-4d3e32bac7c7-podres\") pod \"perf-node-gather-daemonset-zqn2s\" (UID: \"25b4c4db-5190-4a2a-8b25-4d3e32bac7c7\") " pod="openshift-must-gather-wm28s/perf-node-gather-daemonset-zqn2s" Apr 16 18:29:03.911574 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.911533 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/25b4c4db-5190-4a2a-8b25-4d3e32bac7c7-proc\") pod \"perf-node-gather-daemonset-zqn2s\" (UID: \"25b4c4db-5190-4a2a-8b25-4d3e32bac7c7\") " pod="openshift-must-gather-wm28s/perf-node-gather-daemonset-zqn2s" Apr 16 18:29:03.920236 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.920210 2560 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6ztvw\" (UniqueName: \"kubernetes.io/projected/25b4c4db-5190-4a2a-8b25-4d3e32bac7c7-kube-api-access-6ztvw\") pod \"perf-node-gather-daemonset-zqn2s\" (UID: \"25b4c4db-5190-4a2a-8b25-4d3e32bac7c7\") " pod="openshift-must-gather-wm28s/perf-node-gather-daemonset-zqn2s" Apr 16 18:29:03.969169 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:03.969121 2560 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wm28s/perf-node-gather-daemonset-zqn2s" Apr 16 18:29:04.094227 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:04.094192 2560 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wm28s/perf-node-gather-daemonset-zqn2s"] Apr 16 18:29:04.097673 ip-10-0-134-233 kubenswrapper[2560]: W0416 18:29:04.097634 2560 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod25b4c4db_5190_4a2a_8b25_4d3e32bac7c7.slice/crio-6c55de9e4abb8be76f3abe13172af856cc9822cf591a82bd4fd50cc11c37ef85 WatchSource:0}: Error finding container 6c55de9e4abb8be76f3abe13172af856cc9822cf591a82bd4fd50cc11c37ef85: Status 404 returned error can't find the container with id 6c55de9e4abb8be76f3abe13172af856cc9822cf591a82bd4fd50cc11c37ef85 Apr 16 18:29:04.099654 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:04.099633 2560 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 18:29:04.536818 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:04.536778 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wm28s/perf-node-gather-daemonset-zqn2s" event={"ID":"25b4c4db-5190-4a2a-8b25-4d3e32bac7c7","Type":"ContainerStarted","Data":"f51c4b4e58d626ac9d2f0d270cb8a492d6f1b77a06c772f8aab5c7b38b8674fd"} Apr 16 18:29:04.536818 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:04.536820 2560 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wm28s/perf-node-gather-daemonset-zqn2s" event={"ID":"25b4c4db-5190-4a2a-8b25-4d3e32bac7c7","Type":"ContainerStarted","Data":"6c55de9e4abb8be76f3abe13172af856cc9822cf591a82bd4fd50cc11c37ef85"} Apr 16 18:29:04.537271 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:04.536940 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-must-gather-wm28s/perf-node-gather-daemonset-zqn2s" Apr 16 18:29:04.543542 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:04.543513 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-sfb6x_22e883fa-c3e0-4a77-ab30-0e3840eab93d/dns/0.log" Apr 16 18:29:04.555084 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:04.555042 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wm28s/perf-node-gather-daemonset-zqn2s" podStartSLOduration=1.555019616 podStartE2EDuration="1.555019616s" podCreationTimestamp="2026-04-16 18:29:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:29:04.553317638 +0000 UTC m=+2912.916526511" watchObservedRunningTime="2026-04-16 18:29:04.555019616 +0000 UTC m=+2912.918228529" Apr 16 18:29:04.564224 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:04.564192 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-sfb6x_22e883fa-c3e0-4a77-ab30-0e3840eab93d/kube-rbac-proxy/0.log" Apr 16 18:29:04.645362 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:04.645328 2560 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-dns_node-resolver-lh7ql_d991b7f2-6a61-4e7d-aa21-5a9bfbd3542e/dns-node-resolver/0.log" Apr 16 18:29:05.089816 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:05.089781 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_image-registry-c8cf4fc8d-nd8l8_50e70fc1-7fd4-4977-949f-deb61937aea3/registry/0.log" Apr 16 18:29:05.138057 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:05.138014 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-m85xh_455194d4-fde7-420d-8c1d-1e43000eb0a3/node-ca/0.log" Apr 16 18:29:06.403576 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:06.403547 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-x6fv7_26138095-7f4c-401b-ac1c-5d1d5e047af0/serve-healthcheck-canary/0.log" Apr 16 18:29:06.968886 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:06.968856 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-v5h9l_0398aee0-b61d-4565-b06a-f1ee93b347aa/kube-rbac-proxy/0.log" Apr 16 18:29:06.988665 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:06.988634 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-v5h9l_0398aee0-b61d-4565-b06a-f1ee93b347aa/exporter/0.log" Apr 16 18:29:07.009241 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:07.009215 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-v5h9l_0398aee0-b61d-4565-b06a-f1ee93b347aa/extractor/0.log" Apr 16 18:29:09.007963 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:09.007926 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_kserve-controller-manager-7f8f4564d-mf4p9_065e4f24-ec52-416c-b2bc-0b390a2cca88/manager/0.log" Apr 16 18:29:09.057636 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:09.057608 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_model-serving-api-86f7b4b499-xh5jq_cf9e0a47-57c9-43b5-9cb9-26f3f33bc4a7/server/0.log" Apr 16 18:29:10.549654 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:10.549627 2560 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-wm28s/perf-node-gather-daemonset-zqn2s" Apr 16 18:29:15.168972 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:15.168940 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-6fzxn_6b045243-0c99-4991-8719-5efd0f27a340/kube-multus-additional-cni-plugins/0.log" Apr 16 18:29:15.195539 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:15.195505 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-6fzxn_6b045243-0c99-4991-8719-5efd0f27a340/egress-router-binary-copy/0.log" Apr 16 18:29:15.219562 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:15.219532 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-6fzxn_6b045243-0c99-4991-8719-5efd0f27a340/cni-plugins/0.log" Apr 16 18:29:15.243471 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:15.243437 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-6fzxn_6b045243-0c99-4991-8719-5efd0f27a340/bond-cni-plugin/0.log" Apr 16 18:29:15.266644 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:15.266604 2560 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-6fzxn_6b045243-0c99-4991-8719-5efd0f27a340/routeoverride-cni/0.log" Apr 16 18:29:15.288601 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:15.288561 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-6fzxn_6b045243-0c99-4991-8719-5efd0f27a340/whereabouts-cni-bincopy/0.log" Apr 16 18:29:15.309060 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:15.309025 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-6fzxn_6b045243-0c99-4991-8719-5efd0f27a340/whereabouts-cni/0.log" Apr 16 18:29:15.527958 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:15.527926 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-trqz7_f4e77261-e614-4a80-bbc5-28200547728b/kube-multus/0.log" Apr 16 18:29:15.633680 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:15.633639 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-lx5nt_47012ffa-3deb-41b8-b770-fc4db562d87e/network-metrics-daemon/0.log" Apr 16 18:29:15.650961 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:15.650932 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-lx5nt_47012ffa-3deb-41b8-b770-fc4db562d87e/kube-rbac-proxy/0.log" Apr 16 18:29:16.490130 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:16.490100 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-94t5h_5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638/ovn-controller/0.log" Apr 16 18:29:16.551768 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:16.551736 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-94t5h_5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638/ovn-acl-logging/0.log" Apr 16 18:29:16.578962 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:16.578934 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-94t5h_5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638/kube-rbac-proxy-node/0.log" Apr 16 18:29:16.602139 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:16.602102 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-94t5h_5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638/kube-rbac-proxy-ovn-metrics/0.log" Apr 16 18:29:16.626656 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:16.626619 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-94t5h_5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638/northd/0.log" Apr 16 18:29:16.652222 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:16.652192 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-94t5h_5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638/nbdb/0.log" Apr 16 18:29:16.674615 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:16.674582 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-94t5h_5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638/sbdb/0.log" Apr 16 18:29:16.868777 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:16.868742 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-94t5h_5a9c6eb5-bf25-4c2f-aa81-042b6e2bd638/ovnkube-controller/0.log" Apr 16 18:29:18.916897 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:18.916874 2560 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-network-diagnostics_network-check-target-cszgd_29dc29ef-4848-44b6-bfa3-4a7545e874ce/network-check-target-container/0.log" Apr 16 18:29:19.977579 ip-10-0-134-233 kubenswrapper[2560]: I0416 18:29:19.977543 2560 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_iptables-alerter-zmp7s_5d3784fe-3481-43df-9a89-dda624c566b8/iptables-alerter/0.log"