Apr 23 17:49:39.418438 ip-10-0-136-172 systemd[1]: kubelet.service: Failed to load environment files: No such file or directory
Apr 23 17:49:39.418454 ip-10-0-136-172 systemd[1]: kubelet.service: Failed to run 'start-pre' task: No such file or directory
Apr 23 17:49:39.418464 ip-10-0-136-172 systemd[1]: kubelet.service: Failed with result 'resources'.
Apr 23 17:49:39.418799 ip-10-0-136-172 systemd[1]: Failed to start Kubernetes Kubelet.
Apr 23 17:49:49.462328 ip-10-0-136-172 systemd[1]: kubelet.service: Failed to schedule restart job: Unit crio.service not found.
Apr 23 17:49:49.462351 ip-10-0-136-172 systemd[1]: kubelet.service: Failed with result 'resources'.
-- Boot d4eb942ac58f4032afed32d7ee6011a4 --
Apr 23 17:52:20.592623 ip-10-0-136-172 systemd[1]: Starting Kubernetes Kubelet...
Apr 23 17:52:20.998913 ip-10-0-136-172 kubenswrapper[2566]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 23 17:52:20.998913 ip-10-0-136-172 kubenswrapper[2566]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Apr 23 17:52:20.998913 ip-10-0-136-172 kubenswrapper[2566]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 23 17:52:20.998913 ip-10-0-136-172 kubenswrapper[2566]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 23 17:52:20.998913 ip-10-0-136-172 kubenswrapper[2566]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
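The deprecation warnings above all point at the same remediation: move these options into the KubeletConfiguration file named by --config (here /etc/kubernetes/kubelet.conf, per the flag dump further down). A minimal sketch of the equivalent config-file fields, assuming the upstream kubelet.config.k8s.io/v1beta1 schema; the evictionHard threshold is an illustrative placeholder, not a value taken from this log:

    # Hypothetical excerpt of /etc/kubernetes/kubelet.conf (KubeletConfiguration)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # replaces --container-runtime-endpoint (unix:// scheme assumed)
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
    # replaces --volume-plugin-dir
    volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
    # replaces --system-reserved
    systemReserved:
      cpu: 500m
      memory: 1Gi
      ephemeral-storage: 1Gi
    # --minimum-container-ttl-duration is superseded by eviction settings per
    # the warning; this threshold is an example only, tune it per node
    evictionHard:
      memory.available: "100Mi"

The --pod-infra-container-image warning is different: per the messages, the sandbox (pause) image is expected to be configured in the CRI runtime (CRI-O here) rather than in the kubelet.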
Apr 23 17:52:21.000279 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.000191 2566 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 23 17:52:21.004279 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004263 2566 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 23 17:52:21.004279 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004279 2566 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 23 17:52:21.004445 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004284 2566 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 23 17:52:21.004445 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004287 2566 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 23 17:52:21.004445 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004290 2566 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 23 17:52:21.004445 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004293 2566 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 23 17:52:21.004445 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004295 2566 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 23 17:52:21.004445 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004312 2566 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 23 17:52:21.004445 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004315 2566 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 23 17:52:21.004445 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004318 2566 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 23 17:52:21.004445 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004321 2566 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 23 17:52:21.004445 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004324 2566 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 23 17:52:21.004445 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004327 2566 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 23 17:52:21.004445 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004329 2566 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 23 17:52:21.004445 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004332 2566 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 23 17:52:21.004445 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004334 2566 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 23 17:52:21.004445 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004337 2566 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 23 17:52:21.004445 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004340 2566 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 23 17:52:21.004445 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004343 2566 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 23 17:52:21.004445 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004346 2566 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 23 17:52:21.004445 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004349 2566 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 23 17:52:21.004943 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004351 2566 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 23 17:52:21.004943 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004354 2566 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 23 17:52:21.004943 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004357 2566 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 23 17:52:21.004943 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004359 2566 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 23 17:52:21.004943 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004362 2566 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 23 17:52:21.004943 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004365 2566 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 23 17:52:21.004943 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004368 2566 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 23 17:52:21.004943 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004370 2566 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 23 17:52:21.004943 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004373 2566 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 23 17:52:21.004943 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004375 2566 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 23 17:52:21.004943 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004378 2566 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 23 17:52:21.004943 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004380 2566 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 23 17:52:21.004943 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004383 2566 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 23 17:52:21.004943 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004385 2566 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 23 17:52:21.004943 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004390 2566 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 23 17:52:21.004943 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004393 2566 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 23 17:52:21.004943 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004396 2566 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 23 17:52:21.004943 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004398 2566 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 23 17:52:21.004943 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004401 2566 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 23 17:52:21.005434 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004403 2566 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 23 17:52:21.005434 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004406 2566 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 23 17:52:21.005434 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004408 2566 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 23 17:52:21.005434 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004411 2566 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 23 17:52:21.005434 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004413 2566 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 23 17:52:21.005434 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004416 2566 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 23 17:52:21.005434 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004418 2566 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 23 17:52:21.005434 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004420 2566 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 23 17:52:21.005434 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004423 2566 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 23 17:52:21.005434 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004425 2566 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 23 17:52:21.005434 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004428 2566 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 23 17:52:21.005434 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004431 2566 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 23 17:52:21.005434 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004433 2566 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 23 17:52:21.005434 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004435 2566 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 23 17:52:21.005434 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004438 2566 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 23 17:52:21.005434 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004440 2566 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 23 17:52:21.005434 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004442 2566 feature_gate.go:328] unrecognized feature gate: Example2
Apr 23 17:52:21.005434 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004445 2566 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 23 17:52:21.005434 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004447 2566 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 23 17:52:21.005434 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004451 2566 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 23 17:52:21.005913 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004453 2566 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 23 17:52:21.005913 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004456 2566 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 23 17:52:21.005913 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004458 2566 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 23 17:52:21.005913 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004461 2566 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 23 17:52:21.005913 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004463 2566 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 23 17:52:21.005913 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004466 2566 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 23 17:52:21.005913 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004469 2566 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 23 17:52:21.005913 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004472 2566 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 23 17:52:21.005913 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004475 2566 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 23 17:52:21.005913 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004477 2566 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 23 17:52:21.005913 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004480 2566 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 23 17:52:21.005913 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004482 2566 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 23 17:52:21.005913 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004485 2566 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 23 17:52:21.005913 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004487 2566 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 23 17:52:21.005913 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004489 2566 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 23 17:52:21.005913 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004492 2566 feature_gate.go:328] unrecognized feature gate: Example
Apr 23 17:52:21.005913 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004496 2566 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 23 17:52:21.005913 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004498 2566 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 23 17:52:21.005913 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004500 2566 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 23 17:52:21.005913 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004505 2566 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 23 17:52:21.006406 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004509 2566 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
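Each "unrecognized feature gate" warning above (the block continues and repeats below) is a gate name the kubelet does not know. The flag dump further down shows --feature-gates="", so the gates evidently arrive through the --config file; on an OpenShift node the cluster-level gate set includes many names the upstream kubelet cannot recognize, and it warns once per unknown name and carries on. Only two names draw different notices, ServiceAccountTokenNodeBinding (GA upstream) and KMSv1 (deprecated upstream), both set to true. A hypothetical featureGates stanza of the kind that would produce this pattern; the boolean values on the unrecognized names are illustrative, not taken from this log:

    # Hypothetical featureGates stanza in /etc/kubernetes/kubelet.conf
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    featureGates:
      ServiceAccountTokenNodeBinding: true  # upstream GA gate -> "Setting GA feature gate" notice
      KMSv1: true                           # upstream deprecated gate -> "Setting deprecated feature gate" notice
      OVNObservability: false               # OpenShift-only name -> "unrecognized feature gate" warning
      GatewayAPI: false                     # OpenShift-only name -> "unrecognized feature gate" warning

These are warnings, not errors: the kubelet keeps starting, and the same block repeats below because the gate set is evidently parsed several times during startup.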
Apr 23 17:52:21.006406 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004513 2566 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 23 17:52:21.006406 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004516 2566 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 23 17:52:21.006406 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004518 2566 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 23 17:52:21.006406 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004521 2566 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 23 17:52:21.006406 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.004523 2566 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 23 17:52:21.006406 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005523 2566 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 23 17:52:21.006406 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005530 2566 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 23 17:52:21.006406 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005533 2566 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 23 17:52:21.006406 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005536 2566 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 23 17:52:21.006406 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005538 2566 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 23 17:52:21.006406 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005542 2566 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 23 17:52:21.006406 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005545 2566 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 23 17:52:21.006406 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005548 2566 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 23 17:52:21.006406 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005550 2566 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 23 17:52:21.006406 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005553 2566 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 23 17:52:21.006406 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005556 2566 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 23 17:52:21.006406 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005558 2566 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 23 17:52:21.006406 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005561 2566 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 23 17:52:21.006406 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005564 2566 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 23 17:52:21.006882 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005566 2566 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 23 17:52:21.006882 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005569 2566 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 23 17:52:21.006882 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005571 2566 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 23 17:52:21.006882 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005574 2566 feature_gate.go:328] 
unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 23 17:52:21.006882 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005576 2566 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 23 17:52:21.006882 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005579 2566 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 23 17:52:21.006882 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005581 2566 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 23 17:52:21.006882 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005584 2566 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 23 17:52:21.006882 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005587 2566 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 23 17:52:21.006882 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005589 2566 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 23 17:52:21.006882 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005592 2566 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 23 17:52:21.006882 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005594 2566 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 23 17:52:21.006882 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005597 2566 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 23 17:52:21.006882 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005599 2566 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 23 17:52:21.006882 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005602 2566 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 23 17:52:21.006882 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005604 2566 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 23 17:52:21.006882 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005607 2566 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 23 17:52:21.006882 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005609 2566 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 23 17:52:21.006882 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005612 2566 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 23 17:52:21.006882 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005615 2566 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 23 17:52:21.007396 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005619 2566 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Apr 23 17:52:21.007396 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005622 2566 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 23 17:52:21.007396 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005625 2566 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 23 17:52:21.007396 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005629 2566 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 23 17:52:21.007396 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005632 2566 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 23 17:52:21.007396 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005635 2566 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 23 17:52:21.007396 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005637 2566 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 23 17:52:21.007396 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005640 2566 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 23 17:52:21.007396 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005642 2566 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 23 17:52:21.007396 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005645 2566 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 23 17:52:21.007396 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005647 2566 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 23 17:52:21.007396 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005650 2566 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 23 17:52:21.007396 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005653 2566 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 23 17:52:21.007396 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005655 2566 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 23 17:52:21.007396 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005658 2566 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 23 17:52:21.007396 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005660 2566 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 23 17:52:21.007396 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005663 2566 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 23 17:52:21.007396 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005665 2566 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 23 17:52:21.007396 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005667 2566 feature_gate.go:328] unrecognized feature gate: Example2 Apr 23 17:52:21.007396 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005670 2566 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 23 17:52:21.007889 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005672 2566 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 23 17:52:21.007889 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005675 2566 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 23 17:52:21.007889 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005677 2566 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 23 17:52:21.007889 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005679 2566 
feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 23 17:52:21.007889 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005682 2566 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 23 17:52:21.007889 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005684 2566 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 23 17:52:21.007889 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005687 2566 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 23 17:52:21.007889 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005689 2566 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 23 17:52:21.007889 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005691 2566 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 23 17:52:21.007889 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005694 2566 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 23 17:52:21.007889 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005696 2566 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 23 17:52:21.007889 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005698 2566 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 23 17:52:21.007889 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005701 2566 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 23 17:52:21.007889 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005704 2566 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 23 17:52:21.007889 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005706 2566 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 23 17:52:21.007889 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005709 2566 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 23 17:52:21.007889 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005711 2566 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 23 17:52:21.007889 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005714 2566 feature_gate.go:328] unrecognized feature gate: Example Apr 23 17:52:21.007889 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005717 2566 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 23 17:52:21.007889 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005720 2566 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 23 17:52:21.008418 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005722 2566 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 23 17:52:21.008418 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005724 2566 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 23 17:52:21.008418 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005727 2566 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 23 17:52:21.008418 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005730 2566 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 23 17:52:21.008418 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005733 2566 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 23 17:52:21.008418 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005735 2566 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 23 17:52:21.008418 ip-10-0-136-172 
kubenswrapper[2566]: W0423 17:52:21.005738 2566 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 23 17:52:21.008418 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005741 2566 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 23 17:52:21.008418 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005743 2566 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 23 17:52:21.008418 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005746 2566 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 23 17:52:21.008418 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005748 2566 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 23 17:52:21.008418 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.005752 2566 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Apr 23 17:52:21.008418 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005828 2566 flags.go:64] FLAG: --address="0.0.0.0" Apr 23 17:52:21.008418 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005836 2566 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Apr 23 17:52:21.008418 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005843 2566 flags.go:64] FLAG: --anonymous-auth="true" Apr 23 17:52:21.008418 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005850 2566 flags.go:64] FLAG: --application-metrics-count-limit="100" Apr 23 17:52:21.008418 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005856 2566 flags.go:64] FLAG: --authentication-token-webhook="false" Apr 23 17:52:21.008418 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005861 2566 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Apr 23 17:52:21.008418 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005868 2566 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Apr 23 17:52:21.008418 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005873 2566 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Apr 23 17:52:21.008418 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005876 2566 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Apr 23 17:52:21.008947 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005879 2566 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Apr 23 17:52:21.008947 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005883 2566 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Apr 23 17:52:21.008947 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005886 2566 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Apr 23 17:52:21.008947 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005889 2566 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Apr 23 17:52:21.008947 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005892 2566 flags.go:64] FLAG: --cgroup-root="" Apr 23 17:52:21.008947 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005895 2566 flags.go:64] FLAG: --cgroups-per-qos="true" Apr 23 17:52:21.008947 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005898 2566 flags.go:64] FLAG: --client-ca-file="" Apr 23 17:52:21.008947 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005900 2566 flags.go:64] FLAG: --cloud-config="" Apr 23 17:52:21.008947 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005903 2566 flags.go:64] FLAG: --cloud-provider="external" Apr 23 17:52:21.008947 ip-10-0-136-172 
kubenswrapper[2566]: I0423 17:52:21.005907 2566 flags.go:64] FLAG: --cluster-dns="[]" Apr 23 17:52:21.008947 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005912 2566 flags.go:64] FLAG: --cluster-domain="" Apr 23 17:52:21.008947 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005916 2566 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Apr 23 17:52:21.008947 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005919 2566 flags.go:64] FLAG: --config-dir="" Apr 23 17:52:21.008947 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005921 2566 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Apr 23 17:52:21.008947 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005925 2566 flags.go:64] FLAG: --container-log-max-files="5" Apr 23 17:52:21.008947 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005930 2566 flags.go:64] FLAG: --container-log-max-size="10Mi" Apr 23 17:52:21.008947 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005933 2566 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Apr 23 17:52:21.008947 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005936 2566 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Apr 23 17:52:21.008947 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005939 2566 flags.go:64] FLAG: --containerd-namespace="k8s.io" Apr 23 17:52:21.008947 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005942 2566 flags.go:64] FLAG: --contention-profiling="false" Apr 23 17:52:21.008947 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005945 2566 flags.go:64] FLAG: --cpu-cfs-quota="true" Apr 23 17:52:21.008947 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005948 2566 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Apr 23 17:52:21.008947 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005951 2566 flags.go:64] FLAG: --cpu-manager-policy="none" Apr 23 17:52:21.008947 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005955 2566 flags.go:64] FLAG: --cpu-manager-policy-options="" Apr 23 17:52:21.008947 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005959 2566 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Apr 23 17:52:21.009575 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005962 2566 flags.go:64] FLAG: --enable-controller-attach-detach="true" Apr 23 17:52:21.009575 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005965 2566 flags.go:64] FLAG: --enable-debugging-handlers="true" Apr 23 17:52:21.009575 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005967 2566 flags.go:64] FLAG: --enable-load-reader="false" Apr 23 17:52:21.009575 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005970 2566 flags.go:64] FLAG: --enable-server="true" Apr 23 17:52:21.009575 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005973 2566 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Apr 23 17:52:21.009575 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005977 2566 flags.go:64] FLAG: --event-burst="100" Apr 23 17:52:21.009575 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005980 2566 flags.go:64] FLAG: --event-qps="50" Apr 23 17:52:21.009575 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005983 2566 flags.go:64] FLAG: --event-storage-age-limit="default=0" Apr 23 17:52:21.009575 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005986 2566 flags.go:64] FLAG: --event-storage-event-limit="default=0" Apr 23 17:52:21.009575 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005989 2566 flags.go:64] FLAG: --eviction-hard="" Apr 23 
17:52:21.009575 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005993 2566 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Apr 23 17:52:21.009575 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005995 2566 flags.go:64] FLAG: --eviction-minimum-reclaim="" Apr 23 17:52:21.009575 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.005998 2566 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Apr 23 17:52:21.009575 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006001 2566 flags.go:64] FLAG: --eviction-soft="" Apr 23 17:52:21.009575 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006004 2566 flags.go:64] FLAG: --eviction-soft-grace-period="" Apr 23 17:52:21.009575 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006007 2566 flags.go:64] FLAG: --exit-on-lock-contention="false" Apr 23 17:52:21.009575 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006010 2566 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Apr 23 17:52:21.009575 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006013 2566 flags.go:64] FLAG: --experimental-mounter-path="" Apr 23 17:52:21.009575 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006016 2566 flags.go:64] FLAG: --fail-cgroupv1="false" Apr 23 17:52:21.009575 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006019 2566 flags.go:64] FLAG: --fail-swap-on="true" Apr 23 17:52:21.009575 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006021 2566 flags.go:64] FLAG: --feature-gates="" Apr 23 17:52:21.009575 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006026 2566 flags.go:64] FLAG: --file-check-frequency="20s" Apr 23 17:52:21.009575 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006029 2566 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Apr 23 17:52:21.009575 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006032 2566 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Apr 23 17:52:21.009575 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006035 2566 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Apr 23 17:52:21.010211 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006038 2566 flags.go:64] FLAG: --healthz-port="10248" Apr 23 17:52:21.010211 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006041 2566 flags.go:64] FLAG: --help="false" Apr 23 17:52:21.010211 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006044 2566 flags.go:64] FLAG: --hostname-override="ip-10-0-136-172.ec2.internal" Apr 23 17:52:21.010211 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006047 2566 flags.go:64] FLAG: --housekeeping-interval="10s" Apr 23 17:52:21.010211 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006050 2566 flags.go:64] FLAG: --http-check-frequency="20s" Apr 23 17:52:21.010211 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006053 2566 flags.go:64] FLAG: --image-credential-provider-bin-dir="/usr/libexec/kubelet-image-credential-provider-plugins" Apr 23 17:52:21.010211 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006056 2566 flags.go:64] FLAG: --image-credential-provider-config="/etc/kubernetes/credential-providers/ecr-credential-provider.yaml" Apr 23 17:52:21.010211 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006060 2566 flags.go:64] FLAG: --image-gc-high-threshold="85" Apr 23 17:52:21.010211 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006062 2566 flags.go:64] FLAG: --image-gc-low-threshold="80" Apr 23 17:52:21.010211 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006065 2566 flags.go:64] FLAG: 
--image-service-endpoint="" Apr 23 17:52:21.010211 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006068 2566 flags.go:64] FLAG: --kernel-memcg-notification="false" Apr 23 17:52:21.010211 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006071 2566 flags.go:64] FLAG: --kube-api-burst="100" Apr 23 17:52:21.010211 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006074 2566 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Apr 23 17:52:21.010211 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006077 2566 flags.go:64] FLAG: --kube-api-qps="50" Apr 23 17:52:21.010211 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006079 2566 flags.go:64] FLAG: --kube-reserved="" Apr 23 17:52:21.010211 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006082 2566 flags.go:64] FLAG: --kube-reserved-cgroup="" Apr 23 17:52:21.010211 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006085 2566 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Apr 23 17:52:21.010211 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006088 2566 flags.go:64] FLAG: --kubelet-cgroups="" Apr 23 17:52:21.010211 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006091 2566 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Apr 23 17:52:21.010211 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006093 2566 flags.go:64] FLAG: --lock-file="" Apr 23 17:52:21.010211 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006096 2566 flags.go:64] FLAG: --log-cadvisor-usage="false" Apr 23 17:52:21.010211 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006099 2566 flags.go:64] FLAG: --log-flush-frequency="5s" Apr 23 17:52:21.010211 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006102 2566 flags.go:64] FLAG: --log-json-info-buffer-size="0" Apr 23 17:52:21.010211 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006107 2566 flags.go:64] FLAG: --log-json-split-stream="false" Apr 23 17:52:21.010809 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006109 2566 flags.go:64] FLAG: --log-text-info-buffer-size="0" Apr 23 17:52:21.010809 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006113 2566 flags.go:64] FLAG: --log-text-split-stream="false" Apr 23 17:52:21.010809 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006115 2566 flags.go:64] FLAG: --logging-format="text" Apr 23 17:52:21.010809 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006118 2566 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Apr 23 17:52:21.010809 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006122 2566 flags.go:64] FLAG: --make-iptables-util-chains="true" Apr 23 17:52:21.010809 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006125 2566 flags.go:64] FLAG: --manifest-url="" Apr 23 17:52:21.010809 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006128 2566 flags.go:64] FLAG: --manifest-url-header="" Apr 23 17:52:21.010809 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006132 2566 flags.go:64] FLAG: --max-housekeeping-interval="15s" Apr 23 17:52:21.010809 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006135 2566 flags.go:64] FLAG: --max-open-files="1000000" Apr 23 17:52:21.010809 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006139 2566 flags.go:64] FLAG: --max-pods="110" Apr 23 17:52:21.010809 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006142 2566 flags.go:64] FLAG: --maximum-dead-containers="-1" Apr 23 17:52:21.010809 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006145 2566 flags.go:64] FLAG: 
--maximum-dead-containers-per-container="1" Apr 23 17:52:21.010809 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006148 2566 flags.go:64] FLAG: --memory-manager-policy="None" Apr 23 17:52:21.010809 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006153 2566 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Apr 23 17:52:21.010809 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006156 2566 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Apr 23 17:52:21.010809 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006159 2566 flags.go:64] FLAG: --node-ip="0.0.0.0" Apr 23 17:52:21.010809 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006162 2566 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhel" Apr 23 17:52:21.010809 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006169 2566 flags.go:64] FLAG: --node-status-max-images="50" Apr 23 17:52:21.010809 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006173 2566 flags.go:64] FLAG: --node-status-update-frequency="10s" Apr 23 17:52:21.010809 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006176 2566 flags.go:64] FLAG: --oom-score-adj="-999" Apr 23 17:52:21.010809 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006179 2566 flags.go:64] FLAG: --pod-cidr="" Apr 23 17:52:21.010809 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006182 2566 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8cfe89231412ff3ee8cb6207fa0be33cad0f08e88c9c0f1e9f7e8c6f14d6715" Apr 23 17:52:21.010809 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006188 2566 flags.go:64] FLAG: --pod-manifest-path="" Apr 23 17:52:21.011385 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006191 2566 flags.go:64] FLAG: --pod-max-pids="-1" Apr 23 17:52:21.011385 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006194 2566 flags.go:64] FLAG: --pods-per-core="0" Apr 23 17:52:21.011385 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006197 2566 flags.go:64] FLAG: --port="10250" Apr 23 17:52:21.011385 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006199 2566 flags.go:64] FLAG: --protect-kernel-defaults="false" Apr 23 17:52:21.011385 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006202 2566 flags.go:64] FLAG: --provider-id="aws:///us-east-1a/i-0980b606c8ae10cad" Apr 23 17:52:21.011385 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006206 2566 flags.go:64] FLAG: --qos-reserved="" Apr 23 17:52:21.011385 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006208 2566 flags.go:64] FLAG: --read-only-port="10255" Apr 23 17:52:21.011385 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006211 2566 flags.go:64] FLAG: --register-node="true" Apr 23 17:52:21.011385 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006214 2566 flags.go:64] FLAG: --register-schedulable="true" Apr 23 17:52:21.011385 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006217 2566 flags.go:64] FLAG: --register-with-taints="" Apr 23 17:52:21.011385 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006220 2566 flags.go:64] FLAG: --registry-burst="10" Apr 23 17:52:21.011385 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006224 2566 flags.go:64] FLAG: --registry-qps="5" Apr 23 17:52:21.011385 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006226 2566 flags.go:64] FLAG: --reserved-cpus="" Apr 23 17:52:21.011385 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006229 2566 flags.go:64] FLAG: --reserved-memory="" Apr 23 17:52:21.011385 
ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006233 2566 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Apr 23 17:52:21.011385 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006236 2566 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Apr 23 17:52:21.011385 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006239 2566 flags.go:64] FLAG: --rotate-certificates="false" Apr 23 17:52:21.011385 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006242 2566 flags.go:64] FLAG: --rotate-server-certificates="false" Apr 23 17:52:21.011385 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006245 2566 flags.go:64] FLAG: --runonce="false" Apr 23 17:52:21.011385 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006248 2566 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Apr 23 17:52:21.011385 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006251 2566 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Apr 23 17:52:21.011385 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006254 2566 flags.go:64] FLAG: --seccomp-default="false" Apr 23 17:52:21.011385 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006258 2566 flags.go:64] FLAG: --serialize-image-pulls="true" Apr 23 17:52:21.011385 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006261 2566 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Apr 23 17:52:21.011385 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006264 2566 flags.go:64] FLAG: --storage-driver-db="cadvisor" Apr 23 17:52:21.011385 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006267 2566 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Apr 23 17:52:21.012052 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006270 2566 flags.go:64] FLAG: --storage-driver-password="root" Apr 23 17:52:21.012052 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006273 2566 flags.go:64] FLAG: --storage-driver-secure="false" Apr 23 17:52:21.012052 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006276 2566 flags.go:64] FLAG: --storage-driver-table="stats" Apr 23 17:52:21.012052 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006279 2566 flags.go:64] FLAG: --storage-driver-user="root" Apr 23 17:52:21.012052 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006282 2566 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Apr 23 17:52:21.012052 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006284 2566 flags.go:64] FLAG: --sync-frequency="1m0s" Apr 23 17:52:21.012052 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006287 2566 flags.go:64] FLAG: --system-cgroups="" Apr 23 17:52:21.012052 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006290 2566 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Apr 23 17:52:21.012052 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006295 2566 flags.go:64] FLAG: --system-reserved-cgroup="" Apr 23 17:52:21.012052 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006298 2566 flags.go:64] FLAG: --tls-cert-file="" Apr 23 17:52:21.012052 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006314 2566 flags.go:64] FLAG: --tls-cipher-suites="[]" Apr 23 17:52:21.012052 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006319 2566 flags.go:64] FLAG: --tls-min-version="" Apr 23 17:52:21.012052 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006322 2566 flags.go:64] FLAG: --tls-private-key-file="" Apr 23 17:52:21.012052 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006324 2566 flags.go:64] FLAG: 
--topology-manager-policy="none"
Apr 23 17:52:21.012052 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006327 2566 flags.go:64] FLAG: --topology-manager-policy-options=""
Apr 23 17:52:21.012052 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006330 2566 flags.go:64] FLAG: --topology-manager-scope="container"
Apr 23 17:52:21.012052 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006333 2566 flags.go:64] FLAG: --v="2"
Apr 23 17:52:21.012052 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006337 2566 flags.go:64] FLAG: --version="false"
Apr 23 17:52:21.012052 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006342 2566 flags.go:64] FLAG: --vmodule=""
Apr 23 17:52:21.012052 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006346 2566 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Apr 23 17:52:21.012052 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.006349 2566 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Apr 23 17:52:21.012052 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006441 2566 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 23 17:52:21.012052 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006444 2566 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 23 17:52:21.012052 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006447 2566 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 23 17:52:21.012695 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006449 2566 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 23 17:52:21.012695 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006452 2566 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 23 17:52:21.012695 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006455 2566 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 23 17:52:21.012695 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006458 2566 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 23 17:52:21.012695 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006462 2566 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 23 17:52:21.012695 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006464 2566 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 23 17:52:21.012695 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006467 2566 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 23 17:52:21.012695 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006470 2566 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 23 17:52:21.012695 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006472 2566 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 23 17:52:21.012695 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006475 2566 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 23 17:52:21.012695 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006477 2566 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 23 17:52:21.012695 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006479 2566 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 23 17:52:21.012695 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006482 2566 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 23 17:52:21.012695 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006484 2566 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 23 17:52:21.012695 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006487 2566 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 23 17:52:21.012695 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006489 2566 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 23 17:52:21.012695 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006491 2566 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 23 17:52:21.012695 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006494 2566 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 23 17:52:21.012695 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006496 2566 feature_gate.go:328] unrecognized feature gate: Example2
Apr 23 17:52:21.012695 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006499 2566 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 23 17:52:21.013320 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006502 2566 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 23 17:52:21.013320 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006506 2566 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 23 17:52:21.013320 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006512 2566 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 23 17:52:21.013320 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006516 2566 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 23 17:52:21.013320 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006518 2566 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 23 17:52:21.013320 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006521 2566 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 23 17:52:21.013320 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006524 2566 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 23 17:52:21.013320 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006527 2566 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 23 17:52:21.013320 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006530 2566 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 23 17:52:21.013320 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006532 2566 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 23 17:52:21.013320 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006534 2566 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 23 17:52:21.013320 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006537 2566 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 23 17:52:21.013320 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006539 2566 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 23 17:52:21.013320 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006542 2566 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 23 17:52:21.013320 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006544 2566 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 23 17:52:21.013320 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006547 2566 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 23 17:52:21.013320 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006553 2566 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 23 17:52:21.013320 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006555 2566 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 23 17:52:21.013320 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006558 2566 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 23 17:52:21.014073 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006560 2566 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 23 17:52:21.014073 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006563 2566 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 23 17:52:21.014073 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006565 2566 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 23 17:52:21.014073 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006568 2566 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 23 17:52:21.014073 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006570 2566 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 23 17:52:21.014073 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006573 2566 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 23 17:52:21.014073 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006575 2566 feature_gate.go:328] unrecognized feature gate: Example
Apr 23 17:52:21.014073 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006579 2566 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 23 17:52:21.014073 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006582 2566 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 23 17:52:21.014073 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006586 2566 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 23 17:52:21.014073 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006588 2566 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 23 17:52:21.014073 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006591 2566 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 23 17:52:21.014073 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006593 2566 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 23 17:52:21.014073 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006596 2566 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 23 17:52:21.014073 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006598 2566 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 23 17:52:21.014073 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006602 2566 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 23 17:52:21.014073 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006604 2566 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 23 17:52:21.014073 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006607 2566 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 23 17:52:21.014073 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006610 2566 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 23 17:52:21.014743 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006612 2566 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 23 17:52:21.014743 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006615 2566 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 23 17:52:21.014743 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006618 2566 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 23 17:52:21.014743 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006620 2566 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 23 17:52:21.014743 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006623 2566 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 23 17:52:21.014743 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006625 2566 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 23 17:52:21.014743 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006628 2566 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 23 17:52:21.014743 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006630 2566 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 23 17:52:21.014743 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006633 2566 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 23 17:52:21.014743 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006636 2566 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 23 17:52:21.014743 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006639 2566 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 23 17:52:21.014743 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006642 2566 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 23 17:52:21.014743 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006644 2566 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 23 17:52:21.014743 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006647 2566 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 23 17:52:21.014743 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006649 2566 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 23 17:52:21.014743 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006652 2566 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 23 17:52:21.014743 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006654 2566 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 23 17:52:21.014743 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006657 2566 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 23 17:52:21.014743 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006659 2566 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 23 17:52:21.014743 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006662 2566 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 23 17:52:21.015611 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006664 2566 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 23 17:52:21.015611 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006666 2566 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 23 17:52:21.015611 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006669 2566 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 23 17:52:21.015611 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006671 2566 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 23 17:52:21.015611 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.006674 2566 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 23 17:52:21.015611 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.007364 2566 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Apr 23 17:52:21.016385 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.016363 2566 server.go:530] "Kubelet version" kubeletVersion="v1.33.9"
Apr 23 17:52:21.016446 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.016389 2566 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 23 17:52:21.016490 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016469 2566 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 23 17:52:21.016490 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016477 2566 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 23 17:52:21.016490 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016482 2566 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 23 17:52:21.016490 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016487 2566 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 23 17:52:21.016676 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016492 2566 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 23 17:52:21.016676 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016496 2566 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 23 17:52:21.016676 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016500 2566 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 23 17:52:21.016676 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016504 2566 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 23 17:52:21.016676 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016509 2566 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 23 17:52:21.016676 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016513 2566 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 23 17:52:21.016676 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016517 2566 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 23 17:52:21.016676 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016521 2566 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 23 17:52:21.016676 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016526 2566 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 23 17:52:21.016676 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016530 2566 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 23 17:52:21.016676 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016534 2566 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 23 17:52:21.016676 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016537 2566 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 23 17:52:21.016676 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016541 2566 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 23 17:52:21.016676 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016547 2566 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 23 17:52:21.016676 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016551 2566 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 23 17:52:21.016676 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016555 2566 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 23 17:52:21.016676 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016559 2566 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 23 17:52:21.016676 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016563 2566 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 23 17:52:21.016676 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016567 2566 feature_gate.go:328] unrecognized feature gate: Example2
Apr 23 17:52:21.016676 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016571 2566 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 23 17:52:21.017694 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016575 2566 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 23 17:52:21.017694 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016579 2566 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 23 17:52:21.017694 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016582 2566 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 23 17:52:21.017694 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016587 2566 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 23 17:52:21.017694 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016590 2566 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 23 17:52:21.017694 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016594 2566 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 23 17:52:21.017694 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016599 2566 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 23 17:52:21.017694 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016603 2566 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 23 17:52:21.017694 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016607 2566 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 23 17:52:21.017694 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016611 2566 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 23 17:52:21.017694 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016615 2566 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 23 17:52:21.017694 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016619 2566 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 23 17:52:21.017694 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016623 2566 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 23 17:52:21.017694 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016628 2566 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 23 17:52:21.017694 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016632 2566 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 23 17:52:21.017694 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016636 2566 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 23 17:52:21.017694 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016640 2566 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 23 17:52:21.017694 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016644 2566 feature_gate.go:328] unrecognized feature gate: Example
Apr 23 17:52:21.017694 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016649 2566 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 23 17:52:21.018196 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016655 2566 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 23 17:52:21.018196 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016660 2566 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 23 17:52:21.018196 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016664 2566 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 23 17:52:21.018196 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016668 2566 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 23 17:52:21.018196 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016674 2566 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 23 17:52:21.018196 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016680 2566 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 23 17:52:21.018196 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016685 2566 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 23 17:52:21.018196 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016690 2566 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 23 17:52:21.018196 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016694 2566 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 23 17:52:21.018196 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016698 2566 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 23 17:52:21.018196 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016702 2566 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 23 17:52:21.018196 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016706 2566 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 23 17:52:21.018196 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016710 2566 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 23 17:52:21.018196 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016714 2566 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 23 17:52:21.018196 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016718 2566 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 23 17:52:21.018196 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016722 2566 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 23 17:52:21.018196 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016726 2566 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 23 17:52:21.018196 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016731 2566 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 23 17:52:21.018196 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016735 2566 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 23 17:52:21.018196 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016739 2566 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 23 17:52:21.018753 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016743 2566 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 23 17:52:21.018753 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016747 2566 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 23 17:52:21.018753 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016751 2566 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 23 17:52:21.018753 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016755 2566 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 23 17:52:21.018753 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016759 2566 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 23 17:52:21.018753 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016765 2566 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 23 17:52:21.018753 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016773 2566 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 23 17:52:21.018753 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016778 2566 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 23 17:52:21.018753 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016783 2566 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 23 17:52:21.018753 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016787 2566 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 23 17:52:21.018753 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016791 2566 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 23 17:52:21.018753 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016796 2566 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 23 17:52:21.018753 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016800 2566 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 23 17:52:21.018753 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016806 2566 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 23 17:52:21.018753 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016810 2566 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 23 17:52:21.018753 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016815 2566 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 23 17:52:21.018753 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016818 2566 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 23 17:52:21.018753 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016822 2566 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 23 17:52:21.018753 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016827 2566 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 23 17:52:21.018753 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016831 2566 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 23 17:52:21.019516 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016835 2566 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 23 17:52:21.019516 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016839 2566 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 23 17:52:21.019516 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.016843 2566 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 23 17:52:21.019516 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.016850 2566 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Apr 23 17:52:21.019516 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017022 2566 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 23 17:52:21.019516 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017031 2566 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 23 17:52:21.019516 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017035 2566 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 23 17:52:21.019516 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017040 2566 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 23 17:52:21.019516 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017045 2566 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 23 17:52:21.019516 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017049 2566 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 23 17:52:21.019516 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017054 2566 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 23 17:52:21.019516 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017058 2566 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 23 17:52:21.019516 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017062 2566 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 23 17:52:21.019516 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017069 2566 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 23 17:52:21.019516 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017075 2566 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 23 17:52:21.019906 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017079 2566 feature_gate.go:328] unrecognized feature gate: Example
Apr 23 17:52:21.019906 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017084 2566 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 23 17:52:21.019906 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017088 2566 feature_gate.go:328] unrecognized feature gate: Example2
Apr 23 17:52:21.019906 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017094 2566 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 23 17:52:21.019906 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017099 2566 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 23 17:52:21.019906 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017104 2566 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 23 17:52:21.019906 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017108 2566 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 23 17:52:21.019906 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017112 2566 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 23 17:52:21.019906 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017116 2566 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 23 17:52:21.019906 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017120 2566 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 23 17:52:21.019906 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017124 2566 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 23 17:52:21.019906 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017130 2566 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 23 17:52:21.019906 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017134 2566 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 23 17:52:21.019906 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017139 2566 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 23 17:52:21.019906 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017143 2566 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 23 17:52:21.019906 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017147 2566 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 23 17:52:21.019906 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017151 2566 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 23 17:52:21.019906 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017155 2566 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 23 17:52:21.019906 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017159 2566 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 23 17:52:21.019906 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017164 2566 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 23 17:52:21.020464 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017168 2566 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 23 17:52:21.020464 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017172 2566 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 23 17:52:21.020464 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017176 2566 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 23 17:52:21.020464 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017180 2566 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 23 17:52:21.020464 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017184 2566 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 23 17:52:21.020464 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017188 2566 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 23 17:52:21.020464 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017192 2566 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 23 17:52:21.020464 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017196 2566 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 23 17:52:21.020464 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017200 2566 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 23 17:52:21.020464 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017204 2566 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 23 17:52:21.020464 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017208 2566 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 23 17:52:21.020464 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017212 2566 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 23 17:52:21.020464 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017216 2566 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 23 17:52:21.020464 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017220 2566 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 23 17:52:21.020464 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017225 2566 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 23 17:52:21.020464 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017229 2566 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 23 17:52:21.020464 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017234 2566 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 23 17:52:21.020464 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017238 2566 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 23 17:52:21.020464 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017242 2566 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 23 17:52:21.020464 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017247 2566 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 23 17:52:21.020944 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017250 2566 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 23 17:52:21.020944 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017254 2566 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 23 17:52:21.020944 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017258 2566 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 23 17:52:21.020944 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017262 2566 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 23 17:52:21.020944 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017267 2566 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 23 17:52:21.020944 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017272 2566 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 23 17:52:21.020944 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017275 2566 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 23 17:52:21.020944 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017279 2566 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 23 17:52:21.020944 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017284 2566 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 23 17:52:21.020944 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017288 2566 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 23 17:52:21.020944 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017292 2566 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 23 17:52:21.020944 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017295 2566 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 23 17:52:21.020944 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017321 2566 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 23 17:52:21.020944 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017326 2566 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 23 17:52:21.020944 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017329 2566 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 23 17:52:21.020944 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017333 2566 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 23 17:52:21.020944 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017337 2566 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 23 17:52:21.020944 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017340 2566 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 23 17:52:21.020944 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017345 2566 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 23 17:52:21.021532 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017350 2566 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 23 17:52:21.021532 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017354 2566 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 23 17:52:21.021532 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017358 2566 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 23 17:52:21.021532 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017362 2566 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 23 17:52:21.021532 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017366 2566 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 23 17:52:21.021532 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017370 2566 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 23 17:52:21.021532 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017374 2566 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 23 17:52:21.021532 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017378 2566 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 23 17:52:21.021532 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017382 2566 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 23 17:52:21.021532 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017386 2566 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 23 17:52:21.021532 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017390 2566 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 23 17:52:21.021532 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017394 2566 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 23 17:52:21.021532 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017398 2566 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 23 17:52:21.021532 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017402 2566 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 23 17:52:21.021532 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017405 2566 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 23 17:52:21.021532 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:21.017409 2566 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 23 17:52:21.022057 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.017416 2566 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Apr 23 17:52:21.022057 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.018160 2566 server.go:962] "Client rotation is on, will bootstrap in background"
Apr 23 17:52:21.023095 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.023077 2566 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Apr 23 17:52:21.024075 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.024062 2566 server.go:1019] "Starting client certificate rotation"
Apr 23 17:52:21.024187 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.024168 2566 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet"
Apr 23 17:52:21.024223 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.024210 2566 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Apr 23 17:52:21.045072 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.045046 2566 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Apr 23 17:52:21.047268 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.047247 2566 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Apr 23 17:52:21.056586 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.056571 2566 log.go:25] "Validated CRI v1 runtime API"
Apr 23 17:52:21.062867 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.062852 2566 log.go:25] "Validated CRI v1 image API"
Apr 23 17:52:21.064785 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.064763 2566 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 23 17:52:21.071414 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.071389 2566 fs.go:135] Filesystem UUIDs: map[65ca1892-34c1-4513-bd73-5135d73ce3e0:/dev/nvme0n1p3 7B77-95E7:/dev/nvme0n1p2 a0b1af54-ee29-4333-b389-89064f83ae21:/dev/nvme0n1p4]
Apr 23 17:52:21.071504 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.071412 2566 fs.go:136] Filesystem partitions: map[/dev/nvme0n1p3:{mountpoint:/boot major:259 minor:3 fsType:ext4 blockSize:0} /dev/nvme0n1p4:{mountpoint:/var major:259 minor:4 fsType:xfs blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}]
Apr 23 17:52:21.073956 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.073929 2566 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Apr 23 17:52:21.077489 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.077384 2566 manager.go:217] Machine: {Timestamp:2026-04-23 17:52:21.075588954 +0000 UTC m=+0.370867545 CPUVendorID:GenuineIntel NumCores:8 NumPhysicalCores:4 NumSockets:1 CpuFrequency:3040697 MemoryCapacity:32812171264 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ec2340f49b9bd1d9f398996ffd674f0f SystemUUID:ec2340f4-9b9b-d1d9-f398-996ffd674f0f BootID:d4eb942a-c58f-4032-afed-32d7ee6011a4 Filesystems:[{Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6103040 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16406085632 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/nvme0n1p3 DeviceMajor:259 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16406085632 Type:vfs Inodes:4005392 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6562435072 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/nvme0n1p4 DeviceMajor:259 DeviceMinor:4 Capacity:128243970048 Type:vfs Inodes:62651840 HasInodes:true}] DiskMap:map[259:0:{Name:nvme0n1 Major:259 Minor:0 Size:128849018880 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:02:e6:c0:28:0f:67 Speed:0 Mtu:9001} {Name:ens5 MacAddress:02:e6:c0:28:0f:67 Speed:0 Mtu:9001} {Name:ovs-system MacAddress:12:00:9b:bf:33:7c Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:32812171264 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 4] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:1 Threads:[1 5] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:2 Threads:[2 6] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:3 Threads:[3 7] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:}] Caches:[{Id:0 Size:34603008 Type:Unified Level:3}] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Apr 23 17:52:21.077489 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.077481 2566 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
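The fs.go and manager.go entries above enumerate each mounted filesystem with the capacity and inode figures that cAdvisor reads via statfs(2). As a rough illustration only (this is not the kubelet's or cAdvisor's own code; the mount points are the ones named in the log), the same numbers can be reproduced in Go:

	// Sketch: recompute the per-mount capacity/inode figures reported in
	// the fs.go log entries above, using statfs(2) via golang.org/x/sys.
	package main

	import (
		"fmt"

		"golang.org/x/sys/unix"
	)

	func main() {
		// Mount points taken from the "Filesystem partitions" entry above.
		for _, mnt := range []string{"/boot", "/var", "/run", "/tmp"} {
			var st unix.Statfs_t
			if err := unix.Statfs(mnt, &st); err != nil {
				fmt.Printf("%s: %v\n", mnt, err)
				continue
			}
			capacity := st.Blocks * uint64(st.Bsize) // total bytes on the filesystem
			free := st.Bavail * uint64(st.Bsize)     // bytes available to unprivileged users
			fmt.Printf("%s: capacity=%d free=%d inodes=%d\n", mnt, capacity, free, st.Files)
		}
	}

Run on this node, the /var line should print capacity=128243970048, matching the Capacity recorded for /dev/nvme0n1p4 in the Machine entry above.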
Apr 23 17:52:21.077622 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.077564 2566 manager.go:233] Version: {KernelVersion:5.14.0-570.107.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20260414-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Apr 23 17:52:21.077872 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.077850 2566 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 23 17:52:21.078015 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.077873 2566 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-10-0-136-172.ec2.internal","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 23 17:52:21.078061 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.078025 2566 topology_manager.go:138] "Creating topology manager with none policy"
Apr 23 17:52:21.078061 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.078033 2566 container_manager_linux.go:306] "Creating device plugin manager"
Apr 23 17:52:21.078061 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.078046 2566 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Apr 23 17:52:21.078761 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.078751 2566 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Apr 23 17:52:21.079871 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.079861 2566 state_mem.go:36] "Initialized new in-memory state store"
Apr 23 17:52:21.079974 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.079965 2566 server.go:1267] "Using root directory" path="/var/lib/kubelet"
Apr 23 17:52:21.082180 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.082169 2566 kubelet.go:491] "Attempting to sync node with API server"
Apr 23 17:52:21.082226 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.082185 2566 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 23 17:52:21.082226 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.082199 2566 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Apr 23 17:52:21.082226 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.082208 2566 kubelet.go:397] "Adding apiserver pod source"
Apr 23 17:52:21.082226 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.082222 2566 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 23 17:52:21.083385 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.083226 2566 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Apr 23 17:52:21.083463 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.083393 2566 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Apr 23 17:52:21.086251 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.086233 2566 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.10-2.rhaos4.20.gita4d0894.el9" apiVersion="v1"
Apr 23 17:52:21.087497 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.087483 2566 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 23 17:52:21.088738 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.088725 2566 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Apr 23 17:52:21.088798 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.088742 2566 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Apr 23 17:52:21.088798 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.088748 2566 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Apr 23 17:52:21.088798 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.088753 2566 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Apr 23 17:52:21.088798 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.088759 2566 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Apr 23 17:52:21.088798 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.088765 2566 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Apr 23 17:52:21.088798 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.088771 2566 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Apr 23 17:52:21.088798 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.088784 2566 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Apr 23 17:52:21.088798 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.088792 2566 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Apr 23 17:52:21.088798 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.088798 2566 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Apr 23 17:52:21.089123 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.088811 2566 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Apr 23 17:52:21.089123 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.088820 2566 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Apr 23 17:52:21.089644 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.089632 2566 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Apr 23 17:52:21.089644 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.089642 2566 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Apr 23 17:52:21.093108 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.093094 2566 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 23 17:52:21.093195 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.093130 2566 server.go:1295] "Started kubelet"
Apr 23 17:52:21.093248 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.093194 2566 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 23 17:52:21.094145 ip-10-0-136-172 systemd[1]: Started Kubernetes Kubelet.
Apr 23 17:52:21.094280 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.094221 2566 server.go:317] "Adding debug handlers to kubelet server"
Apr 23 17:52:21.094360 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.094280 2566 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 23 17:52:21.094414 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.094384 2566 server_v1.go:47] "podresources" method="list" useActivePods=true
Apr 23 17:52:21.094632 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.094595 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:21.094632 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.094618 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 23 17:52:21.094884 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.094761 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 23 17:52:21.095537 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.095520 2566 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 23 17:52:21.100382 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.100355 2566 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-krxpk"
Apr 23 17:52:21.101640 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.100615 2566 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-172.ec2.internal.18a90dd61489c32d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-172.ec2.internal,UID:ip-10-0-136-172.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.093106477 +0000 UTC m=+0.388385068,LastTimestamp:2026-04-23 17:52:21.093106477 +0000 UTC m=+0.388385068,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}"
Apr 23 17:52:21.101775 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.101758 2566 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 23 17:52:21.101992 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.101971 2566 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving"
Apr 23 17:52:21.102539 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.102516 2566 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 23 17:52:21.102539 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.102538 2566 volume_manager.go:295] "The desired_state_of_world populator starts"
Apr 23 17:52:21.102665 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.102593 2566 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 23 17:52:21.102724 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.102680 2566 reconstruct.go:97] "Volume reconstruction finished"
Apr 23 17:52:21.102724 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.102694 2566 reconciler.go:26] "Reconciler: start to sync state"
Apr 23 17:52:21.103235 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.103220 2566 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Apr 23 17:52:21.103317 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.103237 2566 factory.go:55] Registering systemd factory
Apr 23 17:52:21.103317 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.103245 2566 factory.go:223] Registration of the systemd container factory successfully
Apr 23 17:52:21.103707 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.103685 2566 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:52:21.103775 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.103758 2566 factory.go:153] Registering CRI-O factory
Apr 23 17:52:21.103775 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.103775 2566 factory.go:223] Registration of the crio container factory successfully
Apr 23 17:52:21.103903 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.103801 2566 factory.go:103] Registering Raw factory
Apr 23 17:52:21.103903 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.103817 2566 manager.go:1196] Started watching for new ooms in manager
Apr 23 17:52:21.104385 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.104351 2566 kubelet.go:1618] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Apr 23 17:52:21.104872 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.104851 2566 manager.go:319] Starting recovery of all containers
Apr 23 17:52:21.107250 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.107214 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 23 17:52:21.107360 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.107341 2566 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Apr 23 17:52:21.115467 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.115448 2566 manager.go:324] Recovery completed
Apr 23 17:52:21.119787 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.119772 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:21.122562 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.122545 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:21.122643 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.122576 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:21.122643 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.122589 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:21.123110 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.123093 2566 cpu_manager.go:222] "Starting CPU manager" policy="none"
Apr 23 17:52:21.123110 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.123109 2566 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s"
Apr 23 17:52:21.123207 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.123124 2566 state_mem.go:36] "Initialized new in-memory state store"
Apr 23 17:52:21.125538 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.125524 2566 policy_none.go:49] "None policy: Start"
Apr 23 17:52:21.125611 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.125543 2566 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 23 17:52:21.125611 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.125580 2566 state_mem.go:35] "Initializing new in-memory state store"
Apr 23 17:52:21.127022 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.124136 2566 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-172.ec2.internal.18a90dd6164b36c9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-172.ec2.internal,UID:ip-10-0-136-172.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-136-172.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.122561737 +0000 UTC m=+0.417840326,LastTimestamp:2026-04-23 17:52:21.122561737 +0000 UTC m=+0.417840326,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}"
Apr 23 17:52:21.136429 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.136358 2566 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-172.ec2.internal.18a90dd6164b8252 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-172.ec2.internal,UID:ip-10-0-136-172.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-136-172.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.122581074 +0000 UTC m=+0.417859661,LastTimestamp:2026-04-23 17:52:21.122581074 +0000 UTC m=+0.417859661,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}"
Apr 23 17:52:21.143963 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.143865 2566 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-172.ec2.internal.18a90dd6164bafb1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-172.ec2.internal,UID:ip-10-0-136-172.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-136-172.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.122592689 +0000 UTC m=+0.417871277,LastTimestamp:2026-04-23 17:52:21.122592689 +0000 UTC m=+0.417871277,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}"
Apr 23 17:52:21.163362 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.163335 2566 manager.go:341] "Starting Device Plugin manager"
Apr 23 17:52:21.173212 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.163465 2566 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 23 17:52:21.173212 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.163503 2566 server.go:85] "Starting device plugin registration server"
Apr 23 17:52:21.173212 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.163724 2566 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 23 17:52:21.173212 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.163734 2566 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 23 17:52:21.173212 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.163846 2566 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Apr 23 17:52:21.173212 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.163938 2566 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Apr 23 17:52:21.173212 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.163945 2566 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 23 17:52:21.173212 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.164521 2566 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\""
Apr 23 17:52:21.173212 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.164551 2566 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:52:21.177220 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.177138 2566 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-172.ec2.internal.18a90dd618e09f4b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-172.ec2.internal,UID:ip-10-0-136-172.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.165907787 +0000 UTC m=+0.461186363,LastTimestamp:2026-04-23 17:52:21.165907787 +0000 UTC m=+0.461186363,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}"
Apr 23 17:52:21.247438 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.247394 2566 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 23 17:52:21.248908 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.248891 2566 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 23 17:52:21.249044 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.248922 2566 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 23 17:52:21.249044 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.248944 2566 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 23 17:52:21.249044 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.248953 2566 kubelet.go:2451] "Starting kubelet main sync loop"
Apr 23 17:52:21.249044 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.248992 2566 kubelet.go:2475] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Apr 23 17:52:21.256665 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.256637 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 23 17:52:21.263832 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.263815 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:21.264851 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.264833 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:21.264935 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.264866 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:21.264935 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.264876 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:21.264935 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.264900 2566 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-136-172.ec2.internal"
Apr 23 17:52:21.273523 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.273445 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-172.ec2.internal.18a90dd6164b36c9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-172.ec2.internal.18a90dd6164b36c9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-172.ec2.internal,UID:ip-10-0-136-172.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-136-172.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.122561737 +0000 UTC m=+0.417840326,LastTimestamp:2026-04-23 17:52:21.264850176 +0000 UTC m=+0.560128765,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}"
Apr 23 17:52:21.285489 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.285458 2566 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-136-172.ec2.internal"
Apr 23 17:52:21.285573 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.285449 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-172.ec2.internal.18a90dd6164b8252\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-172.ec2.internal.18a90dd6164b8252 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-172.ec2.internal,UID:ip-10-0-136-172.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-136-172.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.122581074 +0000 UTC m=+0.417859661,LastTimestamp:2026-04-23 17:52:21.264870472 +0000 UTC m=+0.560149060,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}"
Apr 23 17:52:21.294547 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.294476 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-172.ec2.internal.18a90dd6164bafb1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-172.ec2.internal.18a90dd6164bafb1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-172.ec2.internal,UID:ip-10-0-136-172.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-136-172.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.122592689 +0000 UTC m=+0.417871277,LastTimestamp:2026-04-23 17:52:21.26488205 +0000 UTC m=+0.560160637,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}"
Apr 23 17:52:21.314498 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.314473 2566 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms"
Apr 23 17:52:21.349770 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.349702 2566 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal","kube-system/kube-apiserver-proxy-ip-10-0-136-172.ec2.internal"]
Apr 23 17:52:21.349884 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.349841 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:21.350842 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.350825 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:21.350940 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.350856 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:21.350940 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.350867 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:21.352049 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.352035 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:21.352182 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.352164 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal"
Apr 23 17:52:21.352232 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.352199 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:21.352734 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.352718 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:21.352822 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.352748 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:21.352822 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.352760 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:21.352822 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.352815 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:21.352956 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.352840 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:21.352956 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.352855 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:21.353847 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.353831 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-136-172.ec2.internal"
Apr 23 17:52:21.353930 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.353860 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:21.354551 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.354537 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:21.354626 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.354561 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:21.354626 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.354574 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:21.361966 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.361902 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-172.ec2.internal.18a90dd6164b36c9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-172.ec2.internal.18a90dd6164b36c9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-172.ec2.internal,UID:ip-10-0-136-172.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-136-172.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.122561737 +0000 UTC m=+0.417840326,LastTimestamp:2026-04-23 17:52:21.350842225 +0000 UTC m=+0.646120816,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}"
Apr 23 17:52:21.365845 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.365830 2566 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-136-172.ec2.internal\" not found" node="ip-10-0-136-172.ec2.internal"
Apr 23 17:52:21.368963 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.368895 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-172.ec2.internal.18a90dd6164b8252\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-172.ec2.internal.18a90dd6164b8252 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-172.ec2.internal,UID:ip-10-0-136-172.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-136-172.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.122581074 +0000 UTC m=+0.417859661,LastTimestamp:2026-04-23 17:52:21.350861042 +0000 UTC m=+0.646139634,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}"
Apr 23 17:52:21.370361 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.370343 2566 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-136-172.ec2.internal\" not found" node="ip-10-0-136-172.ec2.internal"
Apr 23 17:52:21.377121 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.377060 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-172.ec2.internal.18a90dd6164bafb1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-172.ec2.internal.18a90dd6164bafb1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-172.ec2.internal,UID:ip-10-0-136-172.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-136-172.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.122592689 +0000 UTC m=+0.417871277,LastTimestamp:2026-04-23 17:52:21.350871284 +0000 UTC m=+0.646149875,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}"
Apr 23 17:52:21.383998 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.383936 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-172.ec2.internal.18a90dd6164b36c9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-172.ec2.internal.18a90dd6164b36c9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-172.ec2.internal,UID:ip-10-0-136-172.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-136-172.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.122561737 +0000 UTC m=+0.417840326,LastTimestamp:2026-04-23 17:52:21.352734574 +0000 UTC m=+0.648013162,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}"
Apr 23 17:52:21.393558 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.393500 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-172.ec2.internal.18a90dd6164b8252\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-172.ec2.internal.18a90dd6164b8252 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-172.ec2.internal,UID:ip-10-0-136-172.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-136-172.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.122581074 +0000 UTC m=+0.417859661,LastTimestamp:2026-04-23 17:52:21.352752895 +0000 UTC m=+0.648031483,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}"
Apr 23 17:52:21.400371 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.400277 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-172.ec2.internal.18a90dd6164bafb1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-172.ec2.internal.18a90dd6164bafb1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-172.ec2.internal,UID:ip-10-0-136-172.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-136-172.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.122592689 +0000 UTC m=+0.417871277,LastTimestamp:2026-04-23 17:52:21.352765273 +0000 UTC m=+0.648043863,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}"
Apr 23 17:52:21.404196 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.404178 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/7fc0473024b4c48d914a6628102ac7a2-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal\" (UID: \"7fc0473024b4c48d914a6628102ac7a2\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal"
Apr 23 17:52:21.404280 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.404209 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7fc0473024b4c48d914a6628102ac7a2-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal\" (UID: \"7fc0473024b4c48d914a6628102ac7a2\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal"
Apr 23 17:52:21.404280 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.404237 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/feca641f7e256521d5e07f060738f192-config\") pod \"kube-apiserver-proxy-ip-10-0-136-172.ec2.internal\" (UID: \"feca641f7e256521d5e07f060738f192\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-136-172.ec2.internal"
Apr 23 17:52:21.410464 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.410403 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-172.ec2.internal.18a90dd6164b36c9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-172.ec2.internal.18a90dd6164b36c9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-172.ec2.internal,UID:ip-10-0-136-172.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-136-172.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.122561737 +0000 UTC m=+0.417840326,LastTimestamp:2026-04-23 17:52:21.352828741 +0000 UTC m=+0.648107525,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}"
Apr 23 17:52:21.418820 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.418760 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-172.ec2.internal.18a90dd6164b8252\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-172.ec2.internal.18a90dd6164b8252 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-172.ec2.internal,UID:ip-10-0-136-172.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-136-172.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.122581074 +0000 UTC m=+0.417859661,LastTimestamp:2026-04-23 17:52:21.352846939 +0000 UTC m=+0.648125531,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}"
Apr 23 17:52:21.428764 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.428695 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-172.ec2.internal.18a90dd6164bafb1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-172.ec2.internal.18a90dd6164bafb1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-172.ec2.internal,UID:ip-10-0-136-172.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-136-172.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.122592689 +0000 UTC m=+0.417871277,LastTimestamp:2026-04-23 17:52:21.352861424 +0000 UTC m=+0.648140015,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}"
Apr 23 17:52:21.439021 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.438962 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-172.ec2.internal.18a90dd6164b36c9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-172.ec2.internal.18a90dd6164b36c9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-172.ec2.internal,UID:ip-10-0-136-172.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-136-172.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.122561737 +0000 UTC m=+0.417840326,LastTimestamp:2026-04-23 17:52:21.35455107 +0000 UTC m=+0.649829659,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}"
Apr 23 17:52:21.447904 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.447849 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-172.ec2.internal.18a90dd6164b8252\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-172.ec2.internal.18a90dd6164b8252 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-172.ec2.internal,UID:ip-10-0-136-172.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-136-172.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.122581074 +0000 UTC m=+0.417859661,LastTimestamp:2026-04-23 17:52:21.354568337 +0000 UTC m=+0.649846929,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}"
Apr 23 17:52:21.454419 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.454356 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-172.ec2.internal.18a90dd6164bafb1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-172.ec2.internal.18a90dd6164bafb1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-172.ec2.internal,UID:ip-10-0-136-172.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-136-172.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.122592689 +0000 UTC m=+0.417871277,LastTimestamp:2026-04-23 17:52:21.354578231 +0000 UTC m=+0.649856819,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}"
Apr 23 17:52:21.486614 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.486581 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:21.487589 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.487572 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:21.487662 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.487607 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:21.487662 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.487617 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:21.487662 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.487649 2566 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-136-172.ec2.internal"
Apr 23 17:52:21.496548 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.496472 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-172.ec2.internal.18a90dd6164b36c9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-172.ec2.internal.18a90dd6164b36c9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-172.ec2.internal,UID:ip-10-0-136-172.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-136-172.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.122561737 +0000 UTC m=+0.417840326,LastTimestamp:2026-04-23 17:52:21.487591769 +0000 UTC m=+0.782870357,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}"
Apr 23 17:52:21.503619 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.503559 2566 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-136-172.ec2.internal"
Apr 23 17:52:21.503718 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.503602 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-172.ec2.internal.18a90dd6164b8252\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-172.ec2.internal.18a90dd6164b8252 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-172.ec2.internal,UID:ip-10-0-136-172.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-136-172.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.122581074 +0000 UTC m=+0.417859661,LastTimestamp:2026-04-23 17:52:21.487612208 +0000 UTC m=+0.782890796,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}"
Apr 23 17:52:21.505372 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.505345 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/7fc0473024b4c48d914a6628102ac7a2-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal\" (UID: \"7fc0473024b4c48d914a6628102ac7a2\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal"
Apr 23 17:52:21.505452 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.505380 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7fc0473024b4c48d914a6628102ac7a2-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal\" (UID: \"7fc0473024b4c48d914a6628102ac7a2\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal"
Apr 23 17:52:21.505452 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.505397 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/feca641f7e256521d5e07f060738f192-config\") pod \"kube-apiserver-proxy-ip-10-0-136-172.ec2.internal\" (UID: \"feca641f7e256521d5e07f060738f192\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-136-172.ec2.internal"
Apr 23 17:52:21.505452 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.505441 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7fc0473024b4c48d914a6628102ac7a2-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal\" (UID: \"7fc0473024b4c48d914a6628102ac7a2\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal"
Apr 23 17:52:21.505452 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.505446 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/7fc0473024b4c48d914a6628102ac7a2-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal\" (UID: \"7fc0473024b4c48d914a6628102ac7a2\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal"
Apr 23 17:52:21.505571 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.505481 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/feca641f7e256521d5e07f060738f192-config\") pod \"kube-apiserver-proxy-ip-10-0-136-172.ec2.internal\" (UID: \"feca641f7e256521d5e07f060738f192\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-136-172.ec2.internal"
Apr 23 17:52:21.511980 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.511925 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-172.ec2.internal.18a90dd6164bafb1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-172.ec2.internal.18a90dd6164bafb1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-172.ec2.internal,UID:ip-10-0-136-172.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-136-172.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.122592689 +0000 UTC m=+0.417871277,LastTimestamp:2026-04-23 17:52:21.487621271 +0000 UTC m=+0.782899859,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}"
Apr 23 17:52:21.669888 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.669858 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal"
Apr 23 17:52:21.672764 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.672744 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-136-172.ec2.internal"
Apr 23 17:52:21.723611 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.723581 2566 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms"
Apr 23 17:52:21.904419 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.904336 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:21.905242 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.905226 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:21.905337 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.905258 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:21.905337 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.905268 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:21.905337 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:21.905296 2566 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-136-172.ec2.internal"
Apr 23 17:52:21.912333 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.912221 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-172.ec2.internal.18a90dd6164b36c9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-172.ec2.internal.18a90dd6164b36c9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-172.ec2.internal,UID:ip-10-0-136-172.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-136-172.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.122561737 +0000 UTC m=+0.417840326,LastTimestamp:2026-04-23 17:52:21.905241443 +0000 UTC m=+1.200520030,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}"
Apr 23 17:52:21.912482 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.912413 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 23 17:52:21.913539 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.913521 2566 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-136-172.ec2.internal"
Apr 23 17:52:21.913776 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:21.913720 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-172.ec2.internal.18a90dd6164b8252\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-172.ec2.internal.18a90dd6164b8252 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-172.ec2.internal,UID:ip-10-0-136-172.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-136-172.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.122581074 +0000 UTC m=+0.417859661,LastTimestamp:2026-04-23 17:52:21.905263166 +0000 UTC m=+1.200541754,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}"
Apr 23 17:52:22.043452 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:22.043422 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 23 17:52:22.091047 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:22.091016 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 23 17:52:22.104028 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:22.104002 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:22.136461 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:22.136424 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fc0473024b4c48d914a6628102ac7a2.slice/crio-148b9ec767f1d5a7e27e164dd6635185a8064e4b5510a4434662d05ce23333f1 WatchSource:0}: Error finding container 148b9ec767f1d5a7e27e164dd6635185a8064e4b5510a4434662d05ce23333f1: Status 404 returned error can't find the container with id 148b9ec767f1d5a7e27e164dd6635185a8064e4b5510a4434662d05ce23333f1
Apr 23 17:52:22.136929 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:52:22.136913 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfeca641f7e256521d5e07f060738f192.slice/crio-d99f731cb6f3641be50cdc8c098cb171f8073ff5e3116982f96c18d3247a4eaf WatchSource:0}: Error finding container d99f731cb6f3641be50cdc8c098cb171f8073ff5e3116982f96c18d3247a4eaf: Status 404 returned error can't find the container with id d99f731cb6f3641be50cdc8c098cb171f8073ff5e3116982f96c18d3247a4eaf
Apr 23 17:52:22.140651 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:22.140638 2566 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 23 17:52:22.148488 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:22.148415 2566 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{kube-apiserver-proxy-ip-10-0-136-172.ec2.internal.18a90dd652fd446c kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-proxy-ip-10-0-136-172.ec2.internal,UID:feca641f7e256521d5e07f060738f192,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{haproxy},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:294af5c64228434d1ed6ee8ea3ac802e3c999aa847223e3b2efa18425a9fe421\",Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:22.140863596 +0000 UTC m=+1.436142171,LastTimestamp:2026-04-23 17:52:22.140863596 +0000 UTC m=+1.436142171,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}"
Apr 23 17:52:22.157251 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:22.157164 2566 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd652fe1828 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal,UID:7fc0473024b4c48d914a6628102ac7a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\",Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:22.1409178 +0000 UTC m=+1.436196375,LastTimestamp:2026-04-23 17:52:22.1409178 +0000 UTC m=+1.436196375,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}"
Apr 23 17:52:22.252653 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:22.252605 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal" event={"ID":"7fc0473024b4c48d914a6628102ac7a2","Type":"ContainerStarted","Data":"148b9ec767f1d5a7e27e164dd6635185a8064e4b5510a4434662d05ce23333f1"}
Apr 23 17:52:22.253446 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:22.253425 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-136-172.ec2.internal" event={"ID":"feca641f7e256521d5e07f060738f192","Type":"ContainerStarted","Data":"d99f731cb6f3641be50cdc8c098cb171f8073ff5e3116982f96c18d3247a4eaf"}
Apr 23 17:52:22.460664 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:22.460561 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 23 17:52:22.533291 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:22.533255 2566 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="1.6s"
Apr 23 17:52:22.715419 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:22.715176 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:22.716686 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:22.716279 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:22.716686 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:22.716340 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:22.716686 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:22.716359 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:22.716686 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:22.716396 2566 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-136-172.ec2.internal"
Apr 23 17:52:22.733121 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:22.733090 2566 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-136-172.ec2.internal"
Apr 23 17:52:23.102137 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:23.102060 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:23.693669 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:23.693592 2566 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{kube-apiserver-proxy-ip-10-0-136-172.ec2.internal.18a90dd6aef83a25 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-proxy-ip-10-0-136-172.ec2.internal,UID:feca641f7e256521d5e07f060738f192,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{haproxy},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:294af5c64228434d1ed6ee8ea3ac802e3c999aa847223e3b2efa18425a9fe421\" in 1.543s (1.543s including waiting). Image size: 488332864 bytes.,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:23.684037157 +0000 UTC m=+2.979315752,LastTimestamp:2026-04-23 17:52:23.684037157 +0000 UTC m=+2.979315752,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}"
Apr 23 17:52:23.700992 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:23.700794 2566 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd6af072f19 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal,UID:7fc0473024b4c48d914a6628102ac7a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\" in 1.544s (1.544s including waiting). Image size: 468435751 bytes.,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:23.685017369 +0000 UTC m=+2.980295963,LastTimestamp:2026-04-23 17:52:23.685017369 +0000 UTC m=+2.980295963,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}"
Apr 23 17:52:23.766411 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:23.766242 2566 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{kube-apiserver-proxy-ip-10-0-136-172.ec2.internal.18a90dd6b33c26da kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-proxy-ip-10-0-136-172.ec2.internal,UID:feca641f7e256521d5e07f060738f192,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{haproxy},},Reason:Created,Message:Created container: haproxy,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:23.75559753 +0000 UTC m=+3.050876118,LastTimestamp:2026-04-23 17:52:23.75559753 +0000 UTC m=+3.050876118,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}"
Apr 23 17:52:23.773637 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:23.773566 2566 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{kube-apiserver-proxy-ip-10-0-136-172.ec2.internal.18a90dd6b397fd9f kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-proxy-ip-10-0-136-172.ec2.internal,UID:feca641f7e256521d5e07f060738f192,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{haproxy},},Reason:Started,Message:Started container haproxy,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:23.761616287 +0000 UTC m=+3.056894861,LastTimestamp:2026-04-23 17:52:23.761616287 +0000 UTC m=+3.056894861,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}"
Apr 23 17:52:23.773748 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:23.773718 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 23 17:52:24.103895 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:24.103805 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:24.144102 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:24.144066 2566 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="3.2s"
Apr 23 17:52:24.204259 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:24.204153 2566 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd6cd82b6d3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal,UID:7fc0473024b4c48d914a6628102ac7a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:24.196429523 +0000 UTC m=+3.491708112,LastTimestamp:2026-04-23 17:52:24.196429523 +0000 UTC m=+3.491708112,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}"
Apr 23 17:52:24.214399 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:24.214339 2566 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd6cdf32e0e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal,UID:7fc0473024b4c48d914a6628102ac7a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:24.203800078 +0000 UTC m=+3.499078667,LastTimestamp:2026-04-23 17:52:24.203800078 +0000 UTC m=+3.499078667,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}"
setup,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:24.203800078 +0000 UTC m=+3.499078667,LastTimestamp:2026-04-23 17:52:24.203800078 +0000 UTC m=+3.499078667,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}" Apr 23 17:52:24.258096 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:24.258066 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal" event={"ID":"7fc0473024b4c48d914a6628102ac7a2","Type":"ContainerStarted","Data":"70b29c4e7b25e7df921bcb39aa5765dc82b40aba8190450088d76b40077d5faa"} Apr 23 17:52:24.258248 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:24.258118 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:24.258996 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:24.258977 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:24.259093 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:24.259015 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:24.259093 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:24.259029 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:24.259231 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:24.259216 2566 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-136-172.ec2.internal\" not found" node="ip-10-0-136-172.ec2.internal" Apr 23 17:52:24.259483 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:24.259467 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-136-172.ec2.internal" event={"ID":"feca641f7e256521d5e07f060738f192","Type":"ContainerStarted","Data":"e53ea91dd1e54239ef2d8e3e6efef972f617d4649cb65842cac9d1d46d0a6833"} Apr 23 17:52:24.259536 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:24.259515 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:24.260245 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:24.260231 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:24.260327 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:24.260256 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:24.260327 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:24.260265 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:24.260424 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:24.260413 2566 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-136-172.ec2.internal\" not found" node="ip-10-0-136-172.ec2.internal" Apr 23 17:52:24.333899 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:24.333868 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:24.335183 ip-10-0-136-172 
kubenswrapper[2566]: I0423 17:52:24.335163 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:24.335275 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:24.335196 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:24.335275 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:24.335208 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:24.335275 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:24.335236 2566 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-136-172.ec2.internal" Apr 23 17:52:24.351094 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:24.351064 2566 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-136-172.ec2.internal" Apr 23 17:52:24.710180 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:24.710153 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 17:52:24.840461 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:24.840432 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 23 17:52:25.101388 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:25.101336 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:25.256150 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:25.256115 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 23 17:52:25.262494 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:25.262467 2566 generic.go:358] "Generic (PLEG): container finished" podID="7fc0473024b4c48d914a6628102ac7a2" containerID="70b29c4e7b25e7df921bcb39aa5765dc82b40aba8190450088d76b40077d5faa" exitCode=0 Apr 23 17:52:25.262612 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:25.262513 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal" event={"ID":"7fc0473024b4c48d914a6628102ac7a2","Type":"ContainerDied","Data":"70b29c4e7b25e7df921bcb39aa5765dc82b40aba8190450088d76b40077d5faa"} Apr 23 17:52:25.262612 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:25.262546 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 
17:52:25.262612 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:25.262554 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:25.263443 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:25.263426 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:25.263520 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:25.263455 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:25.263520 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:25.263469 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:25.263520 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:25.263434 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:25.263634 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:25.263543 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:25.263634 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:25.263560 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:25.263705 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:25.263659 2566 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-136-172.ec2.internal\" not found" node="ip-10-0-136-172.ec2.internal" Apr 23 17:52:25.263745 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:25.263723 2566 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-136-172.ec2.internal\" not found" node="ip-10-0-136-172.ec2.internal" Apr 23 17:52:25.274137 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:25.274061 2566 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd70d3d0917 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal,UID:7fc0473024b4c48d914a6628102ac7a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\" already present on machine,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:25.265604887 +0000 UTC m=+4.560883480,LastTimestamp:2026-04-23 17:52:25.265604887 +0000 UTC m=+4.560883480,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}" Apr 23 17:52:25.375319 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:25.375198 2566 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd71329fb56 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal,UID:7fc0473024b4c48d914a6628102ac7a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:25.365019478 +0000 UTC m=+4.660298057,LastTimestamp:2026-04-23 17:52:25.365019478 +0000 UTC m=+4.660298057,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}" Apr 23 17:52:25.384346 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:25.384250 2566 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd713a8acb9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal,UID:7fc0473024b4c48d914a6628102ac7a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:25.373322425 +0000 UTC m=+4.668601014,LastTimestamp:2026-04-23 17:52:25.373322425 +0000 UTC m=+4.668601014,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}" Apr 23 17:52:26.104140 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:26.104115 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:26.265693 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:26.265667 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_7fc0473024b4c48d914a6628102ac7a2/kube-rbac-proxy-crio/0.log" Apr 23 17:52:26.266061 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:26.265968 2566 generic.go:358] "Generic (PLEG): container finished" podID="7fc0473024b4c48d914a6628102ac7a2" containerID="1f21dc2fc4fec2ec7942e39e7dff78a1f38216a54950f739494bbfeeef8d2a3a" exitCode=1 Apr 23 17:52:26.266061 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:26.266006 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal" event={"ID":"7fc0473024b4c48d914a6628102ac7a2","Type":"ContainerDied","Data":"1f21dc2fc4fec2ec7942e39e7dff78a1f38216a54950f739494bbfeeef8d2a3a"} Apr 23 17:52:26.266061 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:26.266044 2566 kubelet_node_status.go:413] 
"Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:26.267296 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:26.267281 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:26.267358 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:26.267325 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:26.267358 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:26.267336 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:26.267505 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:26.267493 2566 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-136-172.ec2.internal\" not found" node="ip-10-0-136-172.ec2.internal" Apr 23 17:52:26.267544 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:26.267536 2566 scope.go:117] "RemoveContainer" containerID="1f21dc2fc4fec2ec7942e39e7dff78a1f38216a54950f739494bbfeeef8d2a3a" Apr 23 17:52:26.275756 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:26.275687 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd70d3d0917\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd70d3d0917 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal,UID:7fc0473024b4c48d914a6628102ac7a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\" already present on machine,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:25.265604887 +0000 UTC m=+4.560883480,LastTimestamp:2026-04-23 17:52:26.26933185 +0000 UTC m=+5.564610441,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}" Apr 23 17:52:26.371230 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:26.371153 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd71329fb56\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd71329fb56 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal,UID:7fc0473024b4c48d914a6628102ac7a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 
17:52:25.365019478 +0000 UTC m=+4.660298057,LastTimestamp:2026-04-23 17:52:26.360195279 +0000 UTC m=+5.655473866,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}" Apr 23 17:52:26.379234 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:26.379142 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd713a8acb9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd713a8acb9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal,UID:7fc0473024b4c48d914a6628102ac7a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:25.373322425 +0000 UTC m=+4.668601014,LastTimestamp:2026-04-23 17:52:26.367832521 +0000 UTC m=+5.663111110,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}" Apr 23 17:52:27.106049 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:27.106022 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:27.269344 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:27.269296 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_7fc0473024b4c48d914a6628102ac7a2/kube-rbac-proxy-crio/1.log" Apr 23 17:52:27.269726 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:27.269710 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_7fc0473024b4c48d914a6628102ac7a2/kube-rbac-proxy-crio/0.log" Apr 23 17:52:27.270027 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:27.270008 2566 generic.go:358] "Generic (PLEG): container finished" podID="7fc0473024b4c48d914a6628102ac7a2" containerID="885dc1170cbf7a1767fc909e82e5d5f916b57e6c28910dfefc9c8976d40478ab" exitCode=1 Apr 23 17:52:27.270070 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:27.270042 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal" event={"ID":"7fc0473024b4c48d914a6628102ac7a2","Type":"ContainerDied","Data":"885dc1170cbf7a1767fc909e82e5d5f916b57e6c28910dfefc9c8976d40478ab"} Apr 23 17:52:27.270070 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:27.270068 2566 scope.go:117] "RemoveContainer" containerID="1f21dc2fc4fec2ec7942e39e7dff78a1f38216a54950f739494bbfeeef8d2a3a" Apr 23 17:52:27.270150 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:27.270088 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:27.270957 ip-10-0-136-172 kubenswrapper[2566]: I0423 
17:52:27.270935 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:27.271058 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:27.270973 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:27.271058 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:27.270988 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:27.271751 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:27.271735 2566 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-136-172.ec2.internal\" not found" node="ip-10-0-136-172.ec2.internal" Apr 23 17:52:27.271792 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:27.271787 2566 scope.go:117] "RemoveContainer" containerID="885dc1170cbf7a1767fc909e82e5d5f916b57e6c28910dfefc9c8976d40478ab" Apr 23 17:52:27.271938 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:27.271914 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_openshift-machine-config-operator(7fc0473024b4c48d914a6628102ac7a2)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal" podUID="7fc0473024b4c48d914a6628102ac7a2" Apr 23 17:52:27.279494 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:27.279379 2566 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd784d269d3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal,UID:7fc0473024b4c48d914a6628102ac7a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_openshift-machine-config-operator(7fc0473024b4c48d914a6628102ac7a2),Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:27.271883219 +0000 UTC m=+6.567161806,LastTimestamp:2026-04-23 17:52:27.271883219 +0000 UTC m=+6.567161806,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}" Apr 23 17:52:27.354269 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:27.354232 2566 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="6.4s" Apr 23 17:52:27.551256 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:27.551173 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:27.552413 ip-10-0-136-172 kubenswrapper[2566]: 
I0423 17:52:27.552387 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:27.552511 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:27.552425 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:27.552511 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:27.552439 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:27.552511 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:27.552478 2566 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-136-172.ec2.internal" Apr 23 17:52:27.568882 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:27.568852 2566 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-136-172.ec2.internal" Apr 23 17:52:27.796971 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:27.796935 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 23 17:52:28.111892 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:28.111864 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:28.272946 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:28.272922 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_7fc0473024b4c48d914a6628102ac7a2/kube-rbac-proxy-crio/1.log" Apr 23 17:52:28.273873 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:28.273847 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:28.274995 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:28.274970 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:28.275089 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:28.275005 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:28.275089 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:28.275017 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:28.275223 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:28.275209 2566 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-136-172.ec2.internal\" not found" node="ip-10-0-136-172.ec2.internal" Apr 23 17:52:28.275275 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:28.275264 2566 scope.go:117] "RemoveContainer" containerID="885dc1170cbf7a1767fc909e82e5d5f916b57e6c28910dfefc9c8976d40478ab" Apr 23 17:52:28.275415 ip-10-0-136-172 
kubenswrapper[2566]: E0423 17:52:28.275400 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_openshift-machine-config-operator(7fc0473024b4c48d914a6628102ac7a2)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal" podUID="7fc0473024b4c48d914a6628102ac7a2" Apr 23 17:52:28.282814 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:28.282748 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd784d269d3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd784d269d3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal,UID:7fc0473024b4c48d914a6628102ac7a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_openshift-machine-config-operator(7fc0473024b4c48d914a6628102ac7a2),Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:27.271883219 +0000 UTC m=+6.567161806,LastTimestamp:2026-04-23 17:52:28.275374696 +0000 UTC m=+7.570653283,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}" Apr 23 17:52:29.104473 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:29.104446 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:29.623054 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:29.623027 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 17:52:29.854100 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:29.854068 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 23 17:52:30.102476 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:30.102422 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:31.011992 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:31.011962 2566 reflector.go:200] "Failed to watch" err="failed to 
list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 23 17:52:31.101763 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:31.101740 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:31.165402 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:31.165373 2566 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-136-172.ec2.internal\" not found" Apr 23 17:52:32.104657 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:32.104626 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:33.102536 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:33.102506 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:33.764706 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:33.764674 2566 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:52:33.969731 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:33.969697 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:33.973145 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:33.973129 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:33.973225 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:33.973159 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:33.973225 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:33.973170 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:33.973225 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:33.973198 2566 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-136-172.ec2.internal" Apr 23 17:52:33.991446 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:33.991416 2566 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-136-172.ec2.internal" Apr 23 17:52:34.103337 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:34.103257 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot 
get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:35.104631 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:35.104600 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:36.102802 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:36.102775 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:37.103560 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:37.103525 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:37.200394 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:37.200358 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 23 17:52:37.617959 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:37.617924 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 23 17:52:37.868168 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:37.868090 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 17:52:38.104513 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:38.104478 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:39.103574 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:39.103548 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:40.104260 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:40.104229 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:40.461861 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:40.461782 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services 
is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 23 17:52:40.776248 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:40.776160 2566 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:52:40.991809 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:40.991769 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:40.992903 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:40.992879 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:40.993010 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:40.992916 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:40.993010 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:40.992927 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:40.993010 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:40.992955 2566 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-136-172.ec2.internal" Apr 23 17:52:41.011077 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:41.011039 2566 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-136-172.ec2.internal" Apr 23 17:52:41.102860 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:41.102833 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:41.166388 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:41.166355 2566 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-136-172.ec2.internal\" not found" Apr 23 17:52:41.250079 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:41.250048 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:41.251093 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:41.251071 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:41.251175 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:41.251103 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:41.251175 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:41.251116 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:41.251374 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:41.251361 2566 kubelet.go:3336] "No need to create a mirror pod, since failed to get node 
info from the cluster" err="node \"ip-10-0-136-172.ec2.internal\" not found" node="ip-10-0-136-172.ec2.internal" Apr 23 17:52:41.251443 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:41.251410 2566 scope.go:117] "RemoveContainer" containerID="885dc1170cbf7a1767fc909e82e5d5f916b57e6c28910dfefc9c8976d40478ab" Apr 23 17:52:41.264977 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:41.264878 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd70d3d0917\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd70d3d0917 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal,UID:7fc0473024b4c48d914a6628102ac7a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\" already present on machine,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:25.265604887 +0000 UTC m=+4.560883480,LastTimestamp:2026-04-23 17:52:41.253391149 +0000 UTC m=+20.548669742,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}" Apr 23 17:52:41.358593 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:41.358519 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd71329fb56\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd71329fb56 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal,UID:7fc0473024b4c48d914a6628102ac7a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:25.365019478 +0000 UTC m=+4.660298057,LastTimestamp:2026-04-23 17:52:41.351114333 +0000 UTC m=+20.646392912,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}" Apr 23 17:52:41.369297 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:41.369217 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd713a8acb9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd713a8acb9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal,UID:7fc0473024b4c48d914a6628102ac7a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:25.373322425 +0000 UTC m=+4.668601014,LastTimestamp:2026-04-23 17:52:41.358957646 +0000 UTC m=+20.654236234,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}" Apr 23 17:52:42.102507 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:42.102481 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:42.294814 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:42.294789 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_7fc0473024b4c48d914a6628102ac7a2/kube-rbac-proxy-crio/2.log" Apr 23 17:52:42.295189 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:42.295173 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_7fc0473024b4c48d914a6628102ac7a2/kube-rbac-proxy-crio/1.log" Apr 23 17:52:42.295549 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:42.295526 2566 generic.go:358] "Generic (PLEG): container finished" podID="7fc0473024b4c48d914a6628102ac7a2" containerID="27d5df7b55abb8a8c57648159cb70784497fb2fd641e6a1001bd08b01495153a" exitCode=1 Apr 23 17:52:42.295620 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:42.295560 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal" event={"ID":"7fc0473024b4c48d914a6628102ac7a2","Type":"ContainerDied","Data":"27d5df7b55abb8a8c57648159cb70784497fb2fd641e6a1001bd08b01495153a"} Apr 23 17:52:42.295620 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:42.295591 2566 scope.go:117] "RemoveContainer" containerID="885dc1170cbf7a1767fc909e82e5d5f916b57e6c28910dfefc9c8976d40478ab" Apr 23 17:52:42.295685 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:42.295663 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:42.297631 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:42.297617 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:42.297686 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:42.297646 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:42.297686 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:42.297657 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:42.298548 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:42.297853 2566 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-136-172.ec2.internal\" not found" 
node="ip-10-0-136-172.ec2.internal" Apr 23 17:52:42.298548 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:42.297905 2566 scope.go:117] "RemoveContainer" containerID="27d5df7b55abb8a8c57648159cb70784497fb2fd641e6a1001bd08b01495153a" Apr 23 17:52:42.298548 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:42.298068 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_openshift-machine-config-operator(7fc0473024b4c48d914a6628102ac7a2)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal" podUID="7fc0473024b4c48d914a6628102ac7a2" Apr 23 17:52:42.309261 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:42.309191 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd784d269d3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd784d269d3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal,UID:7fc0473024b4c48d914a6628102ac7a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_openshift-machine-config-operator(7fc0473024b4c48d914a6628102ac7a2),Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:27.271883219 +0000 UTC m=+6.567161806,LastTimestamp:2026-04-23 17:52:42.298032873 +0000 UTC m=+21.593311472,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}" Apr 23 17:52:43.103418 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:43.103390 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:43.298553 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:43.298529 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_7fc0473024b4c48d914a6628102ac7a2/kube-rbac-proxy-crio/2.log" Apr 23 17:52:44.104876 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:44.104848 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:45.102876 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:45.102846 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:46.104062 ip-10-0-136-172 
kubenswrapper[2566]: I0423 17:52:46.104034 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:47.102457 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:47.102427 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:47.785846 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:47.785812 2566 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:52:48.011936 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:48.011894 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:48.013095 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:48.013076 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:48.013193 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:48.013111 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:48.013193 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:48.013126 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:48.013193 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:48.013156 2566 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-136-172.ec2.internal" Apr 23 17:52:48.028970 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:48.028944 2566 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-136-172.ec2.internal" Apr 23 17:52:48.102378 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:48.102354 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:49.105979 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:49.105948 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:50.102275 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:50.102246 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:51.109973 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:51.109945 2566 
csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:51.166943 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:51.166909 2566 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-136-172.ec2.internal\" not found" Apr 23 17:52:52.102355 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:52.102328 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:53.104263 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:53.104227 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:53.336127 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:53.336096 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 17:52:54.103682 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:54.103655 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:54.337424 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:54.337395 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 23 17:52:54.798717 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:54.798682 2566 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:52:55.029193 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:55.029156 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:55.030185 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:55.030168 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:55.030236 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:55.030215 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:55.030236 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:55.030231 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" 
event="NodeHasSufficientPID" Apr 23 17:52:55.030294 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:55.030256 2566 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-136-172.ec2.internal" Apr 23 17:52:55.047752 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:55.047728 2566 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-136-172.ec2.internal" Apr 23 17:52:55.102937 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:55.102888 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:55.249960 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:55.249930 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:55.250948 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:55.250931 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:55.251011 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:55.250966 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:55.251011 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:55.250977 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:55.251188 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:55.251176 2566 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-136-172.ec2.internal\" not found" node="ip-10-0-136-172.ec2.internal" Apr 23 17:52:55.251235 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:55.251226 2566 scope.go:117] "RemoveContainer" containerID="27d5df7b55abb8a8c57648159cb70784497fb2fd641e6a1001bd08b01495153a" Apr 23 17:52:55.251382 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:55.251367 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_openshift-machine-config-operator(7fc0473024b4c48d914a6628102ac7a2)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal" podUID="7fc0473024b4c48d914a6628102ac7a2" Apr 23 17:52:55.260042 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:55.259957 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd784d269d3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd784d269d3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal,UID:7fc0473024b4c48d914a6628102ac7a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_openshift-machine-config-operator(7fc0473024b4c48d914a6628102ac7a2),Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:27.271883219 +0000 UTC m=+6.567161806,LastTimestamp:2026-04-23 17:52:55.251341268 +0000 UTC m=+34.546619856,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}" Apr 23 17:52:56.101975 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:56.101943 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:57.103773 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:57.103745 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:57.181936 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:52:57.181907 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 23 17:52:58.104208 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:58.104180 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:59.103127 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:52:59.103093 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:00.103633 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:00.103602 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:00.691465 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:00.691439 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 23 17:53:01.103460 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:01.103439 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:01.168037 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:01.168002 2566 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-136-172.ec2.internal\" not found" Apr 23 17:53:01.807460 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:01.807429 2566 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:53:02.048253 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:02.048212 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:02.049263 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:02.049245 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:02.049374 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:02.049281 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:02.049374 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:02.049295 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:02.049374 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:02.049345 2566 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-136-172.ec2.internal" Apr 23 17:53:02.067796 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:02.067741 2566 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-136-172.ec2.internal" Apr 23 17:53:02.104489 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:02.104470 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:03.102741 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:03.102711 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:04.105515 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:04.105483 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:05.105255 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:05.105226 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the 
cluster scope Apr 23 17:53:06.104588 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:06.104557 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:07.104358 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:07.104327 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:07.249654 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:07.249625 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:07.250810 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:07.250792 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:07.250894 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:07.250827 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:07.250894 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:07.250837 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:07.251062 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:07.251050 2566 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-136-172.ec2.internal\" not found" node="ip-10-0-136-172.ec2.internal" Apr 23 17:53:07.251107 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:07.251098 2566 scope.go:117] "RemoveContainer" containerID="27d5df7b55abb8a8c57648159cb70784497fb2fd641e6a1001bd08b01495153a" Apr 23 17:53:07.262826 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:07.262747 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd70d3d0917\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd70d3d0917 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal,UID:7fc0473024b4c48d914a6628102ac7a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\" already present on machine,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:25.265604887 +0000 UTC m=+4.560883480,LastTimestamp:2026-04-23 17:53:07.25187028 +0000 UTC m=+46.547148855,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}" Apr 23 17:53:07.351113 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:07.351025 2566 event.go:359] "Server rejected event (will not retry!)" err="events 
\"kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd71329fb56\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd71329fb56 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal,UID:7fc0473024b4c48d914a6628102ac7a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:25.365019478 +0000 UTC m=+4.660298057,LastTimestamp:2026-04-23 17:53:07.343614705 +0000 UTC m=+46.638893281,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}" Apr 23 17:53:07.362222 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:07.362117 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd713a8acb9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd713a8acb9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal,UID:7fc0473024b4c48d914a6628102ac7a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:25.373322425 +0000 UTC m=+4.668601014,LastTimestamp:2026-04-23 17:53:07.352366859 +0000 UTC m=+46.647645447,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}" Apr 23 17:53:08.104518 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:08.104482 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:08.334837 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:08.334809 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_7fc0473024b4c48d914a6628102ac7a2/kube-rbac-proxy-crio/3.log" Apr 23 17:53:08.335215 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:08.335178 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_7fc0473024b4c48d914a6628102ac7a2/kube-rbac-proxy-crio/2.log" Apr 23 17:53:08.335514 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:08.335494 2566 generic.go:358] "Generic (PLEG): container finished" podID="7fc0473024b4c48d914a6628102ac7a2" containerID="7e9651ae8fd49f0cdf94054a6585a65b915aef5e86933947e8d8f51d96e52b50" 
exitCode=1 Apr 23 17:53:08.335564 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:08.335526 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal" event={"ID":"7fc0473024b4c48d914a6628102ac7a2","Type":"ContainerDied","Data":"7e9651ae8fd49f0cdf94054a6585a65b915aef5e86933947e8d8f51d96e52b50"} Apr 23 17:53:08.335564 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:08.335557 2566 scope.go:117] "RemoveContainer" containerID="27d5df7b55abb8a8c57648159cb70784497fb2fd641e6a1001bd08b01495153a" Apr 23 17:53:08.335667 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:08.335654 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:08.336698 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:08.336641 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:08.336698 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:08.336673 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:08.336698 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:08.336685 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:08.336904 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:08.336890 2566 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-136-172.ec2.internal\" not found" node="ip-10-0-136-172.ec2.internal" Apr 23 17:53:08.336957 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:08.336937 2566 scope.go:117] "RemoveContainer" containerID="7e9651ae8fd49f0cdf94054a6585a65b915aef5e86933947e8d8f51d96e52b50" Apr 23 17:53:08.337071 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:08.337057 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_openshift-machine-config-operator(7fc0473024b4c48d914a6628102ac7a2)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal" podUID="7fc0473024b4c48d914a6628102ac7a2" Apr 23 17:53:08.345949 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:08.345871 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd784d269d3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd784d269d3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal,UID:7fc0473024b4c48d914a6628102ac7a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_openshift-machine-config-operator(7fc0473024b4c48d914a6628102ac7a2),Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 
17:52:27.271883219 +0000 UTC m=+6.567161806,LastTimestamp:2026-04-23 17:53:08.337029198 +0000 UTC m=+47.632307786,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}" Apr 23 17:53:08.816153 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:08.816122 2566 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:53:09.068617 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:09.068518 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:09.069942 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:09.069913 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:09.070047 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:09.069954 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:09.070047 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:09.069968 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:09.070047 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:09.070004 2566 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-136-172.ec2.internal" Apr 23 17:53:09.089288 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:09.089259 2566 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-136-172.ec2.internal" Apr 23 17:53:09.107197 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:09.107175 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:09.338276 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:09.338191 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_7fc0473024b4c48d914a6628102ac7a2/kube-rbac-proxy-crio/3.log" Apr 23 17:53:10.101943 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:10.101918 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:11.107766 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:11.107738 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:11.168752 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:11.168715 2566 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to 
get node info: node \"ip-10-0-136-172.ec2.internal\" not found" Apr 23 17:53:12.102590 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:12.102562 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:13.104541 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:13.104510 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:14.103499 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:14.103472 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:15.104705 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:15.104673 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:15.824145 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:15.824117 2566 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:53:16.090333 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:16.090246 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:16.091330 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:16.091298 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:16.091425 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:16.091346 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:16.091425 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:16.091356 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:16.091425 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:16.091380 2566 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-136-172.ec2.internal" Apr 23 17:53:16.100764 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:16.100743 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:16.109066 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:16.109039 2566 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" 
node="ip-10-0-136-172.ec2.internal" Apr 23 17:53:17.102150 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:17.102114 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:18.105495 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:18.105468 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:19.102974 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:19.102946 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:19.250045 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:19.250008 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:19.251066 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:19.251046 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:19.251169 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:19.251080 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:19.251169 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:19.251090 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:19.251345 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:19.251332 2566 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-136-172.ec2.internal\" not found" node="ip-10-0-136-172.ec2.internal" Apr 23 17:53:19.251394 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:19.251385 2566 scope.go:117] "RemoveContainer" containerID="7e9651ae8fd49f0cdf94054a6585a65b915aef5e86933947e8d8f51d96e52b50" Apr 23 17:53:19.251523 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:19.251509 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_openshift-machine-config-operator(7fc0473024b4c48d914a6628102ac7a2)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal" podUID="7fc0473024b4c48d914a6628102ac7a2" Apr 23 17:53:19.262016 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:19.261933 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd784d269d3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd784d269d3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal,UID:7fc0473024b4c48d914a6628102ac7a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_openshift-machine-config-operator(7fc0473024b4c48d914a6628102ac7a2),Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:27.271883219 +0000 UTC m=+6.567161806,LastTimestamp:2026-04-23 17:53:19.251480792 +0000 UTC m=+58.546759382,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}" Apr 23 17:53:20.104147 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:20.104115 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:21.104221 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:21.104193 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:21.168855 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:21.168826 2566 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-136-172.ec2.internal\" not found" Apr 23 17:53:22.102824 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:22.102796 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:22.833827 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:22.833794 2566 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:53:23.102489 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:23.102412 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:23.109714 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:23.109698 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:23.111482 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:23.111462 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:23.111540 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:23.111497 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:23.111540 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:23.111507 
2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:23.111540 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:23.111532 2566 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-136-172.ec2.internal" Apr 23 17:53:23.127536 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:23.127517 2566 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-136-172.ec2.internal" Apr 23 17:53:24.105879 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:24.105853 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:25.102446 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:25.102420 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:26.104612 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:26.104581 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:26.726621 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:26.726586 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 23 17:53:27.104015 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:27.103985 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:28.103394 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:28.103362 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:29.104675 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:29.104646 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:29.842964 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:29.842933 2566 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" 
interval="7s" Apr 23 17:53:29.972376 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:29.972351 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 17:53:30.104068 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:30.103993 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:30.128263 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:30.128240 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:30.129457 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:30.129438 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:30.129555 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:30.129469 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:30.129555 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:30.129483 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:30.129555 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:30.129511 2566 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-136-172.ec2.internal" Apr 23 17:53:30.147496 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:30.147470 2566 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-136-172.ec2.internal" Apr 23 17:53:31.104446 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:31.104410 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:31.169061 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:31.169032 2566 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-136-172.ec2.internal\" not found" Apr 23 17:53:32.102873 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:32.102769 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:33.104968 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:33.104940 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:33.585444 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:33.585404 2566 reflector.go:200] "Failed to watch" 
err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 23 17:53:34.107321 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:34.107275 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:34.249349 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:34.249287 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:34.250410 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:34.250390 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:34.250515 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:34.250426 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:34.250515 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:34.250441 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:34.250780 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:34.250765 2566 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-136-172.ec2.internal\" not found" node="ip-10-0-136-172.ec2.internal" Apr 23 17:53:34.250837 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:34.250828 2566 scope.go:117] "RemoveContainer" containerID="7e9651ae8fd49f0cdf94054a6585a65b915aef5e86933947e8d8f51d96e52b50" Apr 23 17:53:34.250967 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:34.250953 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_openshift-machine-config-operator(7fc0473024b4c48d914a6628102ac7a2)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal" podUID="7fc0473024b4c48d914a6628102ac7a2" Apr 23 17:53:34.260110 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:34.260034 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd784d269d3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd784d269d3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal,UID:7fc0473024b4c48d914a6628102ac7a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod 
kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_openshift-machine-config-operator(7fc0473024b4c48d914a6628102ac7a2),Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:27.271883219 +0000 UTC m=+6.567161806,LastTimestamp:2026-04-23 17:53:34.250918936 +0000 UTC m=+73.546197527,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}" Apr 23 17:53:35.104475 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:35.104442 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:36.102316 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:36.102268 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:36.853991 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:36.853959 2566 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:53:37.102964 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:37.102925 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:37.148496 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:37.148434 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:37.149548 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:37.149526 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:37.149660 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:37.149558 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:37.149660 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:37.149569 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:37.149660 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:37.149594 2566 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-136-172.ec2.internal" Apr 23 17:53:37.164998 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:37.164965 2566 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-136-172.ec2.internal" Apr 23 17:53:38.103980 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:38.103945 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User 
"system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:39.102789 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:39.102760 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:40.104703 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:40.104669 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:41.103991 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:41.103962 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:41.169668 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:41.169641 2566 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-136-172.ec2.internal\" not found" Apr 23 17:53:42.104542 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:42.104507 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:43.101931 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:43.101898 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:43.864067 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:43.864025 2566 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:53:44.103838 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:44.103804 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:44.165554 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:44.165487 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:44.166506 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:44.166488 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:44.166576 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:44.166523 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:44.166576 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:44.166533 2566 kubelet_node_status.go:736] "Recording event message 
for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:44.166576 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:44.166564 2566 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-136-172.ec2.internal" Apr 23 17:53:44.183808 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:44.183783 2566 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-136-172.ec2.internal" Apr 23 17:53:44.713372 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:44.713344 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 23 17:53:45.102714 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:45.102688 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:46.105831 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:46.105799 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:46.249740 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:46.249706 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:46.250719 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:46.250700 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:46.250811 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:46.250732 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:46.250811 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:46.250742 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:46.250965 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:46.250951 2566 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-136-172.ec2.internal\" not found" node="ip-10-0-136-172.ec2.internal" Apr 23 17:53:47.103314 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:47.103268 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:47.249574 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:47.249536 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:47.250550 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:47.250535 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" 
event="NodeHasSufficientMemory" Apr 23 17:53:47.250645 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:47.250562 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:47.250645 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:47.250572 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:47.250780 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:47.250769 2566 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-136-172.ec2.internal\" not found" node="ip-10-0-136-172.ec2.internal" Apr 23 17:53:47.250824 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:47.250816 2566 scope.go:117] "RemoveContainer" containerID="7e9651ae8fd49f0cdf94054a6585a65b915aef5e86933947e8d8f51d96e52b50" Apr 23 17:53:47.250947 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:47.250933 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_openshift-machine-config-operator(7fc0473024b4c48d914a6628102ac7a2)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal" podUID="7fc0473024b4c48d914a6628102ac7a2" Apr 23 17:53:47.259645 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:47.259573 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd784d269d3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd784d269d3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal,UID:7fc0473024b4c48d914a6628102ac7a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_openshift-machine-config-operator(7fc0473024b4c48d914a6628102ac7a2),Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:27.271883219 +0000 UTC m=+6.567161806,LastTimestamp:2026-04-23 17:53:47.250907717 +0000 UTC m=+86.546186292,Count:8,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}" Apr 23 17:53:48.101337 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:48.101294 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:49.105196 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:49.105166 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" 
at the cluster scope Apr 23 17:53:50.102906 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:50.102875 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:50.874240 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:50.874210 2566 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:53:51.107348 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:51.107319 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:51.170809 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:51.170739 2566 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-136-172.ec2.internal\" not found" Apr 23 17:53:51.184021 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:51.183998 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:51.184962 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:51.184944 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:51.185041 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:51.184976 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:51.185041 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:51.184991 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:51.185041 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:51.185029 2566 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-136-172.ec2.internal" Apr 23 17:53:51.205812 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:51.205788 2566 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-136-172.ec2.internal" Apr 23 17:53:52.106101 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:52.106068 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:53.102021 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:53.101993 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:54.104688 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:54.104660 2566 csi_plugin.go:988] Failed to contact API server when 
waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:55.102931 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:55.102898 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:56.105587 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:56.105555 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:57.106625 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:57.106589 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:57.884361 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:57.884329 2566 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:53:58.104126 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:58.104096 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:58.205956 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:58.205892 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:58.206908 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:58.206891 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:58.206986 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:58.206924 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:58.206986 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:58.206935 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:58.206986 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:58.206961 2566 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-136-172.ec2.internal" Apr 23 17:53:58.224577 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:58.224546 2566 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-136-172.ec2.internal" Apr 23 17:53:59.104241 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:59.104208 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io 
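Everything in this stretch fails the same way: the kubelet is still talking to the API server as system:anonymous, so every request it makes (csinodes, leases, nodes, event patches) is rejected by RBAC until its bootstrap client certificate is issued at 17:54:08 further down. A quick way to triage a dump like this is to tally the denials by verb and resource; a minimal sketch in Python, where the kubelet.log file name is an assumption and the regex simply follows the message format in the entries above:

import re
from collections import Counter

# Matches the RBAC denial text seen above; depending on the field,
# the log quotes the user as "system:anonymous" or \"system:anonymous\".
DENIAL = re.compile(r'User \\?"system:anonymous\\?" cannot (\w+) resource \\?"(\w+)\\?"')

counts = Counter()
with open("kubelet.log", encoding="utf-8") as fh:  # assumed dump of this journal
    for line in fh:
        counts.update(DENIAL.findall(line))

for (verb, resource), n in counts.most_common():
    print(f"{n:5d}  cannot {verb} {resource}")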
"ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:59.249446 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:59.249421 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:59.252406 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:59.250613 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:59.252406 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:59.250670 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:59.252406 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:59.250697 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:59.252406 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:59.251136 2566 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-136-172.ec2.internal\" not found" node="ip-10-0-136-172.ec2.internal" Apr 23 17:53:59.252406 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:59.251204 2566 scope.go:117] "RemoveContainer" containerID="7e9651ae8fd49f0cdf94054a6585a65b915aef5e86933947e8d8f51d96e52b50" Apr 23 17:53:59.259782 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:59.259691 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd70d3d0917\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd70d3d0917 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal,UID:7fc0473024b4c48d914a6628102ac7a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\" already present on machine,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:25.265604887 +0000 UTC m=+4.560883480,LastTimestamp:2026-04-23 17:53:59.252762556 +0000 UTC m=+98.548041149,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}" Apr 23 17:53:59.356048 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:59.355927 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd71329fb56\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd71329fb56 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal,UID:7fc0473024b4c48d914a6628102ac7a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:25.365019478 +0000 UTC m=+4.660298057,LastTimestamp:2026-04-23 17:53:59.345808096 +0000 UTC m=+98.641086694,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}" Apr 23 17:53:59.364349 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:59.364261 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd713a8acb9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd713a8acb9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal,UID:7fc0473024b4c48d914a6628102ac7a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:25.373322425 +0000 UTC m=+4.668601014,LastTimestamp:2026-04-23 17:53:59.354534881 +0000 UTC m=+98.649813469,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}" Apr 23 17:53:59.406670 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:59.406649 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_7fc0473024b4c48d914a6628102ac7a2/kube-rbac-proxy-crio/4.log" Apr 23 17:53:59.407069 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:59.407049 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_7fc0473024b4c48d914a6628102ac7a2/kube-rbac-proxy-crio/3.log" Apr 23 17:53:59.407398 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:59.407376 2566 generic.go:358] "Generic (PLEG): container finished" podID="7fc0473024b4c48d914a6628102ac7a2" containerID="6a71ba03dee221f9bd2a17cfb6108fb69eb3264565066869351d024e28c61fc0" exitCode=1 Apr 23 17:53:59.407482 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:59.407412 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal" event={"ID":"7fc0473024b4c48d914a6628102ac7a2","Type":"ContainerDied","Data":"6a71ba03dee221f9bd2a17cfb6108fb69eb3264565066869351d024e28c61fc0"} Apr 23 17:53:59.407482 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:59.407437 2566 scope.go:117] "RemoveContainer" containerID="7e9651ae8fd49f0cdf94054a6585a65b915aef5e86933947e8d8f51d96e52b50" Apr 23 17:53:59.407579 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:59.407542 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller 
attach/detach" Apr 23 17:53:59.408470 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:59.408452 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:59.408527 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:59.408494 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:59.408527 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:59.408510 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:59.408781 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:59.408766 2566 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-136-172.ec2.internal\" not found" node="ip-10-0-136-172.ec2.internal" Apr 23 17:53:59.408826 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:53:59.408813 2566 scope.go:117] "RemoveContainer" containerID="6a71ba03dee221f9bd2a17cfb6108fb69eb3264565066869351d024e28c61fc0" Apr 23 17:53:59.408943 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:59.408929 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_openshift-machine-config-operator(7fc0473024b4c48d914a6628102ac7a2)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal" podUID="7fc0473024b4c48d914a6628102ac7a2" Apr 23 17:53:59.420692 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:59.420623 2566 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd784d269d3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal.18a90dd784d269d3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal,UID:7fc0473024b4c48d914a6628102ac7a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_openshift-machine-config-operator(7fc0473024b4c48d914a6628102ac7a2),Source:EventSource{Component:kubelet,Host:ip-10-0-136-172.ec2.internal,},FirstTimestamp:2026-04-23 17:52:27.271883219 +0000 UTC m=+6.567161806,LastTimestamp:2026-04-23 17:53:59.408900096 +0000 UTC m=+98.704178686,Count:9,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-172.ec2.internal,}" Apr 23 17:53:59.587602 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:53:59.587575 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 23 17:54:00.106336 ip-10-0-136-172 
Apr 23 17:54:00.410641 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:00.410572 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_7fc0473024b4c48d914a6628102ac7a2/kube-rbac-proxy-crio/4.log"
Apr 23 17:54:01.110455 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:01.110427 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:54:01.170890 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:01.170856 2566 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:54:02.103824 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:02.103789 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:54:03.105185 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:03.105152 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:54:04.103566 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:04.103533 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:54:04.892576 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:04.892539 2566 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Apr 23 17:54:05.106377 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:05.106345 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:54:05.225045 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:05.224956 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:54:05.226039 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:05.226016 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:54:05.226116 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:05.226050 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:54:05.226116 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:05.226061 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:54:05.226116 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:05.226085 2566 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-136-172.ec2.internal"
Apr 23 17:54:05.254620 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:05.254594 2566 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-136-172.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-136-172.ec2.internal"
Apr 23 17:54:06.103280 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:06.103245 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:54:07.108703 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:07.108666 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:54:08.105351 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:08.105295 2566 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-172.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:54:08.134912 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:08.134881 2566 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-krxpk"
Apr 23 17:54:08.390175 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:08.390099 2566 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 23 17:54:09.025391 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:09.025347 2566 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Apr 23 17:54:09.025561 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:09.025518 2566 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
Apr 23 17:54:09.119789 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:09.119767 2566 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-136-172.ec2.internal" not found
Apr 23 17:54:09.135906 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:09.135871 2566 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2028-04-22 17:49:08 +0000 UTC" deadline="2027-11-25 16:50:22.39347087 +0000 UTC"
Apr 23 17:54:09.135906 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:09.135903 2566 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="13942h56m13.257570921s"
Apr 23 17:54:09.143916 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:09.143898 2566 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-136-172.ec2.internal" not found
Apr 23 17:54:09.209720 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:09.209698 2566 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-136-172.ec2.internal" not found
Apr 23 17:54:09.484577 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:09.484551 2566 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-136-172.ec2.internal" not found
Apr 23 17:54:09.484577 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:09.484577 2566 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-10-0-136-172.ec2.internal" not found
Apr 23 17:54:09.521880 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:09.521850 2566 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-136-172.ec2.internal" not found
Apr 23 17:54:09.538027 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:09.538000 2566 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-136-172.ec2.internal" not found
Apr 23 17:54:09.601573 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:09.601541 2566 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-136-172.ec2.internal" not found
Apr 23 17:54:09.875541 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:09.875513 2566 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-136-172.ec2.internal" not found
Apr 23 17:54:09.875541 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:09.875537 2566 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-10-0-136-172.ec2.internal" not found
Apr 23 17:54:10.146524 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:10.146445 2566 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-136-172.ec2.internal" not found
Apr 23 17:54:10.169382 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:10.169349 2566 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-136-172.ec2.internal" not found
Apr 23 17:54:10.226678 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:10.226656 2566 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-136-172.ec2.internal" not found
Apr 23 17:54:10.486954 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:10.486871 2566 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-136-172.ec2.internal" not found
Apr 23 17:54:10.486954 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:10.486901 2566 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-10-0-136-172.ec2.internal" not found
Apr 23 17:54:11.171861 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:11.171812 2566 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:54:11.898806 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:11.898777 2566 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-10-0-136-172.ec2.internal\" not found" node="ip-10-0-136-172.ec2.internal"
Apr 23 17:54:12.249909 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:12.249838 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:54:12.250908 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:12.250893 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:54:12.251000 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:12.250921 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:54:12.251000 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:12.250931 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:54:12.251166 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:12.251153 2566 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-136-172.ec2.internal\" not found" node="ip-10-0-136-172.ec2.internal"
Apr 23 17:54:12.251214 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:12.251199 2566 scope.go:117] "RemoveContainer" containerID="6a71ba03dee221f9bd2a17cfb6108fb69eb3264565066869351d024e28c61fc0"
Apr 23 17:54:12.251354 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:12.251338 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_openshift-machine-config-operator(7fc0473024b4c48d914a6628102ac7a2)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal" podUID="7fc0473024b4c48d914a6628102ac7a2"
Apr 23 17:54:12.255413 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:12.255398 2566 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:54:12.256185 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:12.256170 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:54:12.256237 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:12.256197 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:54:12.256237 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:12.256208 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:54:12.256237 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:12.256229 2566 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-136-172.ec2.internal"
Apr 23 17:54:12.264204 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:12.264190 2566 kubelet_node_status.go:81] "Successfully registered node" node="ip-10-0-136-172.ec2.internal"
Apr 23 17:54:12.264262 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:12.264210 2566 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-136-172.ec2.internal\": node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:54:12.275184 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:12.275166 2566 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:54:12.375602 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:12.375576 2566 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:54:12.475919 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:12.475893 2566 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-172.ec2.internal\" not found"
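Note the ordering here: "Successfully registered node" means the Node object now exists in the API server, but the kubelet reads its own node back through a local informer cache that has not synced yet, hence the "Error getting the current node from lister" entries roughly every 100ms until the *v1.Node "Caches populated" line at 17:54:15 below. The width of that window follows from the logged timestamps:

from datetime import datetime

FMT = "%H:%M:%S.%f"
registered = datetime.strptime("17:54:12.264204", FMT)  # "Successfully registered node"
cache_sync = datetime.strptime("17:54:15.322145", FMT)  # "Caches populated" type="*v1.Node", below
gap = (cache_sync - registered).total_seconds()
print(f"lister trailed the API server by {gap:.2f}s")  # ~3.06s of kubelet_node_status.go:515 errors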
Apr 23 17:54:12.576575 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:12.576546 2566 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:54:12.676692 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:12.676651 2566 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:54:12.777172 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:12.777132 2566 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:54:12.877762 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:12.877693 2566 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:54:12.978194 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:12.978157 2566 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:54:13.078875 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:13.078842 2566 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:54:13.164633 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:13.164557 2566 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Apr 23 17:54:13.173791 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:13.173764 2566 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Apr 23 17:54:13.178971 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:13.178953 2566 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:54:13.261964 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:13.261939 2566 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-rnl88"
Apr 23 17:54:13.269254 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:13.269236 2566 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-rnl88"
Apr 23 17:54:13.279762 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:13.279742 2566 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:54:13.380270 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:13.380244 2566 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:54:13.480910 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:13.480843 2566 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:54:13.581513 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:13.581473 2566 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:54:13.682576 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:13.682541 2566 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:54:13.783075 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:13.783018 2566 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:54:13.883658 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:13.883633 2566 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:54:13.984120 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:13.984081 2566 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:54:14.085138 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:14.085111 2566 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:54:14.185294 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:14.185257 2566 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:54:14.269941 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:14.269895 2566 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-22 17:49:13 +0000 UTC" deadline="2027-11-01 02:07:38.231066723 +0000 UTC"
Apr 23 17:54:14.269941 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:14.269935 2566 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="13352h13m23.961134659s"
Apr 23 17:54:14.286175 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:14.286156 2566 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:54:14.386807 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:14.386734 2566 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:54:14.487205 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:14.487154 2566 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:54:14.588052 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:14.588009 2566 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:54:14.688570 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:14.688498 2566 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:54:14.789559 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:14.789518 2566 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:54:14.890054 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:14.890017 2566 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:54:14.990532 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:14.990468 2566 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:54:15.091115 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:15.091077 2566 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:54:15.191871 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:15.191830 2566 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:54:15.292037 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:15.292006 2566 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-172.ec2.internal\" not found"
Apr 23 17:54:15.322145 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:15.322121 2566 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 23 17:54:15.403024 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:15.403000 2566 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal"
Apr 23 17:54:15.419295 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:15.419271 2566 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 23 17:54:15.421944 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:15.421928 2566 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-136-172.ec2.internal"
Apr 23 17:54:15.431600 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:15.431580 2566 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 23 17:54:16.137931 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.137888 2566 apiserver.go:52] "Watching apiserver"
Apr 23 17:54:16.146340 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.146295 2566 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Apr 23 17:54:16.146646 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.146624 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kube-system/global-pull-secret-syncer-jhvgn","kube-system/konnectivity-agent-kjt2w","kube-system/kube-apiserver-proxy-ip-10-0-136-172.ec2.internal","openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-c75rn","openshift-cluster-node-tuning-operator/tuned-6b8hr","openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal","openshift-multus/multus-48gh2","openshift-multus/network-metrics-daemon-96rvc","openshift-dns/node-resolver-msx9j","openshift-image-registry/node-ca-d4hwd","openshift-multus/multus-additional-cni-plugins-b4p7v","openshift-network-diagnostics/network-check-target-jd2kh","openshift-network-operator/iptables-alerter-slvjl","openshift-ovn-kubernetes/ovnkube-node-7wdbp"]
Apr 23 17:54:16.151025 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.150998 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-48gh2"
Apr 23 17:54:16.152990 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.152972 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-slvjl"
Apr 23 17:54:16.153175 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.153151 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp"
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.153607 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.153584 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Apr 23 17:54:16.153730 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.153711 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Apr 23 17:54:16.153859 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.153840 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-97zm5\"" Apr 23 17:54:16.154254 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.154241 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Apr 23 17:54:16.154483 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.154470 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Apr 23 17:54:16.154731 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.154715 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Apr 23 17:54:16.154905 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.154889 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Apr 23 17:54:16.155000 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.154928 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Apr 23 17:54:16.155000 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.154936 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-9wk7z\"" Apr 23 17:54:16.155239 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.155219 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-c75rn" Apr 23 17:54:16.155354 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.155276 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-dockercfg-rkhpr\"" Apr 23 17:54:16.155592 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.155578 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Apr 23 17:54:16.155797 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.155778 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Apr 23 17:54:16.155915 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.155780 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Apr 23 17:54:16.155915 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.155907 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Apr 23 17:54:16.156032 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.155926 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Apr 23 17:54:16.156095 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.156028 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Apr 23 17:54:16.157109 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.157089 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"openshift-service-ca.crt\"" Apr 23 17:54:16.157367 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.157350 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"kube-root-ca.crt\"" Apr 23 17:54:16.157460 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.157405 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-sa-dockercfg-2224b\"" Apr 23 17:54:16.157460 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.157421 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-metrics-serving-cert\"" Apr 23 17:54:16.157583 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.157563 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.159470 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.159449 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"kube-root-ca.crt\"" Apr 23 17:54:16.159606 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.159589 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"tuned-dockercfg-vs95g\"" Apr 23 17:54:16.159679 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.159647 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"openshift-service-ca.crt\"" Apr 23 17:54:16.159746 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.159692 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96rvc" Apr 23 17:54:16.159805 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:16.159750 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96rvc" podUID="ec0108e4-36f5-4959-99b0-8fe6326c7aaa" Apr 23 17:54:16.161927 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.161910 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-msx9j" Apr 23 17:54:16.163647 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.163630 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-vlpfs\"" Apr 23 17:54:16.163917 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.163897 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Apr 23 17:54:16.164119 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.164102 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Apr 23 17:54:16.166240 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.166214 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jd2kh" Apr 23 17:54:16.166380 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:16.166276 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jd2kh" podUID="2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc" Apr 23 17:54:16.168515 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.168499 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-kjt2w" Apr 23 17:54:16.168603 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.168590 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-d4hwd" Apr 23 17:54:16.170252 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.170158 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kube-system\"/\"konnectivity-ca-bundle\"" Apr 23 17:54:16.170252 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.170241 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Apr 23 17:54:16.170419 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.170262 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"default-dockercfg-wkbd6\"" Apr 23 17:54:16.170668 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.170573 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Apr 23 17:54:16.170668 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.170609 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"konnectivity-agent\"" Apr 23 17:54:16.170668 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.170640 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Apr 23 17:54:16.170668 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.170645 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-4x9xz\"" Apr 23 17:54:16.171030 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.170805 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-b4p7v" Apr 23 17:54:16.173274 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.173174 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-jhvgn" Apr 23 17:54:16.173274 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.173209 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Apr 23 17:54:16.173274 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.173233 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-72dm2\"" Apr 23 17:54:16.173274 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:16.173240 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-jhvgn" podUID="bf59011d-e01e-49f9-b468-33af8f5a6489" Apr 23 17:54:16.173274 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.173256 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Apr 23 17:54:16.173574 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.173256 2566 scope.go:117] "RemoveContainer" containerID="6a71ba03dee221f9bd2a17cfb6108fb69eb3264565066869351d024e28c61fc0" Apr 23 17:54:16.173574 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:16.173481 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_openshift-machine-config-operator(7fc0473024b4c48d914a6628102ac7a2)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal" podUID="7fc0473024b4c48d914a6628102ac7a2" Apr 23 17:54:16.203817 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.203799 2566 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 23 17:54:16.242772 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.242718 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-proxy-ip-10-0-136-172.ec2.internal" podStartSLOduration=1.242704109 podStartE2EDuration="1.242704109s" podCreationTimestamp="2026-04-23 17:54:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:54:16.23064097 +0000 UTC m=+115.525919566" watchObservedRunningTime="2026-04-23 17:54:16.242704109 +0000 UTC m=+115.537982705" Apr 23 17:54:16.289208 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.289172 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l87qw\" (UniqueName: \"kubernetes.io/projected/0eebe585-3752-4ef2-ba49-6f427a3ebdce-kube-api-access-l87qw\") pod \"multus-additional-cni-plugins-b4p7v\" (UID: \"0eebe585-3752-4ef2-ba49-6f427a3ebdce\") " pod="openshift-multus/multus-additional-cni-plugins-b4p7v" Apr 23 17:54:16.289208 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.289204 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/bf59011d-e01e-49f9-b468-33af8f5a6489-original-pull-secret\") pod \"global-pull-secret-syncer-jhvgn\" (UID: \"bf59011d-e01e-49f9-b468-33af8f5a6489\") " pod="kube-system/global-pull-secret-syncer-jhvgn" Apr 23 17:54:16.289443 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.289220 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-system-cni-dir\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.289443 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.289236 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-multus-socket-dir-parent\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " 
pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.289443 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.289253 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-sys\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.289443 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.289268 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn6x6\" (UniqueName: \"kubernetes.io/projected/88671ae9-14c3-476e-98a0-61200eda94f5-kube-api-access-wn6x6\") pod \"node-resolver-msx9j\" (UID: \"88671ae9-14c3-476e-98a0-61200eda94f5\") " pod="openshift-dns/node-resolver-msx9j" Apr 23 17:54:16.289443 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.289325 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-etc-openvswitch\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.289443 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.289383 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-etc-kubernetes\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.289443 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.289413 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/19833190-ba61-4f22-b8f2-00153c34b225-registration-dir\") pod \"aws-ebs-csi-driver-node-c75rn\" (UID: \"19833190-ba61-4f22-b8f2-00153c34b225\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-c75rn" Apr 23 17:54:16.289758 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.289452 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-etc-sysconfig\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.289758 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.289481 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/88671ae9-14c3-476e-98a0-61200eda94f5-hosts-file\") pod \"node-resolver-msx9j\" (UID: \"88671ae9-14c3-476e-98a0-61200eda94f5\") " pod="openshift-dns/node-resolver-msx9j" Apr 23 17:54:16.289758 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.289507 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-log-socket\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.289758 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.289522 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ec0108e4-36f5-4959-99b0-8fe6326c7aaa-metrics-certs\") pod \"network-metrics-daemon-96rvc\" (UID: \"ec0108e4-36f5-4959-99b0-8fe6326c7aaa\") " pod="openshift-multus/network-metrics-daemon-96rvc" Apr 23 17:54:16.289758 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.289561 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0eebe585-3752-4ef2-ba49-6f427a3ebdce-cnibin\") pod \"multus-additional-cni-plugins-b4p7v\" (UID: \"0eebe585-3752-4ef2-ba49-6f427a3ebdce\") " pod="openshift-multus/multus-additional-cni-plugins-b4p7v" Apr 23 17:54:16.289758 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.289591 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-etc-tuned\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.289758 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.289617 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-ovn-node-metrics-cert\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.289758 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.289645 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndg7v\" (UniqueName: \"kubernetes.io/projected/ec0108e4-36f5-4959-99b0-8fe6326c7aaa-kube-api-access-ndg7v\") pod \"network-metrics-daemon-96rvc\" (UID: \"ec0108e4-36f5-4959-99b0-8fe6326c7aaa\") " pod="openshift-multus/network-metrics-daemon-96rvc" Apr 23 17:54:16.289758 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.289670 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fdbz\" (UniqueName: \"kubernetes.io/projected/bd91136a-6313-4cae-bd06-a32a9ec8e0cb-kube-api-access-2fdbz\") pod \"node-ca-d4hwd\" (UID: \"bd91136a-6313-4cae-bd06-a32a9ec8e0cb\") " pod="openshift-image-registry/node-ca-d4hwd" Apr 23 17:54:16.289758 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.289694 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/88671ae9-14c3-476e-98a0-61200eda94f5-tmp-dir\") pod \"node-resolver-msx9j\" (UID: \"88671ae9-14c3-476e-98a0-61200eda94f5\") " pod="openshift-dns/node-resolver-msx9j" Apr 23 17:54:16.289758 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.289721 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-host-run-netns\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.289758 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.289746 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-var-lib-openvswitch\") pod 
\"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.290230 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.289769 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlspw\" (UniqueName: \"kubernetes.io/projected/2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc-kube-api-access-qlspw\") pod \"network-check-target-jd2kh\" (UID: \"2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc\") " pod="openshift-network-diagnostics/network-check-target-jd2kh" Apr 23 17:54:16.290230 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.289793 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd91136a-6313-4cae-bd06-a32a9ec8e0cb-host\") pod \"node-ca-d4hwd\" (UID: \"bd91136a-6313-4cae-bd06-a32a9ec8e0cb\") " pod="openshift-image-registry/node-ca-d4hwd" Apr 23 17:54:16.290230 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.289816 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bd91136a-6313-4cae-bd06-a32a9ec8e0cb-serviceca\") pod \"node-ca-d4hwd\" (UID: \"bd91136a-6313-4cae-bd06-a32a9ec8e0cb\") " pod="openshift-image-registry/node-ca-d4hwd" Apr 23 17:54:16.290230 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.289839 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0eebe585-3752-4ef2-ba49-6f427a3ebdce-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-b4p7v\" (UID: \"0eebe585-3752-4ef2-ba49-6f427a3ebdce\") " pod="openshift-multus/multus-additional-cni-plugins-b4p7v" Apr 23 17:54:16.290230 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.289863 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-os-release\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.290230 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.289887 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-host-kubelet\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.290230 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.289910 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-node-log\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.290230 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.289943 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-host-var-lib-cni-multus\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.290230 ip-10-0-136-172 kubenswrapper[2566]: I0423 
17:54:16.289959 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-hostroot\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.290230 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.289973 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-multus-conf-dir\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.290230 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.289986 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/19833190-ba61-4f22-b8f2-00153c34b225-device-dir\") pod \"aws-ebs-csi-driver-node-c75rn\" (UID: \"19833190-ba61-4f22-b8f2-00153c34b225\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-c75rn" Apr 23 17:54:16.290230 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290000 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-etc-systemd\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.290230 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290013 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a8dcfc70-4d8f-4caa-a6df-98b824d34a78-host-slash\") pod \"iptables-alerter-slvjl\" (UID: \"a8dcfc70-4d8f-4caa-a6df-98b824d34a78\") " pod="openshift-network-operator/iptables-alerter-slvjl" Apr 23 17:54:16.290230 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290039 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-run-systemd\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.290230 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290057 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/19833190-ba61-4f22-b8f2-00153c34b225-etc-selinux\") pod \"aws-ebs-csi-driver-node-c75rn\" (UID: \"19833190-ba61-4f22-b8f2-00153c34b225\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-c75rn" Apr 23 17:54:16.290230 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290070 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-run-openvswitch\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.290866 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290086 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-host-run-ovn-kubernetes\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.290866 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290113 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.290866 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290130 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-env-overrides\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.290866 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290144 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0eebe585-3752-4ef2-ba49-6f427a3ebdce-cni-binary-copy\") pod \"multus-additional-cni-plugins-b4p7v\" (UID: \"0eebe585-3752-4ef2-ba49-6f427a3ebdce\") " pod="openshift-multus/multus-additional-cni-plugins-b4p7v" Apr 23 17:54:16.290866 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290161 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/bf59011d-e01e-49f9-b468-33af8f5a6489-kubelet-config\") pod \"global-pull-secret-syncer-jhvgn\" (UID: \"bf59011d-e01e-49f9-b468-33af8f5a6489\") " pod="kube-system/global-pull-secret-syncer-jhvgn" Apr 23 17:54:16.290866 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290198 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-host-var-lib-cni-bin\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.290866 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290225 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-etc-kubernetes\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.290866 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290247 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/8f79dd76-5ae2-47b7-bd62-86d231ac80ff-agent-certs\") pod \"konnectivity-agent-kjt2w\" (UID: \"8f79dd76-5ae2-47b7-bd62-86d231ac80ff\") " pod="kube-system/konnectivity-agent-kjt2w" Apr 23 17:54:16.290866 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290271 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/8f79dd76-5ae2-47b7-bd62-86d231ac80ff-konnectivity-ca\") pod 
\"konnectivity-agent-kjt2w\" (UID: \"8f79dd76-5ae2-47b7-bd62-86d231ac80ff\") " pod="kube-system/konnectivity-agent-kjt2w" Apr 23 17:54:16.290866 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290317 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/a8dcfc70-4d8f-4caa-a6df-98b824d34a78-iptables-alerter-script\") pod \"iptables-alerter-slvjl\" (UID: \"a8dcfc70-4d8f-4caa-a6df-98b824d34a78\") " pod="openshift-network-operator/iptables-alerter-slvjl" Apr 23 17:54:16.290866 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290345 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-host-slash\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.290866 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290368 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhmfh\" (UniqueName: \"kubernetes.io/projected/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-kube-api-access-lhmfh\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.290866 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290403 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-cni-binary-copy\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.290866 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290427 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-multus-daemon-config\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.290866 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290472 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-etc-sysctl-d\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.290866 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290504 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-host\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.291390 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290522 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0eebe585-3752-4ef2-ba49-6f427a3ebdce-tuning-conf-dir\") pod \"multus-additional-cni-plugins-b4p7v\" (UID: \"0eebe585-3752-4ef2-ba49-6f427a3ebdce\") " pod="openshift-multus/multus-additional-cni-plugins-b4p7v" Apr 23 17:54:16.291390 ip-10-0-136-172 
kubenswrapper[2566]: I0423 17:54:16.290538 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/0eebe585-3752-4ef2-ba49-6f427a3ebdce-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-b4p7v\" (UID: \"0eebe585-3752-4ef2-ba49-6f427a3ebdce\") " pod="openshift-multus/multus-additional-cni-plugins-b4p7v" Apr 23 17:54:16.291390 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290553 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-etc-modprobe-d\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.291390 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290567 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-systemd-units\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.291390 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290581 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-run-ovn\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.291390 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290596 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-host-cni-bin\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.291390 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290613 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-cnibin\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.291390 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290637 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-host-var-lib-kubelet\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.291390 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290651 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfjfp\" (UniqueName: \"kubernetes.io/projected/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-kube-api-access-nfjfp\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.291390 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290677 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-run\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.291390 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290695 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-var-lib-kubelet\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.291390 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290719 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-multus-cni-dir\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.291390 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290755 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-host-run-multus-certs\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.291390 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290784 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-host-run-k8s-cni-cncf-io\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.291390 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290809 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx9jx\" (UniqueName: \"kubernetes.io/projected/19833190-ba61-4f22-b8f2-00153c34b225-kube-api-access-dx9jx\") pod \"aws-ebs-csi-driver-node-c75rn\" (UID: \"19833190-ba61-4f22-b8f2-00153c34b225\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-c75rn" Apr 23 17:54:16.291390 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290831 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-etc-sysctl-conf\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.291829 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290859 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0eebe585-3752-4ef2-ba49-6f427a3ebdce-os-release\") pod \"multus-additional-cni-plugins-b4p7v\" (UID: \"0eebe585-3752-4ef2-ba49-6f427a3ebdce\") " pod="openshift-multus/multus-additional-cni-plugins-b4p7v" Apr 23 17:54:16.291829 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290879 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/19833190-ba61-4f22-b8f2-00153c34b225-kubelet-dir\") pod \"aws-ebs-csi-driver-node-c75rn\" (UID: 
\"19833190-ba61-4f22-b8f2-00153c34b225\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-c75rn" Apr 23 17:54:16.291829 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290896 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-lib-modules\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.291829 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290916 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-tmp\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.291829 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290936 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpj6p\" (UniqueName: \"kubernetes.io/projected/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-kube-api-access-dpj6p\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.291829 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290958 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxrrl\" (UniqueName: \"kubernetes.io/projected/a8dcfc70-4d8f-4caa-a6df-98b824d34a78-kube-api-access-zxrrl\") pod \"iptables-alerter-slvjl\" (UID: \"a8dcfc70-4d8f-4caa-a6df-98b824d34a78\") " pod="openshift-network-operator/iptables-alerter-slvjl" Apr 23 17:54:16.291829 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.290972 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-ovnkube-config\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.291829 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.291006 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/bf59011d-e01e-49f9-b468-33af8f5a6489-dbus\") pod \"global-pull-secret-syncer-jhvgn\" (UID: \"bf59011d-e01e-49f9-b468-33af8f5a6489\") " pod="kube-system/global-pull-secret-syncer-jhvgn" Apr 23 17:54:16.291829 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.291030 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-host-run-netns\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.291829 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.291053 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/19833190-ba61-4f22-b8f2-00153c34b225-socket-dir\") pod \"aws-ebs-csi-driver-node-c75rn\" (UID: \"19833190-ba61-4f22-b8f2-00153c34b225\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-c75rn" Apr 23 17:54:16.291829 ip-10-0-136-172 kubenswrapper[2566]: I0423 
17:54:16.291066 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/19833190-ba61-4f22-b8f2-00153c34b225-sys-fs\") pod \"aws-ebs-csi-driver-node-c75rn\" (UID: \"19833190-ba61-4f22-b8f2-00153c34b225\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-c75rn" Apr 23 17:54:16.291829 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.291087 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-host-cni-netd\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.291829 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.291101 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-ovnkube-script-lib\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.291829 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.291114 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0eebe585-3752-4ef2-ba49-6f427a3ebdce-system-cni-dir\") pod \"multus-additional-cni-plugins-b4p7v\" (UID: \"0eebe585-3752-4ef2-ba49-6f427a3ebdce\") " pod="openshift-multus/multus-additional-cni-plugins-b4p7v" Apr 23 17:54:16.391573 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.391494 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-run-ovn\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.391573 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.391522 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-host-cni-bin\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.391573 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.391537 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-cnibin\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.391573 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.391552 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-host-var-lib-kubelet\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.391869 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.391575 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nfjfp\" (UniqueName: \"kubernetes.io/projected/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-kube-api-access-nfjfp\") pod \"multus-48gh2\" (UID: 
\"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.391869 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.391600 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-run\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.391869 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.391613 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-run-ovn\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.391869 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.391623 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-var-lib-kubelet\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.391869 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.391599 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-host-cni-bin\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.391869 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.391645 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-multus-cni-dir\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.391869 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.391653 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-cnibin\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.391869 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.391659 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-host-var-lib-kubelet\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.391869 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.391669 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-host-run-multus-certs\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.391869 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.391667 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-run\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 
17:54:16.391869 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.391682 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-var-lib-kubelet\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.391869 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.391694 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-host-run-k8s-cni-cncf-io\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.391869 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.391701 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-host-run-multus-certs\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.391869 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.391715 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dx9jx\" (UniqueName: \"kubernetes.io/projected/19833190-ba61-4f22-b8f2-00153c34b225-kube-api-access-dx9jx\") pod \"aws-ebs-csi-driver-node-c75rn\" (UID: \"19833190-ba61-4f22-b8f2-00153c34b225\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-c75rn" Apr 23 17:54:16.391869 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.391721 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-multus-cni-dir\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.391869 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.391733 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-etc-sysctl-conf\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.391869 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.391767 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0eebe585-3752-4ef2-ba49-6f427a3ebdce-os-release\") pod \"multus-additional-cni-plugins-b4p7v\" (UID: \"0eebe585-3752-4ef2-ba49-6f427a3ebdce\") " pod="openshift-multus/multus-additional-cni-plugins-b4p7v" Apr 23 17:54:16.391869 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.391772 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-host-run-k8s-cni-cncf-io\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.392672 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.391797 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/19833190-ba61-4f22-b8f2-00153c34b225-kubelet-dir\") pod \"aws-ebs-csi-driver-node-c75rn\" (UID: 
\"19833190-ba61-4f22-b8f2-00153c34b225\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-c75rn" Apr 23 17:54:16.392672 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.391824 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-lib-modules\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.392672 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.391839 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-etc-sysctl-conf\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.392672 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.391846 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-tmp\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.392672 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.391866 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0eebe585-3752-4ef2-ba49-6f427a3ebdce-os-release\") pod \"multus-additional-cni-plugins-b4p7v\" (UID: \"0eebe585-3752-4ef2-ba49-6f427a3ebdce\") " pod="openshift-multus/multus-additional-cni-plugins-b4p7v" Apr 23 17:54:16.392672 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.391882 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/19833190-ba61-4f22-b8f2-00153c34b225-kubelet-dir\") pod \"aws-ebs-csi-driver-node-c75rn\" (UID: \"19833190-ba61-4f22-b8f2-00153c34b225\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-c75rn" Apr 23 17:54:16.392672 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.391886 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dpj6p\" (UniqueName: \"kubernetes.io/projected/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-kube-api-access-dpj6p\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.392672 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.391921 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zxrrl\" (UniqueName: \"kubernetes.io/projected/a8dcfc70-4d8f-4caa-a6df-98b824d34a78-kube-api-access-zxrrl\") pod \"iptables-alerter-slvjl\" (UID: \"a8dcfc70-4d8f-4caa-a6df-98b824d34a78\") " pod="openshift-network-operator/iptables-alerter-slvjl" Apr 23 17:54:16.392672 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.391945 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-ovnkube-config\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.392672 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.391955 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-lib-modules\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.392672 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.391969 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/bf59011d-e01e-49f9-b468-33af8f5a6489-dbus\") pod \"global-pull-secret-syncer-jhvgn\" (UID: \"bf59011d-e01e-49f9-b468-33af8f5a6489\") " pod="kube-system/global-pull-secret-syncer-jhvgn" Apr 23 17:54:16.392672 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.391993 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-host-run-netns\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.392672 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.392014 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/19833190-ba61-4f22-b8f2-00153c34b225-socket-dir\") pod \"aws-ebs-csi-driver-node-c75rn\" (UID: \"19833190-ba61-4f22-b8f2-00153c34b225\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-c75rn" Apr 23 17:54:16.392672 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.392056 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-host-run-netns\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.392672 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.392061 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/19833190-ba61-4f22-b8f2-00153c34b225-sys-fs\") pod \"aws-ebs-csi-driver-node-c75rn\" (UID: \"19833190-ba61-4f22-b8f2-00153c34b225\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-c75rn" Apr 23 17:54:16.392672 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.392090 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-host-cni-netd\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.392672 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.392117 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-ovnkube-script-lib\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.393387 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.392145 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0eebe585-3752-4ef2-ba49-6f427a3ebdce-system-cni-dir\") pod \"multus-additional-cni-plugins-b4p7v\" (UID: \"0eebe585-3752-4ef2-ba49-6f427a3ebdce\") " pod="openshift-multus/multus-additional-cni-plugins-b4p7v" Apr 23 17:54:16.393387 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.392172 2566 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-l87qw\" (UniqueName: \"kubernetes.io/projected/0eebe585-3752-4ef2-ba49-6f427a3ebdce-kube-api-access-l87qw\") pod \"multus-additional-cni-plugins-b4p7v\" (UID: \"0eebe585-3752-4ef2-ba49-6f427a3ebdce\") " pod="openshift-multus/multus-additional-cni-plugins-b4p7v" Apr 23 17:54:16.393387 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.392177 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/19833190-ba61-4f22-b8f2-00153c34b225-sys-fs\") pod \"aws-ebs-csi-driver-node-c75rn\" (UID: \"19833190-ba61-4f22-b8f2-00153c34b225\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-c75rn" Apr 23 17:54:16.393387 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.392179 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-host-cni-netd\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.393387 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.392197 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/bf59011d-e01e-49f9-b468-33af8f5a6489-original-pull-secret\") pod \"global-pull-secret-syncer-jhvgn\" (UID: \"bf59011d-e01e-49f9-b468-33af8f5a6489\") " pod="kube-system/global-pull-secret-syncer-jhvgn" Apr 23 17:54:16.393387 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.392232 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-system-cni-dir\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.393387 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.392256 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-multus-socket-dir-parent\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.393387 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.392266 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/19833190-ba61-4f22-b8f2-00153c34b225-socket-dir\") pod \"aws-ebs-csi-driver-node-c75rn\" (UID: \"19833190-ba61-4f22-b8f2-00153c34b225\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-c75rn" Apr 23 17:54:16.393387 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.392278 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-sys\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.393387 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.392326 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wn6x6\" (UniqueName: \"kubernetes.io/projected/88671ae9-14c3-476e-98a0-61200eda94f5-kube-api-access-wn6x6\") pod \"node-resolver-msx9j\" (UID: \"88671ae9-14c3-476e-98a0-61200eda94f5\") " pod="openshift-dns/node-resolver-msx9j" Apr 23 
17:54:16.393387 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.392352 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-etc-openvswitch\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.393387 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.392360 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0eebe585-3752-4ef2-ba49-6f427a3ebdce-system-cni-dir\") pod \"multus-additional-cni-plugins-b4p7v\" (UID: \"0eebe585-3752-4ef2-ba49-6f427a3ebdce\") " pod="openshift-multus/multus-additional-cni-plugins-b4p7v" Apr 23 17:54:16.393387 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.392374 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-etc-kubernetes\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.393387 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.392322 2566 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Apr 23 17:54:16.393387 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.392402 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/19833190-ba61-4f22-b8f2-00153c34b225-registration-dir\") pod \"aws-ebs-csi-driver-node-c75rn\" (UID: \"19833190-ba61-4f22-b8f2-00153c34b225\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-c75rn" Apr 23 17:54:16.393387 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:16.392417 2566 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 23 17:54:16.393387 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.392428 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-etc-sysconfig\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.393387 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.392441 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-system-cni-dir\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.394185 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.392448 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/88671ae9-14c3-476e-98a0-61200eda94f5-hosts-file\") pod \"node-resolver-msx9j\" (UID: \"88671ae9-14c3-476e-98a0-61200eda94f5\") " pod="openshift-dns/node-resolver-msx9j" Apr 23 17:54:16.394185 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.392142 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/bf59011d-e01e-49f9-b468-33af8f5a6489-dbus\") pod 
\"global-pull-secret-syncer-jhvgn\" (UID: \"bf59011d-e01e-49f9-b468-33af8f5a6489\") " pod="kube-system/global-pull-secret-syncer-jhvgn" Apr 23 17:54:16.394185 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:16.392493 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf59011d-e01e-49f9-b468-33af8f5a6489-original-pull-secret podName:bf59011d-e01e-49f9-b468-33af8f5a6489 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:16.892464968 +0000 UTC m=+116.187743570 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/bf59011d-e01e-49f9-b468-33af8f5a6489-original-pull-secret") pod "global-pull-secret-syncer-jhvgn" (UID: "bf59011d-e01e-49f9-b468-33af8f5a6489") : object "kube-system"/"original-pull-secret" not registered Apr 23 17:54:16.394185 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.392515 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/19833190-ba61-4f22-b8f2-00153c34b225-registration-dir\") pod \"aws-ebs-csi-driver-node-c75rn\" (UID: \"19833190-ba61-4f22-b8f2-00153c34b225\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-c75rn" Apr 23 17:54:16.394185 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.392491 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-etc-openvswitch\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.394185 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.392529 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-sys\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.394185 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.392491 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-multus-socket-dir-parent\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.394185 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.392515 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-log-socket\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.394185 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.392973 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ec0108e4-36f5-4959-99b0-8fe6326c7aaa-metrics-certs\") pod \"network-metrics-daemon-96rvc\" (UID: \"ec0108e4-36f5-4959-99b0-8fe6326c7aaa\") " pod="openshift-multus/network-metrics-daemon-96rvc" Apr 23 17:54:16.394185 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.392984 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-ovnkube-script-lib\") pod \"ovnkube-node-7wdbp\" (UID: 
\"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.394185 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.393008 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0eebe585-3752-4ef2-ba49-6f427a3ebdce-cnibin\") pod \"multus-additional-cni-plugins-b4p7v\" (UID: \"0eebe585-3752-4ef2-ba49-6f427a3ebdce\") " pod="openshift-multus/multus-additional-cni-plugins-b4p7v" Apr 23 17:54:16.394185 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.393063 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-etc-tuned\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.394185 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.393065 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-etc-sysconfig\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.394185 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.392898 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-ovnkube-config\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.394185 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.393087 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-log-socket\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.394185 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.393330 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0eebe585-3752-4ef2-ba49-6f427a3ebdce-cnibin\") pod \"multus-additional-cni-plugins-b4p7v\" (UID: \"0eebe585-3752-4ef2-ba49-6f427a3ebdce\") " pod="openshift-multus/multus-additional-cni-plugins-b4p7v" Apr 23 17:54:16.394185 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:16.393426 2566 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:54:16.394937 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:16.393474 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec0108e4-36f5-4959-99b0-8fe6326c7aaa-metrics-certs podName:ec0108e4-36f5-4959-99b0-8fe6326c7aaa nodeName:}" failed. No retries permitted until 2026-04-23 17:54:16.893459798 +0000 UTC m=+116.188738372 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ec0108e4-36f5-4959-99b0-8fe6326c7aaa-metrics-certs") pod "network-metrics-daemon-96rvc" (UID: "ec0108e4-36f5-4959-99b0-8fe6326c7aaa") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:54:16.394937 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.393721 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-etc-kubernetes\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.394937 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.393778 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/88671ae9-14c3-476e-98a0-61200eda94f5-hosts-file\") pod \"node-resolver-msx9j\" (UID: \"88671ae9-14c3-476e-98a0-61200eda94f5\") " pod="openshift-dns/node-resolver-msx9j" Apr 23 17:54:16.394937 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.393096 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-ovn-node-metrics-cert\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.394937 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.393900 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ndg7v\" (UniqueName: \"kubernetes.io/projected/ec0108e4-36f5-4959-99b0-8fe6326c7aaa-kube-api-access-ndg7v\") pod \"network-metrics-daemon-96rvc\" (UID: \"ec0108e4-36f5-4959-99b0-8fe6326c7aaa\") " pod="openshift-multus/network-metrics-daemon-96rvc" Apr 23 17:54:16.394937 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.393931 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2fdbz\" (UniqueName: \"kubernetes.io/projected/bd91136a-6313-4cae-bd06-a32a9ec8e0cb-kube-api-access-2fdbz\") pod \"node-ca-d4hwd\" (UID: \"bd91136a-6313-4cae-bd06-a32a9ec8e0cb\") " pod="openshift-image-registry/node-ca-d4hwd" Apr 23 17:54:16.394937 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.393969 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/88671ae9-14c3-476e-98a0-61200eda94f5-tmp-dir\") pod \"node-resolver-msx9j\" (UID: \"88671ae9-14c3-476e-98a0-61200eda94f5\") " pod="openshift-dns/node-resolver-msx9j" Apr 23 17:54:16.394937 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.393997 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-host-run-netns\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.394937 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.394025 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-var-lib-openvswitch\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.394937 ip-10-0-136-172 kubenswrapper[2566]: 
I0423 17:54:16.394049 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qlspw\" (UniqueName: \"kubernetes.io/projected/2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc-kube-api-access-qlspw\") pod \"network-check-target-jd2kh\" (UID: \"2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc\") " pod="openshift-network-diagnostics/network-check-target-jd2kh" Apr 23 17:54:16.394937 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.394075 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd91136a-6313-4cae-bd06-a32a9ec8e0cb-host\") pod \"node-ca-d4hwd\" (UID: \"bd91136a-6313-4cae-bd06-a32a9ec8e0cb\") " pod="openshift-image-registry/node-ca-d4hwd" Apr 23 17:54:16.394937 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.394103 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bd91136a-6313-4cae-bd06-a32a9ec8e0cb-serviceca\") pod \"node-ca-d4hwd\" (UID: \"bd91136a-6313-4cae-bd06-a32a9ec8e0cb\") " pod="openshift-image-registry/node-ca-d4hwd" Apr 23 17:54:16.394937 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.394135 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0eebe585-3752-4ef2-ba49-6f427a3ebdce-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-b4p7v\" (UID: \"0eebe585-3752-4ef2-ba49-6f427a3ebdce\") " pod="openshift-multus/multus-additional-cni-plugins-b4p7v" Apr 23 17:54:16.394937 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.394163 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-os-release\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.394937 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.394186 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-host-kubelet\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.394937 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.394216 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-node-log\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.394937 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.394243 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-host-var-lib-cni-multus\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.395677 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.394271 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-hostroot\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.395677 ip-10-0-136-172 
kubenswrapper[2566]: I0423 17:54:16.394297 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-multus-conf-dir\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.395677 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.394341 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/19833190-ba61-4f22-b8f2-00153c34b225-device-dir\") pod \"aws-ebs-csi-driver-node-c75rn\" (UID: \"19833190-ba61-4f22-b8f2-00153c34b225\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-c75rn" Apr 23 17:54:16.395677 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.394367 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-etc-systemd\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.395677 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.394394 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a8dcfc70-4d8f-4caa-a6df-98b824d34a78-host-slash\") pod \"iptables-alerter-slvjl\" (UID: \"a8dcfc70-4d8f-4caa-a6df-98b824d34a78\") " pod="openshift-network-operator/iptables-alerter-slvjl" Apr 23 17:54:16.395677 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.394424 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-run-systemd\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.395677 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.394453 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/19833190-ba61-4f22-b8f2-00153c34b225-etc-selinux\") pod \"aws-ebs-csi-driver-node-c75rn\" (UID: \"19833190-ba61-4f22-b8f2-00153c34b225\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-c75rn" Apr 23 17:54:16.395677 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.394476 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-run-openvswitch\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.395677 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.394510 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-host-run-ovn-kubernetes\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.395677 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.394545 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.395677 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.394621 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-env-overrides\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.395677 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.394744 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0eebe585-3752-4ef2-ba49-6f427a3ebdce-cni-binary-copy\") pod \"multus-additional-cni-plugins-b4p7v\" (UID: \"0eebe585-3752-4ef2-ba49-6f427a3ebdce\") " pod="openshift-multus/multus-additional-cni-plugins-b4p7v" Apr 23 17:54:16.395677 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.394815 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/bf59011d-e01e-49f9-b468-33af8f5a6489-kubelet-config\") pod \"global-pull-secret-syncer-jhvgn\" (UID: \"bf59011d-e01e-49f9-b468-33af8f5a6489\") " pod="kube-system/global-pull-secret-syncer-jhvgn" Apr 23 17:54:16.395677 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.394847 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-host-var-lib-cni-bin\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.395677 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.394874 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-etc-kubernetes\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.395677 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.394902 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/8f79dd76-5ae2-47b7-bd62-86d231ac80ff-agent-certs\") pod \"konnectivity-agent-kjt2w\" (UID: \"8f79dd76-5ae2-47b7-bd62-86d231ac80ff\") " pod="kube-system/konnectivity-agent-kjt2w" Apr 23 17:54:16.395677 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.394935 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/8f79dd76-5ae2-47b7-bd62-86d231ac80ff-konnectivity-ca\") pod \"konnectivity-agent-kjt2w\" (UID: \"8f79dd76-5ae2-47b7-bd62-86d231ac80ff\") " pod="kube-system/konnectivity-agent-kjt2w" Apr 23 17:54:16.396417 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.394965 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/a8dcfc70-4d8f-4caa-a6df-98b824d34a78-iptables-alerter-script\") pod \"iptables-alerter-slvjl\" (UID: \"a8dcfc70-4d8f-4caa-a6df-98b824d34a78\") " pod="openshift-network-operator/iptables-alerter-slvjl" Apr 23 17:54:16.396417 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.394995 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-host-slash\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.396417 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.395025 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lhmfh\" (UniqueName: \"kubernetes.io/projected/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-kube-api-access-lhmfh\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.396417 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.395050 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-cni-binary-copy\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.396417 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.395083 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-multus-daemon-config\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.396417 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.395117 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-etc-sysctl-d\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.396417 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.395147 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-host\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.396417 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.395177 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0eebe585-3752-4ef2-ba49-6f427a3ebdce-tuning-conf-dir\") pod \"multus-additional-cni-plugins-b4p7v\" (UID: \"0eebe585-3752-4ef2-ba49-6f427a3ebdce\") " pod="openshift-multus/multus-additional-cni-plugins-b4p7v" Apr 23 17:54:16.396417 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.395206 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/0eebe585-3752-4ef2-ba49-6f427a3ebdce-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-b4p7v\" (UID: \"0eebe585-3752-4ef2-ba49-6f427a3ebdce\") " pod="openshift-multus/multus-additional-cni-plugins-b4p7v" Apr 23 17:54:16.396417 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.395233 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-run-systemd\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.396417 ip-10-0-136-172 
kubenswrapper[2566]: I0423 17:54:16.395239 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-etc-modprobe-d\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.396417 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.395271 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-systemd-units\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.396417 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.395394 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-systemd-units\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.396417 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.395494 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/19833190-ba61-4f22-b8f2-00153c34b225-etc-selinux\") pod \"aws-ebs-csi-driver-node-c75rn\" (UID: \"19833190-ba61-4f22-b8f2-00153c34b225\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-c75rn" Apr 23 17:54:16.396417 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.395541 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-run-openvswitch\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.396417 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.395780 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-host-run-ovn-kubernetes\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.396417 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.395804 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/88671ae9-14c3-476e-98a0-61200eda94f5-tmp-dir\") pod \"node-resolver-msx9j\" (UID: \"88671ae9-14c3-476e-98a0-61200eda94f5\") " pod="openshift-dns/node-resolver-msx9j" Apr 23 17:54:16.397143 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.395838 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.397143 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.395894 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-host-run-netns\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.397143 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.395909 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/bf59011d-e01e-49f9-b468-33af8f5a6489-kubelet-config\") pod \"global-pull-secret-syncer-jhvgn\" (UID: \"bf59011d-e01e-49f9-b468-33af8f5a6489\") " pod="kube-system/global-pull-secret-syncer-jhvgn" Apr 23 17:54:16.397143 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.395962 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-host-var-lib-cni-bin\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.397143 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.396015 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-etc-kubernetes\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.397143 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.396278 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-env-overrides\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.397143 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.396857 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-host-slash\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.397143 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.397104 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-var-lib-openvswitch\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.397514 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.397259 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/8f79dd76-5ae2-47b7-bd62-86d231ac80ff-konnectivity-ca\") pod \"konnectivity-agent-kjt2w\" (UID: \"8f79dd76-5ae2-47b7-bd62-86d231ac80ff\") " pod="kube-system/konnectivity-agent-kjt2w" Apr 23 17:54:16.397514 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.397419 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd91136a-6313-4cae-bd06-a32a9ec8e0cb-host\") pod \"node-ca-d4hwd\" (UID: \"bd91136a-6313-4cae-bd06-a32a9ec8e0cb\") " pod="openshift-image-registry/node-ca-d4hwd" Apr 23 17:54:16.398369 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.397622 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-host\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " 
pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.398369 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.397788 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-multus-daemon-config\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.398369 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.397809 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0eebe585-3752-4ef2-ba49-6f427a3ebdce-tuning-conf-dir\") pod \"multus-additional-cni-plugins-b4p7v\" (UID: \"0eebe585-3752-4ef2-ba49-6f427a3ebdce\") " pod="openshift-multus/multus-additional-cni-plugins-b4p7v" Apr 23 17:54:16.398369 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.397876 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0eebe585-3752-4ef2-ba49-6f427a3ebdce-cni-binary-copy\") pod \"multus-additional-cni-plugins-b4p7v\" (UID: \"0eebe585-3752-4ef2-ba49-6f427a3ebdce\") " pod="openshift-multus/multus-additional-cni-plugins-b4p7v" Apr 23 17:54:16.398369 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.397979 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-etc-sysctl-d\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.398369 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.398023 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0eebe585-3752-4ef2-ba49-6f427a3ebdce-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-b4p7v\" (UID: \"0eebe585-3752-4ef2-ba49-6f427a3ebdce\") " pod="openshift-multus/multus-additional-cni-plugins-b4p7v" Apr 23 17:54:16.398369 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.398048 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bd91136a-6313-4cae-bd06-a32a9ec8e0cb-serviceca\") pod \"node-ca-d4hwd\" (UID: \"bd91136a-6313-4cae-bd06-a32a9ec8e0cb\") " pod="openshift-image-registry/node-ca-d4hwd" Apr 23 17:54:16.398369 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.398134 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-os-release\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.398369 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.398147 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-etc-tuned\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.398369 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.398214 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-multus-conf-dir\") pod \"multus-48gh2\" (UID: 
\"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.398369 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.398268 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-node-log\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.398369 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.398333 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-host-kubelet\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.398369 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.398345 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-host-var-lib-cni-multus\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.398988 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.398399 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-hostroot\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.398988 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.398399 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/19833190-ba61-4f22-b8f2-00153c34b225-device-dir\") pod \"aws-ebs-csi-driver-node-c75rn\" (UID: \"19833190-ba61-4f22-b8f2-00153c34b225\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-c75rn" Apr 23 17:54:16.398988 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.398447 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-etc-systemd\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.398988 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.398488 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a8dcfc70-4d8f-4caa-a6df-98b824d34a78-host-slash\") pod \"iptables-alerter-slvjl\" (UID: \"a8dcfc70-4d8f-4caa-a6df-98b824d34a78\") " pod="openshift-network-operator/iptables-alerter-slvjl" Apr 23 17:54:16.398988 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.398826 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-ovn-node-metrics-cert\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.398988 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.398986 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-cni-binary-copy\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") 
" pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.399266 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.399066 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/a8dcfc70-4d8f-4caa-a6df-98b824d34a78-iptables-alerter-script\") pod \"iptables-alerter-slvjl\" (UID: \"a8dcfc70-4d8f-4caa-a6df-98b824d34a78\") " pod="openshift-network-operator/iptables-alerter-slvjl" Apr 23 17:54:16.399266 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.399103 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-etc-modprobe-d\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.399386 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.399350 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-tmp\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.399621 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.399593 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/0eebe585-3752-4ef2-ba49-6f427a3ebdce-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-b4p7v\" (UID: \"0eebe585-3752-4ef2-ba49-6f427a3ebdce\") " pod="openshift-multus/multus-additional-cni-plugins-b4p7v" Apr 23 17:54:16.399762 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.399744 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/8f79dd76-5ae2-47b7-bd62-86d231ac80ff-agent-certs\") pod \"konnectivity-agent-kjt2w\" (UID: \"8f79dd76-5ae2-47b7-bd62-86d231ac80ff\") " pod="kube-system/konnectivity-agent-kjt2w" Apr 23 17:54:16.401078 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.401054 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfjfp\" (UniqueName: \"kubernetes.io/projected/2f0abbbd-0b22-4bf4-828e-8e3f05035c84-kube-api-access-nfjfp\") pod \"multus-48gh2\" (UID: \"2f0abbbd-0b22-4bf4-828e-8e3f05035c84\") " pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.402802 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:16.402101 2566 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 23 17:54:16.402802 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:16.402123 2566 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 23 17:54:16.402802 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:16.402138 2566 projected.go:194] Error preparing data for projected volume kube-api-access-qlspw for pod openshift-network-diagnostics/network-check-target-jd2kh: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:54:16.402802 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:16.402328 2566 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc-kube-api-access-qlspw podName:2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc nodeName:}" failed. No retries permitted until 2026-04-23 17:54:16.902233518 +0000 UTC m=+116.197512096 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qlspw" (UniqueName: "kubernetes.io/projected/2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc-kube-api-access-qlspw") pod "network-check-target-jd2kh" (UID: "2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:54:16.403066 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.402933 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpj6p\" (UniqueName: \"kubernetes.io/projected/bbd132ba-580f-4003-8b35-f82ad6b7ccf0-kube-api-access-dpj6p\") pod \"tuned-6b8hr\" (UID: \"bbd132ba-580f-4003-8b35-f82ad6b7ccf0\") " pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.403842 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.403807 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx9jx\" (UniqueName: \"kubernetes.io/projected/19833190-ba61-4f22-b8f2-00153c34b225-kube-api-access-dx9jx\") pod \"aws-ebs-csi-driver-node-c75rn\" (UID: \"19833190-ba61-4f22-b8f2-00153c34b225\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-c75rn" Apr 23 17:54:16.403959 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.403902 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn6x6\" (UniqueName: \"kubernetes.io/projected/88671ae9-14c3-476e-98a0-61200eda94f5-kube-api-access-wn6x6\") pod \"node-resolver-msx9j\" (UID: \"88671ae9-14c3-476e-98a0-61200eda94f5\") " pod="openshift-dns/node-resolver-msx9j" Apr 23 17:54:16.404743 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.404721 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndg7v\" (UniqueName: \"kubernetes.io/projected/ec0108e4-36f5-4959-99b0-8fe6326c7aaa-kube-api-access-ndg7v\") pod \"network-metrics-daemon-96rvc\" (UID: \"ec0108e4-36f5-4959-99b0-8fe6326c7aaa\") " pod="openshift-multus/network-metrics-daemon-96rvc" Apr 23 17:54:16.404998 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.404980 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fdbz\" (UniqueName: \"kubernetes.io/projected/bd91136a-6313-4cae-bd06-a32a9ec8e0cb-kube-api-access-2fdbz\") pod \"node-ca-d4hwd\" (UID: \"bd91136a-6313-4cae-bd06-a32a9ec8e0cb\") " pod="openshift-image-registry/node-ca-d4hwd" Apr 23 17:54:16.405553 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.405534 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxrrl\" (UniqueName: \"kubernetes.io/projected/a8dcfc70-4d8f-4caa-a6df-98b824d34a78-kube-api-access-zxrrl\") pod \"iptables-alerter-slvjl\" (UID: \"a8dcfc70-4d8f-4caa-a6df-98b824d34a78\") " pod="openshift-network-operator/iptables-alerter-slvjl" Apr 23 17:54:16.405669 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.405652 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l87qw\" (UniqueName: \"kubernetes.io/projected/0eebe585-3752-4ef2-ba49-6f427a3ebdce-kube-api-access-l87qw\") pod \"multus-additional-cni-plugins-b4p7v\" (UID: \"0eebe585-3752-4ef2-ba49-6f427a3ebdce\") " 
pod="openshift-multus/multus-additional-cni-plugins-b4p7v" Apr 23 17:54:16.405824 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.405805 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhmfh\" (UniqueName: \"kubernetes.io/projected/ca2e53d1-74cd-4370-b1cd-1bb46d1f5076-kube-api-access-lhmfh\") pod \"ovnkube-node-7wdbp\" (UID: \"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076\") " pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.461680 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.461644 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-48gh2" Apr 23 17:54:16.467354 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.467325 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-slvjl" Apr 23 17:54:16.468106 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:54:16.467997 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f0abbbd_0b22_4bf4_828e_8e3f05035c84.slice/crio-bbe6e047f828b77507a1a0a7e2653fd0aa91294b3c2d03577343399888884f1b WatchSource:0}: Error finding container bbe6e047f828b77507a1a0a7e2653fd0aa91294b3c2d03577343399888884f1b: Status 404 returned error can't find the container with id bbe6e047f828b77507a1a0a7e2653fd0aa91294b3c2d03577343399888884f1b Apr 23 17:54:16.473031 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.473009 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:54:16.474140 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:54:16.474112 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8dcfc70_4d8f_4caa_a6df_98b824d34a78.slice/crio-f1ec1a548709dc545243208c18d1d7df419cfae1f5efd3ba310d90bed4dd2368 WatchSource:0}: Error finding container f1ec1a548709dc545243208c18d1d7df419cfae1f5efd3ba310d90bed4dd2368: Status 404 returned error can't find the container with id f1ec1a548709dc545243208c18d1d7df419cfae1f5efd3ba310d90bed4dd2368 Apr 23 17:54:16.478326 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.478286 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-c75rn" Apr 23 17:54:16.479251 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:54:16.479209 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca2e53d1_74cd_4370_b1cd_1bb46d1f5076.slice/crio-d2895497ad36d22e455fa7e3358a798ad6b5851f6ec3e57c5934a240317dae34 WatchSource:0}: Error finding container d2895497ad36d22e455fa7e3358a798ad6b5851f6ec3e57c5934a240317dae34: Status 404 returned error can't find the container with id d2895497ad36d22e455fa7e3358a798ad6b5851f6ec3e57c5934a240317dae34 Apr 23 17:54:16.483834 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.483814 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" Apr 23 17:54:16.484879 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:54:16.484852 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19833190_ba61_4f22_b8f2_00153c34b225.slice/crio-a6431b1859224a266ec2a64e746e94976cb77a89499bd9122030dd3a0a465224 WatchSource:0}: Error finding container a6431b1859224a266ec2a64e746e94976cb77a89499bd9122030dd3a0a465224: Status 404 returned error can't find the container with id a6431b1859224a266ec2a64e746e94976cb77a89499bd9122030dd3a0a465224 Apr 23 17:54:16.488368 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.488352 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-msx9j" Apr 23 17:54:16.492647 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:54:16.492623 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbd132ba_580f_4003_8b35_f82ad6b7ccf0.slice/crio-f8d5d2d0ee1b20db1f37f049bf4559bb923cb4db57b1956386f76c348c84d43b WatchSource:0}: Error finding container f8d5d2d0ee1b20db1f37f049bf4559bb923cb4db57b1956386f76c348c84d43b: Status 404 returned error can't find the container with id f8d5d2d0ee1b20db1f37f049bf4559bb923cb4db57b1956386f76c348c84d43b Apr 23 17:54:16.493774 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.493665 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-kjt2w" Apr 23 17:54:16.495800 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:54:16.495777 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88671ae9_14c3_476e_98a0_61200eda94f5.slice/crio-14033d8ca26dc7ed3faadb04a0e7dffa60791f7a3ef8244ef577cef6a06358d9 WatchSource:0}: Error finding container 14033d8ca26dc7ed3faadb04a0e7dffa60791f7a3ef8244ef577cef6a06358d9: Status 404 returned error can't find the container with id 14033d8ca26dc7ed3faadb04a0e7dffa60791f7a3ef8244ef577cef6a06358d9 Apr 23 17:54:16.499709 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.499689 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-d4hwd" Apr 23 17:54:16.500355 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:54:16.500336 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f79dd76_5ae2_47b7_bd62_86d231ac80ff.slice/crio-311ef9dbc0d664dfe5debbee9242bb90e5bd6dc39f66b9cfcb69eb8e74d5a537 WatchSource:0}: Error finding container 311ef9dbc0d664dfe5debbee9242bb90e5bd6dc39f66b9cfcb69eb8e74d5a537: Status 404 returned error can't find the container with id 311ef9dbc0d664dfe5debbee9242bb90e5bd6dc39f66b9cfcb69eb8e74d5a537 Apr 23 17:54:16.503100 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.503086 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-b4p7v" Apr 23 17:54:16.507573 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:54:16.507547 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd91136a_6313_4cae_bd06_a32a9ec8e0cb.slice/crio-353fb53d71f96ceb4a85ae7a5e63a457df395ab838c5d59eddf97b34e2b4b215 WatchSource:0}: Error finding container 353fb53d71f96ceb4a85ae7a5e63a457df395ab838c5d59eddf97b34e2b4b215: Status 404 returned error can't find the container with id 353fb53d71f96ceb4a85ae7a5e63a457df395ab838c5d59eddf97b34e2b4b215 Apr 23 17:54:16.511833 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:54:16.511815 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0eebe585_3752_4ef2_ba49_6f427a3ebdce.slice/crio-a646d925489bcd12096550099b98c65d62517cf8523cfbab3ece7bfa568dca57 WatchSource:0}: Error finding container a646d925489bcd12096550099b98c65d62517cf8523cfbab3ece7bfa568dca57: Status 404 returned error can't find the container with id a646d925489bcd12096550099b98c65d62517cf8523cfbab3ece7bfa568dca57 Apr 23 17:54:16.898711 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.898436 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/bf59011d-e01e-49f9-b468-33af8f5a6489-original-pull-secret\") pod \"global-pull-secret-syncer-jhvgn\" (UID: \"bf59011d-e01e-49f9-b468-33af8f5a6489\") " pod="kube-system/global-pull-secret-syncer-jhvgn" Apr 23 17:54:16.898711 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.898488 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ec0108e4-36f5-4959-99b0-8fe6326c7aaa-metrics-certs\") pod \"network-metrics-daemon-96rvc\" (UID: \"ec0108e4-36f5-4959-99b0-8fe6326c7aaa\") " pod="openshift-multus/network-metrics-daemon-96rvc" Apr 23 17:54:16.898711 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:16.898607 2566 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 23 17:54:16.898711 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:16.898653 2566 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:54:16.898711 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:16.898687 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf59011d-e01e-49f9-b468-33af8f5a6489-original-pull-secret podName:bf59011d-e01e-49f9-b468-33af8f5a6489 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:17.8986673 +0000 UTC m=+117.193945888 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/bf59011d-e01e-49f9-b468-33af8f5a6489-original-pull-secret") pod "global-pull-secret-syncer-jhvgn" (UID: "bf59011d-e01e-49f9-b468-33af8f5a6489") : object "kube-system"/"original-pull-secret" not registered Apr 23 17:54:16.898711 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:16.898710 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec0108e4-36f5-4959-99b0-8fe6326c7aaa-metrics-certs podName:ec0108e4-36f5-4959-99b0-8fe6326c7aaa nodeName:}" failed. 
No retries permitted until 2026-04-23 17:54:17.898700534 +0000 UTC m=+117.193979124 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ec0108e4-36f5-4959-99b0-8fe6326c7aaa-metrics-certs") pod "network-metrics-daemon-96rvc" (UID: "ec0108e4-36f5-4959-99b0-8fe6326c7aaa") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:54:17.000033 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:16.999595 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qlspw\" (UniqueName: \"kubernetes.io/projected/2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc-kube-api-access-qlspw\") pod \"network-check-target-jd2kh\" (UID: \"2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc\") " pod="openshift-network-diagnostics/network-check-target-jd2kh" Apr 23 17:54:17.000033 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:16.999763 2566 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 23 17:54:17.000033 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:16.999784 2566 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 23 17:54:17.000033 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:16.999796 2566 projected.go:194] Error preparing data for projected volume kube-api-access-qlspw for pod openshift-network-diagnostics/network-check-target-jd2kh: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:54:17.000033 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:16.999852 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc-kube-api-access-qlspw podName:2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc nodeName:}" failed. No retries permitted until 2026-04-23 17:54:17.99983458 +0000 UTC m=+117.295113160 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qlspw" (UniqueName: "kubernetes.io/projected/2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc-kube-api-access-qlspw") pod "network-check-target-jd2kh" (UID: "2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:54:17.442050 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:17.439944 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-kjt2w" event={"ID":"8f79dd76-5ae2-47b7-bd62-86d231ac80ff","Type":"ContainerStarted","Data":"311ef9dbc0d664dfe5debbee9242bb90e5bd6dc39f66b9cfcb69eb8e74d5a537"} Apr 23 17:54:17.447489 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:17.447454 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" event={"ID":"bbd132ba-580f-4003-8b35-f82ad6b7ccf0","Type":"ContainerStarted","Data":"f8d5d2d0ee1b20db1f37f049bf4559bb923cb4db57b1956386f76c348c84d43b"} Apr 23 17:54:17.454738 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:17.454614 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-slvjl" event={"ID":"a8dcfc70-4d8f-4caa-a6df-98b824d34a78","Type":"ContainerStarted","Data":"f1ec1a548709dc545243208c18d1d7df419cfae1f5efd3ba310d90bed4dd2368"} Apr 23 17:54:17.469360 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:17.468590 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-48gh2" event={"ID":"2f0abbbd-0b22-4bf4-828e-8e3f05035c84","Type":"ContainerStarted","Data":"bbe6e047f828b77507a1a0a7e2653fd0aa91294b3c2d03577343399888884f1b"} Apr 23 17:54:17.481504 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:17.481478 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b4p7v" event={"ID":"0eebe585-3752-4ef2-ba49-6f427a3ebdce","Type":"ContainerStarted","Data":"a646d925489bcd12096550099b98c65d62517cf8523cfbab3ece7bfa568dca57"} Apr 23 17:54:17.499040 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:17.499008 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-msx9j" event={"ID":"88671ae9-14c3-476e-98a0-61200eda94f5","Type":"ContainerStarted","Data":"14033d8ca26dc7ed3faadb04a0e7dffa60791f7a3ef8244ef577cef6a06358d9"} Apr 23 17:54:17.513585 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:17.513512 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-c75rn" event={"ID":"19833190-ba61-4f22-b8f2-00153c34b225","Type":"ContainerStarted","Data":"a6431b1859224a266ec2a64e746e94976cb77a89499bd9122030dd3a0a465224"} Apr 23 17:54:17.529211 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:17.529154 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" event={"ID":"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076","Type":"ContainerStarted","Data":"d2895497ad36d22e455fa7e3358a798ad6b5851f6ec3e57c5934a240317dae34"} Apr 23 17:54:17.539739 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:17.539712 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-d4hwd" event={"ID":"bd91136a-6313-4cae-bd06-a32a9ec8e0cb","Type":"ContainerStarted","Data":"353fb53d71f96ceb4a85ae7a5e63a457df395ab838c5d59eddf97b34e2b4b215"} Apr 23 17:54:17.907927 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:17.907894 2566 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/bf59011d-e01e-49f9-b468-33af8f5a6489-original-pull-secret\") pod \"global-pull-secret-syncer-jhvgn\" (UID: \"bf59011d-e01e-49f9-b468-33af8f5a6489\") " pod="kube-system/global-pull-secret-syncer-jhvgn" Apr 23 17:54:17.908107 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:17.907941 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ec0108e4-36f5-4959-99b0-8fe6326c7aaa-metrics-certs\") pod \"network-metrics-daemon-96rvc\" (UID: \"ec0108e4-36f5-4959-99b0-8fe6326c7aaa\") " pod="openshift-multus/network-metrics-daemon-96rvc" Apr 23 17:54:17.908107 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:17.908065 2566 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:54:17.908217 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:17.908183 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec0108e4-36f5-4959-99b0-8fe6326c7aaa-metrics-certs podName:ec0108e4-36f5-4959-99b0-8fe6326c7aaa nodeName:}" failed. No retries permitted until 2026-04-23 17:54:19.908164388 +0000 UTC m=+119.203442969 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ec0108e4-36f5-4959-99b0-8fe6326c7aaa-metrics-certs") pod "network-metrics-daemon-96rvc" (UID: "ec0108e4-36f5-4959-99b0-8fe6326c7aaa") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:54:17.908775 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:17.908734 2566 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 23 17:54:17.908900 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:17.908797 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf59011d-e01e-49f9-b468-33af8f5a6489-original-pull-secret podName:bf59011d-e01e-49f9-b468-33af8f5a6489 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:19.908780355 +0000 UTC m=+119.204058944 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/bf59011d-e01e-49f9-b468-33af8f5a6489-original-pull-secret") pod "global-pull-secret-syncer-jhvgn" (UID: "bf59011d-e01e-49f9-b468-33af8f5a6489") : object "kube-system"/"original-pull-secret" not registered Apr 23 17:54:18.009094 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:18.009016 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qlspw\" (UniqueName: \"kubernetes.io/projected/2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc-kube-api-access-qlspw\") pod \"network-check-target-jd2kh\" (UID: \"2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc\") " pod="openshift-network-diagnostics/network-check-target-jd2kh" Apr 23 17:54:18.009251 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:18.009158 2566 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 23 17:54:18.009251 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:18.009182 2566 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 23 17:54:18.009251 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:18.009196 2566 projected.go:194] Error preparing data for projected volume kube-api-access-qlspw for pod openshift-network-diagnostics/network-check-target-jd2kh: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:54:18.009448 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:18.009254 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc-kube-api-access-qlspw podName:2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc nodeName:}" failed. No retries permitted until 2026-04-23 17:54:20.009235878 +0000 UTC m=+119.304514456 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qlspw" (UniqueName: "kubernetes.io/projected/2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc-kube-api-access-qlspw") pod "network-check-target-jd2kh" (UID: "2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:54:18.249775 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:18.249691 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-jhvgn" Apr 23 17:54:18.249930 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:18.249826 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-jhvgn" podUID="bf59011d-e01e-49f9-b468-33af8f5a6489" Apr 23 17:54:18.250272 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:18.250250 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-96rvc" Apr 23 17:54:18.250402 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:18.250379 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96rvc" podUID="ec0108e4-36f5-4959-99b0-8fe6326c7aaa" Apr 23 17:54:18.250469 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:18.250454 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jd2kh" Apr 23 17:54:18.250544 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:18.250528 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jd2kh" podUID="2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc" Apr 23 17:54:19.923429 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:19.923386 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/bf59011d-e01e-49f9-b468-33af8f5a6489-original-pull-secret\") pod \"global-pull-secret-syncer-jhvgn\" (UID: \"bf59011d-e01e-49f9-b468-33af8f5a6489\") " pod="kube-system/global-pull-secret-syncer-jhvgn" Apr 23 17:54:19.924052 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:19.923457 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ec0108e4-36f5-4959-99b0-8fe6326c7aaa-metrics-certs\") pod \"network-metrics-daemon-96rvc\" (UID: \"ec0108e4-36f5-4959-99b0-8fe6326c7aaa\") " pod="openshift-multus/network-metrics-daemon-96rvc" Apr 23 17:54:19.924052 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:19.923619 2566 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:54:19.924052 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:19.923694 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec0108e4-36f5-4959-99b0-8fe6326c7aaa-metrics-certs podName:ec0108e4-36f5-4959-99b0-8fe6326c7aaa nodeName:}" failed. No retries permitted until 2026-04-23 17:54:23.923674369 +0000 UTC m=+123.218953003 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ec0108e4-36f5-4959-99b0-8fe6326c7aaa-metrics-certs") pod "network-metrics-daemon-96rvc" (UID: "ec0108e4-36f5-4959-99b0-8fe6326c7aaa") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:54:19.924225 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:19.924129 2566 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 23 17:54:19.924225 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:19.924180 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf59011d-e01e-49f9-b468-33af8f5a6489-original-pull-secret podName:bf59011d-e01e-49f9-b468-33af8f5a6489 nodeName:}" failed. 
No retries permitted until 2026-04-23 17:54:23.924164724 +0000 UTC m=+123.219443302 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/bf59011d-e01e-49f9-b468-33af8f5a6489-original-pull-secret") pod "global-pull-secret-syncer-jhvgn" (UID: "bf59011d-e01e-49f9-b468-33af8f5a6489") : object "kube-system"/"original-pull-secret" not registered Apr 23 17:54:20.025205 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:20.024559 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qlspw\" (UniqueName: \"kubernetes.io/projected/2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc-kube-api-access-qlspw\") pod \"network-check-target-jd2kh\" (UID: \"2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc\") " pod="openshift-network-diagnostics/network-check-target-jd2kh" Apr 23 17:54:20.025205 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:20.024766 2566 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 23 17:54:20.025205 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:20.024789 2566 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 23 17:54:20.025205 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:20.024802 2566 projected.go:194] Error preparing data for projected volume kube-api-access-qlspw for pod openshift-network-diagnostics/network-check-target-jd2kh: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:54:20.025205 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:20.024862 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc-kube-api-access-qlspw podName:2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc nodeName:}" failed. No retries permitted until 2026-04-23 17:54:24.024843831 +0000 UTC m=+123.320122433 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-qlspw" (UniqueName: "kubernetes.io/projected/2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc-kube-api-access-qlspw") pod "network-check-target-jd2kh" (UID: "2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:54:20.249642 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:20.249567 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-jhvgn" Apr 23 17:54:20.249642 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:20.249600 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96rvc" Apr 23 17:54:20.249857 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:20.249569 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jd2kh" Apr 23 17:54:20.249857 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:20.249706 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="kube-system/global-pull-secret-syncer-jhvgn" podUID="bf59011d-e01e-49f9-b468-33af8f5a6489" Apr 23 17:54:20.249970 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:20.249929 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jd2kh" podUID="2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc" Apr 23 17:54:20.250127 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:20.250028 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96rvc" podUID="ec0108e4-36f5-4959-99b0-8fe6326c7aaa" Apr 23 17:54:21.172508 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:21.172470 2566 kubelet_node_status.go:509] "Node not becoming ready in time after startup" Apr 23 17:54:21.185558 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:21.185508 2566 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 23 17:54:22.249361 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:22.249326 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-jhvgn" Apr 23 17:54:22.249831 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:22.249368 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jd2kh" Apr 23 17:54:22.249831 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:22.249331 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96rvc" Apr 23 17:54:22.249831 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:22.249466 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-jhvgn" podUID="bf59011d-e01e-49f9-b468-33af8f5a6489" Apr 23 17:54:22.249831 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:22.249582 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96rvc" podUID="ec0108e4-36f5-4959-99b0-8fe6326c7aaa" Apr 23 17:54:22.249831 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:22.249659 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jd2kh" podUID="2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc" Apr 23 17:54:23.957453 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:23.957356 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ec0108e4-36f5-4959-99b0-8fe6326c7aaa-metrics-certs\") pod \"network-metrics-daemon-96rvc\" (UID: \"ec0108e4-36f5-4959-99b0-8fe6326c7aaa\") " pod="openshift-multus/network-metrics-daemon-96rvc" Apr 23 17:54:23.957915 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:23.957462 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/bf59011d-e01e-49f9-b468-33af8f5a6489-original-pull-secret\") pod \"global-pull-secret-syncer-jhvgn\" (UID: \"bf59011d-e01e-49f9-b468-33af8f5a6489\") " pod="kube-system/global-pull-secret-syncer-jhvgn" Apr 23 17:54:23.957915 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:23.957583 2566 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 23 17:54:23.957915 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:23.957647 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf59011d-e01e-49f9-b468-33af8f5a6489-original-pull-secret podName:bf59011d-e01e-49f9-b468-33af8f5a6489 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:31.957627448 +0000 UTC m=+131.252906027 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/bf59011d-e01e-49f9-b468-33af8f5a6489-original-pull-secret") pod "global-pull-secret-syncer-jhvgn" (UID: "bf59011d-e01e-49f9-b468-33af8f5a6489") : object "kube-system"/"original-pull-secret" not registered Apr 23 17:54:23.958079 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:23.958047 2566 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:54:23.958127 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:23.958092 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec0108e4-36f5-4959-99b0-8fe6326c7aaa-metrics-certs podName:ec0108e4-36f5-4959-99b0-8fe6326c7aaa nodeName:}" failed. No retries permitted until 2026-04-23 17:54:31.958077602 +0000 UTC m=+131.253356181 (durationBeforeRetry 8s). 
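Note the durationBeforeRetry values in the nestedpendingoperations records: 2s at 17:54:17, 4s at 17:54:19, 8s at 17:54:23, and 16s further down. Each failed MountVolume attempt for the same volume doubles the wait before the next retry, which is ordinary per-operation exponential backoff. A minimal sketch of that doubling schedule in Go follows; the initial delay and cap are illustrative assumptions for this excerpt, not constants quoted from the kubelet source:

```go
package main

import (
	"fmt"
	"time"
)

// backoffSchedule returns the doubling retry delays implied by the
// durationBeforeRetry values above (2s, 4s, 8s, 16s, ...), capped at
// maxDelay. Initial delay and cap are assumptions, not kubelet constants.
func backoffSchedule(initial, maxDelay time.Duration, attempts int) []time.Duration {
	delays := make([]time.Duration, 0, attempts)
	d := initial
	for i := 0; i < attempts; i++ {
		delays = append(delays, d)
		d *= 2
		if d > maxDelay {
			d = maxDelay
		}
	}
	return delays
}

func main() {
	// Prints [2s 4s 8s 16s 32s 1m4s], matching the spacing of the
	// retries visible in these records up to the assumed cap.
	fmt.Println(backoffSchedule(2*time.Second, 2*time.Minute, 6))
}
```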
Apr 23 17:54:24.059430 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:24.058825 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qlspw\" (UniqueName: \"kubernetes.io/projected/2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc-kube-api-access-qlspw\") pod \"network-check-target-jd2kh\" (UID: \"2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc\") " pod="openshift-network-diagnostics/network-check-target-jd2kh"
Apr 23 17:54:24.059430 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:24.058993 2566 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 23 17:54:24.059430 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:24.059014 2566 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 23 17:54:24.059430 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:24.059027 2566 projected.go:194] Error preparing data for projected volume kube-api-access-qlspw for pod openshift-network-diagnostics/network-check-target-jd2kh: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:54:24.059430 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:24.059090 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc-kube-api-access-qlspw podName:2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc nodeName:}" failed. No retries permitted until 2026-04-23 17:54:32.05907274 +0000 UTC m=+131.354351328 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-qlspw" (UniqueName: "kubernetes.io/projected/2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc-kube-api-access-qlspw") pod "network-check-target-jd2kh" (UID: "2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:54:24.250191 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:24.250106 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-jhvgn"
Apr 23 17:54:24.250385 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:24.250233 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-jhvgn" podUID="bf59011d-e01e-49f9-b468-33af8f5a6489"
Apr 23 17:54:24.250919 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:24.250645 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96rvc"
Apr 23 17:54:24.250919 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:24.250760 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96rvc" podUID="ec0108e4-36f5-4959-99b0-8fe6326c7aaa"
Apr 23 17:54:24.250919 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:24.250807 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jd2kh"
Apr 23 17:54:24.250919 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:24.250877 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jd2kh" podUID="2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc"
Apr 23 17:54:26.187062 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:26.187023 2566 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Apr 23 17:54:26.249573 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:26.249531 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-jhvgn"
Apr 23 17:54:26.249767 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:26.249542 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jd2kh"
Apr 23 17:54:26.249767 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:26.249668 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-jhvgn" podUID="bf59011d-e01e-49f9-b468-33af8f5a6489"
Apr 23 17:54:26.249767 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:26.249742 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jd2kh" podUID="2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc"
Apr 23 17:54:26.249767 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:26.249540 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96rvc"
Apr 23 17:54:26.249933 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:26.249843 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96rvc" podUID="ec0108e4-36f5-4959-99b0-8fe6326c7aaa"
Apr 23 17:54:28.250161 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:28.250124 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jd2kh"
Apr 23 17:54:28.250745 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:28.250243 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96rvc"
Apr 23 17:54:28.250745 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:28.250251 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jd2kh" podUID="2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc"
Apr 23 17:54:28.250745 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:28.250382 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96rvc" podUID="ec0108e4-36f5-4959-99b0-8fe6326c7aaa"
Apr 23 17:54:28.250745 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:28.250427 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-jhvgn"
Apr 23 17:54:28.250745 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:28.250483 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-jhvgn" podUID="bf59011d-e01e-49f9-b468-33af8f5a6489"
Apr 23 17:54:28.251058 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:28.250824 2566 scope.go:117] "RemoveContainer" containerID="6a71ba03dee221f9bd2a17cfb6108fb69eb3264565066869351d024e28c61fc0"
Apr 23 17:54:28.251058 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:28.251036 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_openshift-machine-config-operator(7fc0473024b4c48d914a6628102ac7a2)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal" podUID="7fc0473024b4c48d914a6628102ac7a2"
Apr 23 17:54:28.768401 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:28.768329 2566 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-driver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:abb4d487159fbc9b7148c690cd3a6ee638680f8f879ff6195ca1be5b393705b0,Command:[],Args:[node --endpoint=$(CSI_ENDPOINT) --logtostderr --v=2 --reserved-volume-attachments=1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:healthz,HostPort:10300,ContainerPort:10300,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:CSI_ENDPOINT,Value:unix:/csi/csi.sock,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:device-dir,ReadOnly:false,MountPath:/dev,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-selinux,ReadOnly:false,MountPath:/etc/selinux,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:sys-fs,ReadOnly:false,MountPath:/sys/fs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dx9jx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{1 0 healthz},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:3,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod aws-ebs-csi-driver-node-c75rn_openshift-cluster-csi-drivers(19833190-ba61-4f22-b8f2-00153c34b225): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
Apr 23 17:54:28.768581 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:28.768369 2566 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:konnectivity-agent,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9b333f47df911dd05a9a10cb8ea988e97d3174490b348b8e46a161d9581d64,Command:[/usr/bin/proxy-agent],Args:[--logtostderr=true --ca-cert /etc/konnectivity/ca/ca.crt --agent-cert /etc/konnectivity/agent/tls.crt --agent-key /etc/konnectivity/agent/tls.key --proxy-server-host konnectivity-server-clusters-2051200a-72fe-4cde-be91--86c3acc3.apps.kflux-prd-es01.1ion.p1.openshiftapps.com --proxy-server-port 443 --health-server-port 2041 --agent-identifiers=default-route=true --keepalive-time 30s --probe-interval 5s --sync-interval 5s --sync-interval-cap 30s --v 3],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:HTTP_PROXY,Value:,ValueFrom:nil,},EnvVar{Name:HTTPS_PROXY,Value:,ValueFrom:nil,},EnvVar{Name:NO_PROXY,Value:,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{40 -3} {} 40m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:agent-certs,ReadOnly:false,MountPath:/etc/konnectivity/agent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:konnectivity-ca,ReadOnly:false,MountPath:/etc/konnectivity/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:healthz,Port:{0 2041 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:readyz,Port:{0 2041 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:1,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:healthz,Port:{0 2041 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:60,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod konnectivity-agent-kjt2w_kube-system(8f79dd76-5ae2-47b7-bd62-86d231ac80ff): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
Apr 23 17:54:28.769537 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:28.769510 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"konnectivity-agent\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/konnectivity-agent-kjt2w" podUID="8f79dd76-5ae2-47b7-bd62-86d231ac80ff"
Apr 23 17:54:30.115925 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:30.115894 2566 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 23 17:54:30.249233 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:30.249182 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-jhvgn"
Apr 23 17:54:30.249233 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:30.249215 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jd2kh"
Apr 23 17:54:30.249673 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:30.249334 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96rvc"
Apr 23 17:54:30.249673 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:30.249333 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-jhvgn" podUID="bf59011d-e01e-49f9-b468-33af8f5a6489"
Apr 23 17:54:30.249673 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:30.249404 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jd2kh" podUID="2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc"
Apr 23 17:54:30.249673 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:30.249489 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96rvc" podUID="ec0108e4-36f5-4959-99b0-8fe6326c7aaa"
Apr 23 17:54:31.187517 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:31.187478 2566 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
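Every NetworkPluginNotReady record in this stretch names the same root cause: nothing has written a CNI config into /etc/kubernetes/cni/net.d/ yet, because ovnkube-node is still starting. A quick way to see the node the way the runtime does is to list that directory for the extensions CNI config loading accepts. The sketch below assumes read access on the node and approximates libcni's .conf/.conflist/.json matching rather than reproducing it:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// List candidate CNI network configs in the directory the kubelet
// reported as empty. The extension filter approximates libcni's
// behavior (.conf, .conflist, .json); this is a diagnostic sketch,
// not the runtime's actual config loader.
func main() {
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read", dir, "->", err)
		return
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("config:", e.Name())
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration file in", dir, "- network plugin not ready")
	}
}
```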
Apr 23 17:54:32.013180 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:32.013144 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ec0108e4-36f5-4959-99b0-8fe6326c7aaa-metrics-certs\") pod \"network-metrics-daemon-96rvc\" (UID: \"ec0108e4-36f5-4959-99b0-8fe6326c7aaa\") " pod="openshift-multus/network-metrics-daemon-96rvc"
Apr 23 17:54:32.013370 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:32.013235 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/bf59011d-e01e-49f9-b468-33af8f5a6489-original-pull-secret\") pod \"global-pull-secret-syncer-jhvgn\" (UID: \"bf59011d-e01e-49f9-b468-33af8f5a6489\") " pod="kube-system/global-pull-secret-syncer-jhvgn"
Apr 23 17:54:32.013370 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:32.013335 2566 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 23 17:54:32.013370 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:32.013338 2566 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:54:32.013502 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:32.013389 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf59011d-e01e-49f9-b468-33af8f5a6489-original-pull-secret podName:bf59011d-e01e-49f9-b468-33af8f5a6489 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:48.01337218 +0000 UTC m=+147.308650754 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/bf59011d-e01e-49f9-b468-33af8f5a6489-original-pull-secret") pod "global-pull-secret-syncer-jhvgn" (UID: "bf59011d-e01e-49f9-b468-33af8f5a6489") : object "kube-system"/"original-pull-secret" not registered
Apr 23 17:54:32.013502 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:32.013409 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec0108e4-36f5-4959-99b0-8fe6326c7aaa-metrics-certs podName:ec0108e4-36f5-4959-99b0-8fe6326c7aaa nodeName:}" failed. No retries permitted until 2026-04-23 17:54:48.013399899 +0000 UTC m=+147.308678479 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ec0108e4-36f5-4959-99b0-8fe6326c7aaa-metrics-certs") pod "network-metrics-daemon-96rvc" (UID: "ec0108e4-36f5-4959-99b0-8fe6326c7aaa") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:54:32.113757 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:32.113716 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qlspw\" (UniqueName: \"kubernetes.io/projected/2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc-kube-api-access-qlspw\") pod \"network-check-target-jd2kh\" (UID: \"2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc\") " pod="openshift-network-diagnostics/network-check-target-jd2kh"
Apr 23 17:54:32.113941 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:32.113893 2566 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 23 17:54:32.113941 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:32.113916 2566 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 23 17:54:32.113941 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:32.113927 2566 projected.go:194] Error preparing data for projected volume kube-api-access-qlspw for pod openshift-network-diagnostics/network-check-target-jd2kh: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:54:32.114093 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:32.113981 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc-kube-api-access-qlspw podName:2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc nodeName:}" failed. No retries permitted until 2026-04-23 17:54:48.113965156 +0000 UTC m=+147.409243731 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-qlspw" (UniqueName: "kubernetes.io/projected/2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc-kube-api-access-qlspw") pod "network-check-target-jd2kh" (UID: "2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:54:32.250049 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:32.250011 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jd2kh"
Apr 23 17:54:32.250049 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:32.250040 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96rvc"
Apr 23 17:54:32.250536 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:32.250011 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-jhvgn"
Apr 23 17:54:32.250536 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:32.250127 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jd2kh" podUID="2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc"
Apr 23 17:54:32.250536 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:32.250194 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96rvc" podUID="ec0108e4-36f5-4959-99b0-8fe6326c7aaa"
Apr 23 17:54:32.250536 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:32.250325 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-jhvgn" podUID="bf59011d-e01e-49f9-b468-33af8f5a6489"
Apr 23 17:54:34.006088 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:34.005804 2566 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 23 17:54:34.249272 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:34.249193 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-jhvgn"
Apr 23 17:54:34.249272 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:34.249214 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96rvc"
Apr 23 17:54:34.249272 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:34.249211 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jd2kh"
Apr 23 17:54:34.249529 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:34.249368 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-jhvgn" podUID="bf59011d-e01e-49f9-b468-33af8f5a6489"
Apr 23 17:54:34.249529 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:34.249483 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96rvc" podUID="ec0108e4-36f5-4959-99b0-8fe6326c7aaa"
Apr 23 17:54:34.249636 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:34.249574 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jd2kh" podUID="2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc"
Apr 23 17:54:34.576963 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:34.576927 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-48gh2" event={"ID":"2f0abbbd-0b22-4bf4-828e-8e3f05035c84","Type":"ContainerStarted","Data":"523fdc6cc6918bc7c8f5b8dcd02c0874a56947ff7580842641c8f07f55db14f8"}
Apr 23 17:54:34.578361 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:34.578337 2566 generic.go:358] "Generic (PLEG): container finished" podID="0eebe585-3752-4ef2-ba49-6f427a3ebdce" containerID="c9a2102b2ea812cae6e7bf5bcf1017ee290d8d326b46db8d823f9b6259b7394a" exitCode=0
Apr 23 17:54:34.578448 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:34.578404 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b4p7v" event={"ID":"0eebe585-3752-4ef2-ba49-6f427a3ebdce","Type":"ContainerDied","Data":"c9a2102b2ea812cae6e7bf5bcf1017ee290d8d326b46db8d823f9b6259b7394a"}
Apr 23 17:54:34.579651 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:34.579579 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-msx9j" event={"ID":"88671ae9-14c3-476e-98a0-61200eda94f5","Type":"ContainerStarted","Data":"4042c8b316df412e69921b883d821f9b783a42765c56863c8f4b9009961b24eb"}
Apr 23 17:54:34.580912 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:34.580873 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-c75rn" event={"ID":"19833190-ba61-4f22-b8f2-00153c34b225","Type":"ContainerStarted","Data":"56f308c2293656237fef50e71f38e9bdd2ae03e036825f996744ef90fb07309d"}
Apr 23 17:54:34.583472 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:34.583447 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" event={"ID":"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076","Type":"ContainerStarted","Data":"207f18cc4a847d795bd456affabbffb80de6f35a32065f9e14f9145111b4ea63"}
Apr 23 17:54:34.583554 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:34.583477 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" event={"ID":"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076","Type":"ContainerStarted","Data":"27c56acd2d652f2395078adbaabff7f3bd778ed6b5cd130e0066ab87b35a0d2e"}
Apr 23 17:54:34.583554 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:34.583492 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" event={"ID":"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076","Type":"ContainerStarted","Data":"ca501708a915f306324f88a440bb5ffd75e9c930edc2674b27fee5ec7882a4bb"}
Apr 23 17:54:34.583554 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:34.583504 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" event={"ID":"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076","Type":"ContainerStarted","Data":"7156159aa09269a1453516fbe0ced8022e44db15f9c2712db5c4741cfeb88a34"}
Apr 23 17:54:34.583554 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:34.583516 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" event={"ID":"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076","Type":"ContainerStarted","Data":"93a37bdbab9559429350212c849186f21a305f5d07ef1b256905076645cad388"}
Apr 23 17:54:34.583554 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:34.583528 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" event={"ID":"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076","Type":"ContainerStarted","Data":"af87bb88df23c2dc422a0bcb3c476bdf53a5885430e6865512896b7c1a4348e2"}
Apr 23 17:54:34.584786 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:34.584768 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-d4hwd" event={"ID":"bd91136a-6313-4cae-bd06-a32a9ec8e0cb","Type":"ContainerStarted","Data":"8eee6678f25ba9fc1786150ee7795230b323e4f7614e9ab6a44848744e52fa9a"}
Apr 23 17:54:34.586438 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:34.586404 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-kjt2w" event={"ID":"8f79dd76-5ae2-47b7-bd62-86d231ac80ff","Type":"ContainerStarted","Data":"e7e811a4238ad8692a180a7f6c96a6178e88b98bfe5a78fb9577806b5fafbf46"}
Apr 23 17:54:34.587565 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:34.587546 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" event={"ID":"bbd132ba-580f-4003-8b35-f82ad6b7ccf0","Type":"ContainerStarted","Data":"3118bf0dfae6a89227773a958aa26c575622c982902647bd4611e4e90dde07c6"}
Apr 23 17:54:34.594707 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:34.594661 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-48gh2" podStartSLOduration=5.581546888 podStartE2EDuration="22.594647012s" podCreationTimestamp="2026-04-23 17:54:12 +0000 UTC" firstStartedPulling="2026-04-23 17:54:16.469506882 +0000 UTC m=+115.764785456" lastFinishedPulling="2026-04-23 17:54:33.482606983 +0000 UTC m=+132.777885580" observedRunningTime="2026-04-23 17:54:34.593969734 +0000 UTC m=+133.889248332" watchObservedRunningTime="2026-04-23 17:54:34.594647012 +0000 UTC m=+133.889925610"
Apr 23 17:54:34.609587 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:34.609531 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-msx9j" podStartSLOduration=5.669788519 podStartE2EDuration="22.609514121s" podCreationTimestamp="2026-04-23 17:54:12 +0000 UTC" firstStartedPulling="2026-04-23 17:54:16.497327186 +0000 UTC m=+115.792605762" lastFinishedPulling="2026-04-23 17:54:33.437052784 +0000 UTC m=+132.732331364" observedRunningTime="2026-04-23 17:54:34.608578237 +0000 UTC m=+133.903856845" watchObservedRunningTime="2026-04-23 17:54:34.609514121 +0000 UTC m=+133.904792718"
Apr 23 17:54:34.619280 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:34.619235 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/konnectivity-agent-kjt2w" podStartSLOduration=10.354353008 podStartE2EDuration="22.619223074s" podCreationTimestamp="2026-04-23 17:54:12 +0000 UTC" firstStartedPulling="2026-04-23 17:54:16.503196277 +0000 UTC m=+115.798474852" lastFinishedPulling="2026-04-23 17:54:28.768066342 +0000 UTC m=+128.063344918" observedRunningTime="2026-04-23 17:54:34.619217998 +0000 UTC m=+133.914496792" watchObservedRunningTime="2026-04-23 17:54:34.619223074 +0000 UTC m=+133.914501706"
Apr 23 17:54:34.632266 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:34.632219 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-6b8hr" podStartSLOduration=5.65923731 podStartE2EDuration="22.632206328s" podCreationTimestamp="2026-04-23 17:54:12 +0000 UTC" firstStartedPulling="2026-04-23 17:54:16.494489395 +0000 UTC m=+115.789767977" lastFinishedPulling="2026-04-23 17:54:33.467458418 +0000 UTC m=+132.762736995" observedRunningTime="2026-04-23 17:54:34.631679086 +0000 UTC m=+133.926957682" watchObservedRunningTime="2026-04-23 17:54:34.632206328 +0000 UTC m=+133.927484954"
Apr 23 17:54:34.645563 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:34.645519 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-d4hwd" podStartSLOduration=5.692620126 podStartE2EDuration="22.645507818s" podCreationTimestamp="2026-04-23 17:54:12 +0000 UTC" firstStartedPulling="2026-04-23 17:54:16.510832698 +0000 UTC m=+115.806111272" lastFinishedPulling="2026-04-23 17:54:33.463720389 +0000 UTC m=+132.758998964" observedRunningTime="2026-04-23 17:54:34.645225288 +0000 UTC m=+133.940503886" watchObservedRunningTime="2026-04-23 17:54:34.645507818 +0000 UTC m=+133.940786415"
Apr 23 17:54:34.686393 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:34.686364 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-c75rn" podUID="19833190-ba61-4f22-b8f2-00153c34b225"
Apr 23 17:54:35.591375 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:35.591333 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-c75rn" event={"ID":"19833190-ba61-4f22-b8f2-00153c34b225","Type":"ContainerStarted","Data":"31eb784218cba635b7ae155b65f8750276fcf43d0f45ca1b6b3f1ae9a4ca0e6a"}
Apr 23 17:54:35.592830 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:35.592804 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-slvjl" event={"ID":"a8dcfc70-4d8f-4caa-a6df-98b824d34a78","Type":"ContainerStarted","Data":"d9cdbf219266d97ebd82689ffbb1e8df740cb77182c2c7d8681522bce60cae17"}
Apr 23 17:54:36.188423 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:36.188206 2566 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Apr 23 17:54:36.249599 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:36.249572 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-jhvgn"
Apr 23 17:54:36.249711 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:36.249572 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jd2kh"
Apr 23 17:54:36.249751 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:36.249713 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-jhvgn" podUID="bf59011d-e01e-49f9-b468-33af8f5a6489"
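The pod_startup_latency_tracker records above are internally consistent: podStartSLOduration is the end-to-end startup time minus the image-pull window (lastFinishedPulling minus firstStartedPulling). For tuned-6b8hr, 22.632206328s minus (m=+132.762736995 minus m=+115.789767977) gives 5.659237310s, the logged SLO duration. The sketch below checks that relationship from the monotonic offsets; the formula is inferred from these records, not quoted from the tracker's source:

```go
package main

import "fmt"

func main() {
	// Monotonic m=+ offsets and E2E duration taken from the
	// tuned-6b8hr "Observed pod startup duration" record above.
	const (
		e2eSeconds   = 22.632206328  // podStartE2EDuration
		firstPulling = 115.789767977 // firstStartedPulling m=+ offset
		lastPulling  = 132.762736995 // lastFinishedPulling m=+ offset
	)
	// Inferred relationship: the SLO duration excludes time spent
	// pulling images, so subtract the pull window from the E2E time.
	slo := e2eSeconds - (lastPulling - firstPulling)
	fmt.Printf("podStartSLOduration ~ %.9fs\n", slo) // ~ 5.659237310s
}
```

The same subtraction reproduces the SLO values for the other pods in this batch (multus-48gh2, node-resolver-msx9j, node-ca-d4hwd), which is a quick sanity check when reading these records.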
pod="kube-system/global-pull-secret-syncer-jhvgn" podUID="bf59011d-e01e-49f9-b468-33af8f5a6489" Apr 23 17:54:36.249800 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:36.249775 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jd2kh" podUID="2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc" Apr 23 17:54:36.249800 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:36.249572 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96rvc" Apr 23 17:54:36.249876 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:36.249866 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96rvc" podUID="ec0108e4-36f5-4959-99b0-8fe6326c7aaa" Apr 23 17:54:36.494461 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:36.494345 2566 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/konnectivity-agent-kjt2w" Apr 23 17:54:36.495038 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:36.495013 2566 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/konnectivity-agent-kjt2w" Apr 23 17:54:36.509102 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:36.509051 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/iptables-alerter-slvjl" podStartSLOduration=7.520225933 podStartE2EDuration="24.509034333s" podCreationTimestamp="2026-04-23 17:54:12 +0000 UTC" firstStartedPulling="2026-04-23 17:54:16.475806879 +0000 UTC m=+115.771085455" lastFinishedPulling="2026-04-23 17:54:33.464615275 +0000 UTC m=+132.759893855" observedRunningTime="2026-04-23 17:54:35.617244538 +0000 UTC m=+134.912523138" watchObservedRunningTime="2026-04-23 17:54:36.509034333 +0000 UTC m=+135.804312931" Apr 23 17:54:36.597842 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:36.597807 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" event={"ID":"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076","Type":"ContainerStarted","Data":"fc355279073177c7e45afe81a6298030f5279576f5be32fb1f3d44cf8ebf80d3"} Apr 23 17:54:36.600259 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:36.600217 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-c75rn" event={"ID":"19833190-ba61-4f22-b8f2-00153c34b225","Type":"ContainerStarted","Data":"077eec9ab8e719e770877f855323658793e23c166e18bad775485e7265a56536"} Apr 23 17:54:36.600501 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:36.600482 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kube-system/konnectivity-agent-kjt2w" Apr 23 17:54:36.600949 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:36.600929 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/konnectivity-agent-kjt2w" Apr 23 17:54:36.615684 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:36.615638 2566 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-c75rn" podStartSLOduration=6.529376613 podStartE2EDuration="24.615623123s" podCreationTimestamp="2026-04-23 17:54:12 +0000 UTC" firstStartedPulling="2026-04-23 17:54:16.488737779 +0000 UTC m=+115.784016353" lastFinishedPulling="2026-04-23 17:54:34.574984274 +0000 UTC m=+133.870262863" observedRunningTime="2026-04-23 17:54:36.615593454 +0000 UTC m=+135.910872061" watchObservedRunningTime="2026-04-23 17:54:36.615623123 +0000 UTC m=+135.910901763" Apr 23 17:54:36.676977 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:36.676938 2566 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock" Apr 23 17:54:37.248657 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:37.248543 2566 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock","Timestamp":"2026-04-23T17:54:36.676964369Z","UUID":"4ef194bc-6ce2-406d-9572-fe83c1a01695","Handler":null,"Name":"","Endpoint":""} Apr 23 17:54:37.253919 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:37.253882 2566 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: ebs.csi.aws.com endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock versions: 1.0.0 Apr 23 17:54:37.253919 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:37.253928 2566 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: ebs.csi.aws.com at endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock Apr 23 17:54:38.249298 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:38.249257 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-jhvgn" Apr 23 17:54:38.249734 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:38.249412 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96rvc" Apr 23 17:54:38.249734 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:38.249423 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-jhvgn" podUID="bf59011d-e01e-49f9-b468-33af8f5a6489" Apr 23 17:54:38.249734 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:38.249527 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96rvc" podUID="ec0108e4-36f5-4959-99b0-8fe6326c7aaa" Apr 23 17:54:38.249734 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:38.249568 2566 util.go:30] "No sandbox for pod can be found. 
Apr 23 17:54:38.249734 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:38.249629 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jd2kh" podUID="2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc"
Apr 23 17:54:38.607255 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:38.607220 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" event={"ID":"ca2e53d1-74cd-4370-b1cd-1bb46d1f5076","Type":"ContainerStarted","Data":"038435d05709b346060eae7b62a0a4819238cd520a540521765bbecc167dfe93"}
Apr 23 17:54:38.630930 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:38.630886 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" podStartSLOduration=9.372829392 podStartE2EDuration="26.630873933s" podCreationTimestamp="2026-04-23 17:54:12 +0000 UTC" firstStartedPulling="2026-04-23 17:54:16.481142118 +0000 UTC m=+115.776420694" lastFinishedPulling="2026-04-23 17:54:33.739186652 +0000 UTC m=+133.034465235" observedRunningTime="2026-04-23 17:54:38.629955814 +0000 UTC m=+137.925234410" watchObservedRunningTime="2026-04-23 17:54:38.630873933 +0000 UTC m=+137.926152530"
Apr 23 17:54:39.610553 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:39.610275 2566 generic.go:358] "Generic (PLEG): container finished" podID="0eebe585-3752-4ef2-ba49-6f427a3ebdce" containerID="c56164975039474f216433222a12ed4fdba1fcd6398ead99ad52f9969441b06b" exitCode=0
Apr 23 17:54:39.611049 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:39.610361 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b4p7v" event={"ID":"0eebe585-3752-4ef2-ba49-6f427a3ebdce","Type":"ContainerDied","Data":"c56164975039474f216433222a12ed4fdba1fcd6398ead99ad52f9969441b06b"}
Apr 23 17:54:39.611464 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:39.611140 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp"
Apr 23 17:54:39.611464 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:39.611170 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp"
Apr 23 17:54:39.611464 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:39.611187 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp"
Apr 23 17:54:39.625957 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:39.625933 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp"
Apr 23 17:54:39.626191 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:39.626176 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp"
Apr 23 17:54:40.249922 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:40.249893 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jd2kh"
Apr 23 17:54:40.250138 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:40.250008 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-jhvgn"
Apr 23 17:54:40.250138 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:40.250015 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jd2kh" podUID="2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc"
Apr 23 17:54:40.250138 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:40.250095 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-jhvgn" podUID="bf59011d-e01e-49f9-b468-33af8f5a6489"
Apr 23 17:54:40.250297 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:40.250132 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96rvc"
Apr 23 17:54:40.250297 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:40.250255 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96rvc" podUID="ec0108e4-36f5-4959-99b0-8fe6326c7aaa"
Apr 23 17:54:40.468538 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:40.468494 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-jd2kh"]
Apr 23 17:54:40.469366 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:40.469336 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-jhvgn"]
Apr 23 17:54:40.471011 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:40.470989 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-96rvc"]
Apr 23 17:54:40.612721 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:40.612683 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96rvc"
Apr 23 17:54:40.613232 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:40.612815 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jd2kh"
Apr 23 17:54:40.613232 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:40.612933 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jd2kh" podUID="2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc"
Apr 23 17:54:40.613232 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:40.612822 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96rvc" podUID="ec0108e4-36f5-4959-99b0-8fe6326c7aaa"
Apr 23 17:54:40.613232 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:40.612989 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-jhvgn"
Apr 23 17:54:40.613232 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:40.613066 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-jhvgn" podUID="bf59011d-e01e-49f9-b468-33af8f5a6489"
Apr 23 17:54:41.189240 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:41.189208 2566 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Apr 23 17:54:41.616668 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:41.616633 2566 generic.go:358] "Generic (PLEG): container finished" podID="0eebe585-3752-4ef2-ba49-6f427a3ebdce" containerID="57b22102ce5f168b4dc5c34a0c3facd2c548e25613a8ff091d002cc3c2d90d78" exitCode=0
Apr 23 17:54:41.617367 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:41.616723 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b4p7v" event={"ID":"0eebe585-3752-4ef2-ba49-6f427a3ebdce","Type":"ContainerDied","Data":"57b22102ce5f168b4dc5c34a0c3facd2c548e25613a8ff091d002cc3c2d90d78"}
Apr 23 17:54:42.249559 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:42.249514 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jd2kh"
Apr 23 17:54:42.249559 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:42.249538 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-jhvgn"
Apr 23 17:54:42.249831 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:42.249630 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96rvc"
Apr 23 17:54:42.249831 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:42.249656 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jd2kh" podUID="2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc"
pod="openshift-network-diagnostics/network-check-target-jd2kh" podUID="2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc" Apr 23 17:54:42.249831 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:42.249727 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-jhvgn" podUID="bf59011d-e01e-49f9-b468-33af8f5a6489" Apr 23 17:54:42.249831 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:42.249787 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96rvc" podUID="ec0108e4-36f5-4959-99b0-8fe6326c7aaa" Apr 23 17:54:42.250148 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:42.250132 2566 scope.go:117] "RemoveContainer" containerID="6a71ba03dee221f9bd2a17cfb6108fb69eb3264565066869351d024e28c61fc0" Apr 23 17:54:42.250352 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:42.250329 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_openshift-machine-config-operator(7fc0473024b4c48d914a6628102ac7a2)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal" podUID="7fc0473024b4c48d914a6628102ac7a2" Apr 23 17:54:43.623139 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:43.622872 2566 generic.go:358] "Generic (PLEG): container finished" podID="0eebe585-3752-4ef2-ba49-6f427a3ebdce" containerID="4f16f2e5fb4f7c2e4fe9573a567cdd9d31bb9126f6dfd55a1b455cc0d7363278" exitCode=0 Apr 23 17:54:43.623139 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:43.622922 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b4p7v" event={"ID":"0eebe585-3752-4ef2-ba49-6f427a3ebdce","Type":"ContainerDied","Data":"4f16f2e5fb4f7c2e4fe9573a567cdd9d31bb9126f6dfd55a1b455cc0d7363278"} Apr 23 17:54:44.250078 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:44.250038 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-jhvgn" Apr 23 17:54:44.250078 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:44.250063 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jd2kh" Apr 23 17:54:44.250324 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:44.250157 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-jhvgn" podUID="bf59011d-e01e-49f9-b468-33af8f5a6489" Apr 23 17:54:44.250324 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:44.250200 2566 util.go:30] "No sandbox for pod can be found. 
Apr 23 17:54:44.250324 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:44.250287 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96rvc" podUID="ec0108e4-36f5-4959-99b0-8fe6326c7aaa"
Apr 23 17:54:44.250480 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:44.250380 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jd2kh" podUID="2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc"
Apr 23 17:54:46.190196 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:46.190155 2566 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Apr 23 17:54:46.249645 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:46.249616 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jd2kh"
Apr 23 17:54:46.249761 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:46.249645 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-jhvgn"
Apr 23 17:54:46.249761 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:46.249621 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96rvc"
Apr 23 17:54:46.249761 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:46.249745 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jd2kh" podUID="2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc"
Apr 23 17:54:46.249916 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:46.249839 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96rvc" podUID="ec0108e4-36f5-4959-99b0-8fe6326c7aaa"
Apr 23 17:54:46.249964 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:46.249919 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-jhvgn" podUID="bf59011d-e01e-49f9-b468-33af8f5a6489"
Apr 23 17:54:48.032151 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:48.032112 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/bf59011d-e01e-49f9-b468-33af8f5a6489-original-pull-secret\") pod \"global-pull-secret-syncer-jhvgn\" (UID: \"bf59011d-e01e-49f9-b468-33af8f5a6489\") " pod="kube-system/global-pull-secret-syncer-jhvgn"
Apr 23 17:54:48.032619 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:48.032162 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ec0108e4-36f5-4959-99b0-8fe6326c7aaa-metrics-certs\") pod \"network-metrics-daemon-96rvc\" (UID: \"ec0108e4-36f5-4959-99b0-8fe6326c7aaa\") " pod="openshift-multus/network-metrics-daemon-96rvc"
Apr 23 17:54:48.032619 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:48.032262 2566 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:54:48.032619 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:48.032272 2566 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 23 17:54:48.032619 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:48.032349 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec0108e4-36f5-4959-99b0-8fe6326c7aaa-metrics-certs podName:ec0108e4-36f5-4959-99b0-8fe6326c7aaa nodeName:}" failed. No retries permitted until 2026-04-23 17:55:20.032328285 +0000 UTC m=+179.327606861 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ec0108e4-36f5-4959-99b0-8fe6326c7aaa-metrics-certs") pod "network-metrics-daemon-96rvc" (UID: "ec0108e4-36f5-4959-99b0-8fe6326c7aaa") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:54:48.032619 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:48.032367 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf59011d-e01e-49f9-b468-33af8f5a6489-original-pull-secret podName:bf59011d-e01e-49f9-b468-33af8f5a6489 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:20.032360329 +0000 UTC m=+179.327638905 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/bf59011d-e01e-49f9-b468-33af8f5a6489-original-pull-secret") pod "global-pull-secret-syncer-jhvgn" (UID: "bf59011d-e01e-49f9-b468-33af8f5a6489") : object "kube-system"/"original-pull-secret" not registered
Apr 23 17:54:48.132619 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:48.132583 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qlspw\" (UniqueName: \"kubernetes.io/projected/2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc-kube-api-access-qlspw\") pod \"network-check-target-jd2kh\" (UID: \"2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc\") " pod="openshift-network-diagnostics/network-check-target-jd2kh"
Apr 23 17:54:48.132811 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:48.132771 2566 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 23 17:54:48.132811 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:48.132801 2566 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 23 17:54:48.132811 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:48.132813 2566 projected.go:194] Error preparing data for projected volume kube-api-access-qlspw for pod openshift-network-diagnostics/network-check-target-jd2kh: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:54:48.132949 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:48.132865 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc-kube-api-access-qlspw podName:2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc nodeName:}" failed. No retries permitted until 2026-04-23 17:55:20.132851238 +0000 UTC m=+179.428129813 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-qlspw" (UniqueName: "kubernetes.io/projected/2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc-kube-api-access-qlspw") pod "network-check-target-jd2kh" (UID: "2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:54:48.249651 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:48.249611 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96rvc"
Apr 23 17:54:48.249831 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:48.249612 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jd2kh"
Apr 23 17:54:48.249831 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:48.249751 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96rvc" podUID="ec0108e4-36f5-4959-99b0-8fe6326c7aaa"
Apr 23 17:54:48.249831 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:48.249816 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-jhvgn"
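Two mechanisms show up in the mount failures above. First, the "object ... not registered" errors: the kubelet only watches the Secrets and ConfigMaps actually referenced by pods admitted to the node, and until those per-object reflectors sync (the "Caches populated" entries a few seconds later), every MountVolume.SetUp that needs them fails. Second, each failed volume operation is retried by nestedpendingoperations with exponential backoff; durationBeforeRetry 32s indicates the operation has already failed repeatedly, since with the usual 500ms initial delay and factor of 2, 32s is the seventh delay (0.5 x 2^6). A sketch of that schedule, assuming those base/factor/cap values (they are kubelet defaults as I understand them, not something stated in this log):

```go
// Exponential backoff schedule for failed volume operations,
// assuming initial 500ms, factor 2, cap ~2m2s.
package main

import (
	"fmt"
	"time"
)

func main() {
	d := 500 * time.Millisecond
	maxDelay := 2*time.Minute + 2*time.Second
	for attempt := 1; attempt <= 9; attempt++ {
		fmt.Printf("attempt %d fails -> durationBeforeRetry %s\n", attempt, d)
		d *= 2
		if d > maxDelay {
			d = maxDelay
		}
	}
	// attempt 7 prints 32s, matching the nestedpendingoperations entries.
}
```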
Apr 23 17:54:48.250007 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:48.249896 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jd2kh" podUID="2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc"
Apr 23 17:54:48.250007 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:48.249977 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-jhvgn" podUID="bf59011d-e01e-49f9-b468-33af8f5a6489"
Apr 23 17:54:50.249189 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:50.249117 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jd2kh"
Apr 23 17:54:50.249189 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:50.249148 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-jhvgn"
Apr 23 17:54:50.249611 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:50.249228 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jd2kh" podUID="2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc"
Apr 23 17:54:50.249611 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:50.249265 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96rvc"
Apr 23 17:54:50.249611 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:50.249355 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96rvc" podUID="ec0108e4-36f5-4959-99b0-8fe6326c7aaa"
Apr 23 17:54:50.249611 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:50.249448 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-jhvgn" podUID="bf59011d-e01e-49f9-b468-33af8f5a6489"
Apr 23 17:54:50.638551 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:50.638517 2566 generic.go:358] "Generic (PLEG): container finished" podID="0eebe585-3752-4ef2-ba49-6f427a3ebdce" containerID="ae8ebf501d30cdb78c09927245920710c681479f39c63eaa416508a54c67c5c5" exitCode=0
Apr 23 17:54:50.638715 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:50.638584 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b4p7v" event={"ID":"0eebe585-3752-4ef2-ba49-6f427a3ebdce","Type":"ContainerDied","Data":"ae8ebf501d30cdb78c09927245920710c681479f39c63eaa416508a54c67c5c5"}
Apr 23 17:54:51.644979 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:51.644797 2566 generic.go:358] "Generic (PLEG): container finished" podID="0eebe585-3752-4ef2-ba49-6f427a3ebdce" containerID="47cb8da36a9be2475c0f3c31cf001d6c0ed0c7239f3057a0e45d84d7ed64a2ca" exitCode=0
Apr 23 17:54:51.645388 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:51.644880 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b4p7v" event={"ID":"0eebe585-3752-4ef2-ba49-6f427a3ebdce","Type":"ContainerDied","Data":"47cb8da36a9be2475c0f3c31cf001d6c0ed0c7239f3057a0e45d84d7ed64a2ca"}
Apr 23 17:54:52.250067 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:52.250027 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jd2kh"
Apr 23 17:54:52.250235 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:52.250078 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96rvc"
Apr 23 17:54:52.250235 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:52.250085 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-jhvgn"
Apr 23 17:54:52.252743 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:52.252714 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Apr 23 17:54:52.252743 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:52.252746 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"original-pull-secret\""
Apr 23 17:54:52.252974 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:52.252762 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-q5nnb\""
Apr 23 17:54:52.252974 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:52.252768 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Apr 23 17:54:52.252974 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:52.252794 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Apr 23 17:54:52.252974 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:52.252836 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-r6xp2\""
Apr 23 17:54:52.649677 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:52.649639 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b4p7v" event={"ID":"0eebe585-3752-4ef2-ba49-6f427a3ebdce","Type":"ContainerStarted","Data":"8e1fd43a421edf635bd32649d051edbc63bd011cabea7bbe4897400a9e113103"}
Apr 23 17:54:52.706269 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:52.706227 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-b4p7v" podStartSLOduration=7.60701874 podStartE2EDuration="40.706211828s" podCreationTimestamp="2026-04-23 17:54:12 +0000 UTC" firstStartedPulling="2026-04-23 17:54:16.513178758 +0000 UTC m=+115.808457333" lastFinishedPulling="2026-04-23 17:54:49.612371846 +0000 UTC m=+148.907650421" observedRunningTime="2026-04-23 17:54:52.705594698 +0000 UTC m=+152.000873294" watchObservedRunningTime="2026-04-23 17:54:52.706211828 +0000 UTC m=+152.001490427"
Apr 23 17:54:53.470944 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.470917 2566 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-172.ec2.internal" event="NodeReady"
Apr 23 17:54:53.539439 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.539406 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-7fb885f848-mqdhm"]
Apr 23 17:54:53.542088 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.542073 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-7fb885f848-mqdhm"
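The pod_startup_latency_tracker entries encode a simple derivation: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from it. For multus-additional-cni-plugins-b4p7v above: 40.706s - (148.908 - 115.808) = 7.607s, exactly the logged value. Reproduced from the monotonic m=+... offsets in that entry:

```go
// Reproduce podStartSLOduration for multus-additional-cni-plugins-b4p7v
// from the monotonic (m=+...) offsets in the log entry above.
package main

import "fmt"

func main() {
	const (
		firstStartedPulling = 115.808457333 // m=+115.808457333
		lastFinishedPulling = 148.907650421 // m=+148.907650421
		e2e                 = 40.706211828  // podStartE2EDuration in seconds
	)
	pull := lastFinishedPulling - firstStartedPulling
	slo := e2e - pull
	fmt.Printf("pull window %.9fs, SLO duration %.9fs\n", pull, slo)
	// prints SLO duration 7.607018740s, matching podStartSLOduration=7.60701874
}
```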
Apr 23 17:54:53.544227 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.544206 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-private-configuration\""
Apr 23 17:54:53.544621 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.544600 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Apr 23 17:54:53.544804 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.544663 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-l8dx7\""
Apr 23 17:54:53.545010 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.544996 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Apr 23 17:54:53.550326 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.550281 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Apr 23 17:54:53.563829 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.563801 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-7fb885f848-mqdhm"]
Apr 23 17:54:53.570830 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.570795 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-ptrbw"]
Apr 23 17:54:53.576945 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.576915 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-75587bd455-bl4fl"]
Apr 23 17:54:53.577121 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.577066 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-ptrbw"
Apr 23 17:54:53.579993 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.579972 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-54ff9bfc64-gddsn"]
Apr 23 17:54:53.580134 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.580118 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-bl4fl"
Apr 23 17:54:53.581943 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.581919 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Apr 23 17:54:53.582091 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.581945 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-jll6l\""
Apr 23 17:54:53.582091 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.581984 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"cluster-monitoring-operator-dockercfg-nhbht\""
Apr 23 17:54:53.582091 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.581985 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Apr 23 17:54:53.582448 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.582428 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"cluster-monitoring-operator-tls\""
Apr 23 17:54:53.582548 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.582511 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"openshift-service-ca.crt\""
Apr 23 17:54:53.582861 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.582846 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-g4wkz"]
Apr 23 17:54:53.582984 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.582969 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-54ff9bfc64-gddsn"
Apr 23 17:54:53.584095 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.583727 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kube-root-ca.crt\""
Apr 23 17:54:53.584095 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.583902 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"telemetry-config\""
Apr 23 17:54:53.585793 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.585776 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Apr 23 17:54:53.586013 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.585997 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6dnld"]
Apr 23 17:54:53.586146 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.586129 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-g4wkz"
Apr 23 17:54:53.586202 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.586191 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Apr 23 17:54:53.586344 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.586323 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"default-ingress-cert\""
Apr 23 17:54:53.586621 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.586585 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Apr 23 17:54:53.586693 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.586585 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-fd7p8\""
Apr 23 17:54:53.586873 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.586861 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Apr 23 17:54:53.586924 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.586911 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Apr 23 17:54:53.588366 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.588349 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-storage-operator\"/\"openshift-service-ca.crt\""
Apr 23 17:54:53.588441 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.588350 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-storage-operator\"/\"volume-data-source-validator-dockercfg-tq6vv\""
Apr 23 17:54:53.588953 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.588936 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-storage-operator\"/\"kube-root-ca.crt\""
Apr 23 17:54:53.589034 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.588970 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-bfm5m"]
Apr 23 17:54:53.589132 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.589115 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6dnld"
Apr 23 17:54:53.591794 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.591773 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-cb95c66f6-jwhmv"]
Apr 23 17:54:53.591916 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.591901 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-bfm5m"
Apr 23 17:54:53.593586 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.593569 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Apr 23 17:54:53.594216 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.594196 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Apr 23 17:54:53.594342 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.594231 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Apr 23 17:54:53.594402 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.594362 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\""
Apr 23 17:54:53.594678 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.594656 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\""
Apr 23 17:54:53.594764 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.594699 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Apr 23 17:54:53.594764 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.594713 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Apr 23 17:54:53.594855 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.594769 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-9d4b6777b-5kpdc"]
Apr 23 17:54:53.594855 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.594790 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-hk57j\""
Apr 23 17:54:53.594960 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.594951 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-cb95c66f6-jwhmv"
Apr 23 17:54:53.595793 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.595776 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-xx6v9\""
Apr 23 17:54:53.597259 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.597060 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Apr 23 17:54:53.597658 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.597625 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Apr 23 17:54:53.597970 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.597949 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"default-dockercfg-r4kqt\""
Apr 23 17:54:53.597970 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.597957 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-8894fc9bd-pchtp"]
Apr 23 17:54:53.598108 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.598049 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-9d4b6777b-5kpdc"
Apr 23 17:54:53.599804 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.599782 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\""
Apr 23 17:54:53.599967 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.599940 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\""
Apr 23 17:54:53.600161 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.600141 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\""
Apr 23 17:54:53.600161 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.600151 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-4qqkh\""
Apr 23 17:54:53.600531 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.600516 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\""
Apr 23 17:54:53.601278 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.601260 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-gm4kb"]
Apr 23 17:54:53.601419 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.601405 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-8894fc9bd-pchtp"
Apr 23 17:54:53.602975 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.602956 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"network-diagnostics-dockercfg-jst7f\""
Apr 23 17:54:53.604476 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.604446 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-585dfdc468-kfcjl"]
Apr 23 17:54:53.604623 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.604604 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-gm4kb"
Apr 23 17:54:53.606567 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.606545 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Apr 23 17:54:53.606794 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.606777 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-xzdf6\""
Apr 23 17:54:53.606882 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.606778 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Apr 23 17:54:53.606937 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.606780 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Apr 23 17:54:53.607077 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.607061 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\""
Apr 23 17:54:53.607702 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.607682 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-d6fc45fc5-fqsnm"]
Apr 23 17:54:53.607841 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.607824 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-585dfdc468-kfcjl"
Apr 23 17:54:53.609414 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.609397 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"operator-dockercfg-cmn6c\""
Apr 23 17:54:53.609660 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.609646 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"service-ca-bundle\""
Apr 23 17:54:53.609713 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.609688 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"openshift-service-ca.crt\""
Apr 23 17:54:53.610375 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.610320 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"openshift-insights-serving-cert\""
Apr 23 17:54:53.610890 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.610754 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-root-ca.crt\""
Apr 23 17:54:53.611606 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.611579 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6dnld"]
Apr 23 17:54:53.611709 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.611619 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-ptrbw"]
Apr 23 17:54:53.611763 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.611749 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-fqsnm"
Apr 23 17:54:53.615193 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.615172 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Apr 23 17:54:53.615798 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.615776 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Apr 23 17:54:53.616436 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.616411 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-bfm5m"]
Apr 23 17:54:53.618886 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.618849 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-75587bd455-bl4fl"]
Apr 23 17:54:53.619023 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.619008 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Apr 23 17:54:53.619231 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.619216 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Apr 23 17:54:53.619946 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.619927 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-5fz29\""
Apr 23 17:54:53.621085 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.621062 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"trusted-ca-bundle\""
Apr 23 17:54:53.622029 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.622008 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-cb95c66f6-jwhmv"]
Apr 23 17:54:53.628649 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.628606 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-9d4b6777b-5kpdc"]
Apr 23 17:54:53.629420 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.629338 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-8894fc9bd-pchtp"]
Apr 23 17:54:53.629909 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.629885 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress/router-default-54ff9bfc64-gddsn"]
Apr 23 17:54:53.630702 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.630684 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-d6fc45fc5-fqsnm"]
Apr 23 17:54:53.631590 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.631572 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-gm4kb"]
Apr 23 17:54:53.636276 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.636253 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-g4wkz"]
Apr 23 17:54:53.644958 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.644939 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-585dfdc468-kfcjl"]
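The burst of "operationExecutor.VerifyControllerAttachedVolume started" entries that follows is the volume manager's reconciler walking its desired state of world: once the node goes Ready and the scheduler places this wave of pods, every (pod, volume) pair is added to the desired state and gets one verify/mount operation, hence one log line each. A minimal model of the structure driving that loop; the names are illustrative stand-ins, not the kubelet's own types:

```go
// Minimal model of the desired-state-of-world the volume manager's
// reconciler iterates over; illustrative only.
package main

import "fmt"

type volumeToMount struct {
	PodName    string
	VolumeName string
	UniqueName string // plugin-qualified, e.g. "kubernetes.io/configmap/<uid>-nginx-conf"
}

func main() {
	desired := []volumeToMount{
		{"networking-console-plugin-cb95c66f6-jwhmv", "nginx-conf",
			"kubernetes.io/configmap/fc849c85-296b-4ebd-9bd4-27f9edfd3785-nginx-conf"},
		{"image-registry-7fb885f848-mqdhm", "ca-trust-extracted",
			"kubernetes.io/empty-dir/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-ca-trust-extracted"},
	}
	for _, v := range desired {
		// One log line per pair: verify the attach, then mount,
		// retrying with backoff on error.
		fmt.Printf("VerifyControllerAttachedVolume started for volume %q pod %q\n",
			v.VolumeName, v.PodName)
	}
}
```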
Apr 23 17:54:53.667135 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.667116 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fc849c85-296b-4ebd-9bd4-27f9edfd3785-nginx-conf\") pod \"networking-console-plugin-cb95c66f6-jwhmv\" (UID: \"fc849c85-296b-4ebd-9bd4-27f9edfd3785\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-jwhmv"
Apr 23 17:54:53.667436 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.667141 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-ca-trust-extracted\") pod \"image-registry-7fb885f848-mqdhm\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " pod="openshift-image-registry/image-registry-7fb885f848-mqdhm"
Apr 23 17:54:53.667436 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.667160 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-trusted-ca\") pod \"image-registry-7fb885f848-mqdhm\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " pod="openshift-image-registry/image-registry-7fb885f848-mqdhm"
Apr 23 17:54:53.667436 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.667179 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gww94\" (UniqueName: \"kubernetes.io/projected/549ece9d-4598-441f-a940-cecc154fbf7e-kube-api-access-gww94\") pod \"volume-data-source-validator-7c6cbb6c87-g4wkz\" (UID: \"549ece9d-4598-441f-a940-cecc154fbf7e\") " pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-g4wkz"
Apr 23 17:54:53.667436 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.667220 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-bound-sa-token\") pod \"image-registry-7fb885f848-mqdhm\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " pod="openshift-image-registry/image-registry-7fb885f848-mqdhm"
Apr 23 17:54:53.667436 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.667265 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxmmj\" (UniqueName: \"kubernetes.io/projected/dd76c0f6-b46d-43a0-a71f-55a695fd6d99-kube-api-access-wxmmj\") pod \"insights-operator-585dfdc468-kfcjl\" (UID: \"dd76c0f6-b46d-43a0-a71f-55a695fd6d99\") " pod="openshift-insights/insights-operator-585dfdc468-kfcjl"
Apr 23 17:54:53.667436 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.667326 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/274b9ba8-597e-49dd-9ba0-e1243dc7b259-service-ca-bundle\") pod \"router-default-54ff9bfc64-gddsn\" (UID: \"274b9ba8-597e-49dd-9ba0-e1243dc7b259\") " pod="openshift-ingress/router-default-54ff9bfc64-gddsn"
Apr 23 17:54:53.667436 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.667363 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd76c0f6-b46d-43a0-a71f-55a695fd6d99-serving-cert\") pod \"insights-operator-585dfdc468-kfcjl\" (UID: \"dd76c0f6-b46d-43a0-a71f-55a695fd6d99\") " pod="openshift-insights/insights-operator-585dfdc468-kfcjl"
Apr 23 17:54:53.667436 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.667387 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-registry-certificates\") pod \"image-registry-7fb885f848-mqdhm\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " pod="openshift-image-registry/image-registry-7fb885f848-mqdhm"
Apr 23 17:54:53.667436 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.667413 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dd76c0f6-b46d-43a0-a71f-55a695fd6d99-tmp\") pod \"insights-operator-585dfdc468-kfcjl\" (UID: \"dd76c0f6-b46d-43a0-a71f-55a695fd6d99\") " pod="openshift-insights/insights-operator-585dfdc468-kfcjl"
Apr 23 17:54:53.667436 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.667437 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-image-registry-private-configuration\") pod \"image-registry-7fb885f848-mqdhm\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " pod="openshift-image-registry/image-registry-7fb885f848-mqdhm"
Apr 23 17:54:53.667869 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.667460 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b53d5724-71d1-441d-9546-a103c6736771-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-6dnld\" (UID: \"b53d5724-71d1-441d-9546-a103c6736771\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6dnld"
Apr 23 17:54:53.667869 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.667493 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/fc849c85-296b-4ebd-9bd4-27f9edfd3785-networking-console-plugin-cert\") pod \"networking-console-plugin-cb95c66f6-jwhmv\" (UID: \"fc849c85-296b-4ebd-9bd4-27f9edfd3785\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-jwhmv"
Apr 23 17:54:53.667869 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.667550 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9cxx\" (UniqueName: \"kubernetes.io/projected/df076eb4-c3f3-4cbf-8cee-a735d1572b5b-kube-api-access-d9cxx\") pod \"kube-storage-version-migrator-operator-6769c5d45-bfm5m\" (UID: \"df076eb4-c3f3-4cbf-8cee-a735d1572b5b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-bfm5m"
Apr 23 17:54:53.667869 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.667583 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwm4w\" (UniqueName: \"kubernetes.io/projected/790bfe6f-76d8-43c6-a545-a921f86e66cd-kube-api-access-kwm4w\") pod \"network-check-source-8894fc9bd-pchtp\" (UID: \"790bfe6f-76d8-43c6-a545-a921f86e66cd\") " pod="openshift-network-diagnostics/network-check-source-8894fc9bd-pchtp"
Apr 23 17:54:53.667869 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.667617 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36169332-5c35-4e99-b318-65e24dfcc370-serving-cert\") pod \"service-ca-operator-d6fc45fc5-fqsnm\" (UID: \"36169332-5c35-4e99-b318-65e24dfcc370\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-fqsnm"
Apr 23 17:54:53.667869 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.667677 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/1992d43a-7589-4ec9-b815-8a2c284b237c-telemetry-config\") pod \"cluster-monitoring-operator-75587bd455-bl4fl\" (UID: \"1992d43a-7589-4ec9-b815-8a2c284b237c\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-bl4fl"
Apr 23 17:54:53.667869 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.667706 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp7ml\" (UniqueName: \"kubernetes.io/projected/b53d5724-71d1-441d-9546-a103c6736771-kube-api-access-gp7ml\") pod \"cluster-samples-operator-6dc5bdb6b4-6dnld\" (UID: \"b53d5724-71d1-441d-9546-a103c6736771\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6dnld"
Apr 23 17:54:53.667869 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.667733 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-registry-tls\") pod \"image-registry-7fb885f848-mqdhm\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " pod="openshift-image-registry/image-registry-7fb885f848-mqdhm"
Apr 23 17:54:53.667869 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.667770 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9334253b-6eff-4ad7-9cc7-5d96bdb994ad-config\") pod \"console-operator-9d4b6777b-5kpdc\" (UID: \"9334253b-6eff-4ad7-9cc7-5d96bdb994ad\") " pod="openshift-console-operator/console-operator-9d4b6777b-5kpdc"
Apr 23 17:54:53.667869 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.667795 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df076eb4-c3f3-4cbf-8cee-a735d1572b5b-serving-cert\") pod \"kube-storage-version-migrator-operator-6769c5d45-bfm5m\" (UID: \"df076eb4-c3f3-4cbf-8cee-a735d1572b5b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-bfm5m"
Apr 23 17:54:53.667869 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.667822 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df076eb4-c3f3-4cbf-8cee-a735d1572b5b-config\") pod \"kube-storage-version-migrator-operator-6769c5d45-bfm5m\" (UID: \"df076eb4-c3f3-4cbf-8cee-a735d1572b5b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-bfm5m"
Apr 23 17:54:53.667869 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.667840 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/158cc267-e1dc-48e1-90d2-dba2495a9735-tmp-dir\") pod \"dns-default-ptrbw\"
(UID: \"158cc267-e1dc-48e1-90d2-dba2495a9735\") " pod="openshift-dns/dns-default-ptrbw" Apr 23 17:54:53.667869 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.667855 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9crj\" (UniqueName: \"kubernetes.io/projected/158cc267-e1dc-48e1-90d2-dba2495a9735-kube-api-access-g9crj\") pod \"dns-default-ptrbw\" (UID: \"158cc267-e1dc-48e1-90d2-dba2495a9735\") " pod="openshift-dns/dns-default-ptrbw" Apr 23 17:54:53.668401 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.667884 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/274b9ba8-597e-49dd-9ba0-e1243dc7b259-default-certificate\") pod \"router-default-54ff9bfc64-gddsn\" (UID: \"274b9ba8-597e-49dd-9ba0-e1243dc7b259\") " pod="openshift-ingress/router-default-54ff9bfc64-gddsn" Apr 23 17:54:53.668401 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.667929 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd76c0f6-b46d-43a0-a71f-55a695fd6d99-service-ca-bundle\") pod \"insights-operator-585dfdc468-kfcjl\" (UID: \"dd76c0f6-b46d-43a0-a71f-55a695fd6d99\") " pod="openshift-insights/insights-operator-585dfdc468-kfcjl" Apr 23 17:54:53.668401 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.667958 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9334253b-6eff-4ad7-9cc7-5d96bdb994ad-trusted-ca\") pod \"console-operator-9d4b6777b-5kpdc\" (UID: \"9334253b-6eff-4ad7-9cc7-5d96bdb994ad\") " pod="openshift-console-operator/console-operator-9d4b6777b-5kpdc" Apr 23 17:54:53.668401 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.667989 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/158cc267-e1dc-48e1-90d2-dba2495a9735-config-volume\") pod \"dns-default-ptrbw\" (UID: \"158cc267-e1dc-48e1-90d2-dba2495a9735\") " pod="openshift-dns/dns-default-ptrbw" Apr 23 17:54:53.668401 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.668015 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/1992d43a-7589-4ec9-b815-8a2c284b237c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-75587bd455-bl4fl\" (UID: \"1992d43a-7589-4ec9-b815-8a2c284b237c\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-bl4fl" Apr 23 17:54:53.668401 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.668051 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/dd76c0f6-b46d-43a0-a71f-55a695fd6d99-snapshots\") pod \"insights-operator-585dfdc468-kfcjl\" (UID: \"dd76c0f6-b46d-43a0-a71f-55a695fd6d99\") " pod="openshift-insights/insights-operator-585dfdc468-kfcjl" Apr 23 17:54:53.668401 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.668074 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9334253b-6eff-4ad7-9cc7-5d96bdb994ad-serving-cert\") pod \"console-operator-9d4b6777b-5kpdc\" (UID: 
\"9334253b-6eff-4ad7-9cc7-5d96bdb994ad\") " pod="openshift-console-operator/console-operator-9d4b6777b-5kpdc" Apr 23 17:54:53.668401 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.668119 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/72010597-3b11-4326-ad5d-3af1af12b593-cert\") pod \"ingress-canary-gm4kb\" (UID: \"72010597-3b11-4326-ad5d-3af1af12b593\") " pod="openshift-ingress-canary/ingress-canary-gm4kb" Apr 23 17:54:53.668401 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.668143 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7bnt\" (UniqueName: \"kubernetes.io/projected/72010597-3b11-4326-ad5d-3af1af12b593-kube-api-access-d7bnt\") pod \"ingress-canary-gm4kb\" (UID: \"72010597-3b11-4326-ad5d-3af1af12b593\") " pod="openshift-ingress-canary/ingress-canary-gm4kb" Apr 23 17:54:53.668401 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.668165 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/274b9ba8-597e-49dd-9ba0-e1243dc7b259-metrics-certs\") pod \"router-default-54ff9bfc64-gddsn\" (UID: \"274b9ba8-597e-49dd-9ba0-e1243dc7b259\") " pod="openshift-ingress/router-default-54ff9bfc64-gddsn" Apr 23 17:54:53.668401 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.668200 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/274b9ba8-597e-49dd-9ba0-e1243dc7b259-stats-auth\") pod \"router-default-54ff9bfc64-gddsn\" (UID: \"274b9ba8-597e-49dd-9ba0-e1243dc7b259\") " pod="openshift-ingress/router-default-54ff9bfc64-gddsn" Apr 23 17:54:53.668401 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.668222 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmvcz\" (UniqueName: \"kubernetes.io/projected/274b9ba8-597e-49dd-9ba0-e1243dc7b259-kube-api-access-zmvcz\") pod \"router-default-54ff9bfc64-gddsn\" (UID: \"274b9ba8-597e-49dd-9ba0-e1243dc7b259\") " pod="openshift-ingress/router-default-54ff9bfc64-gddsn" Apr 23 17:54:53.668401 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.668248 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxxtw\" (UniqueName: \"kubernetes.io/projected/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-kube-api-access-vxxtw\") pod \"image-registry-7fb885f848-mqdhm\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" Apr 23 17:54:53.668401 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.668267 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-installation-pull-secrets\") pod \"image-registry-7fb885f848-mqdhm\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" Apr 23 17:54:53.668401 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.668286 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rct5m\" (UniqueName: \"kubernetes.io/projected/9334253b-6eff-4ad7-9cc7-5d96bdb994ad-kube-api-access-rct5m\") pod \"console-operator-9d4b6777b-5kpdc\" 
(UID: \"9334253b-6eff-4ad7-9cc7-5d96bdb994ad\") " pod="openshift-console-operator/console-operator-9d4b6777b-5kpdc" Apr 23 17:54:53.668859 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.668339 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4vx9\" (UniqueName: \"kubernetes.io/projected/36169332-5c35-4e99-b318-65e24dfcc370-kube-api-access-w4vx9\") pod \"service-ca-operator-d6fc45fc5-fqsnm\" (UID: \"36169332-5c35-4e99-b318-65e24dfcc370\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-fqsnm" Apr 23 17:54:53.668859 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.668375 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvsdn\" (UniqueName: \"kubernetes.io/projected/1992d43a-7589-4ec9-b815-8a2c284b237c-kube-api-access-kvsdn\") pod \"cluster-monitoring-operator-75587bd455-bl4fl\" (UID: \"1992d43a-7589-4ec9-b815-8a2c284b237c\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-bl4fl" Apr 23 17:54:53.668859 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.668396 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd76c0f6-b46d-43a0-a71f-55a695fd6d99-trusted-ca-bundle\") pod \"insights-operator-585dfdc468-kfcjl\" (UID: \"dd76c0f6-b46d-43a0-a71f-55a695fd6d99\") " pod="openshift-insights/insights-operator-585dfdc468-kfcjl" Apr 23 17:54:53.668859 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.668431 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36169332-5c35-4e99-b318-65e24dfcc370-config\") pod \"service-ca-operator-d6fc45fc5-fqsnm\" (UID: \"36169332-5c35-4e99-b318-65e24dfcc370\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-fqsnm" Apr 23 17:54:53.668859 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.668463 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/158cc267-e1dc-48e1-90d2-dba2495a9735-metrics-tls\") pod \"dns-default-ptrbw\" (UID: \"158cc267-e1dc-48e1-90d2-dba2495a9735\") " pod="openshift-dns/dns-default-ptrbw" Apr 23 17:54:53.769134 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.769046 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/274b9ba8-597e-49dd-9ba0-e1243dc7b259-stats-auth\") pod \"router-default-54ff9bfc64-gddsn\" (UID: \"274b9ba8-597e-49dd-9ba0-e1243dc7b259\") " pod="openshift-ingress/router-default-54ff9bfc64-gddsn" Apr 23 17:54:53.769134 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.769090 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zmvcz\" (UniqueName: \"kubernetes.io/projected/274b9ba8-597e-49dd-9ba0-e1243dc7b259-kube-api-access-zmvcz\") pod \"router-default-54ff9bfc64-gddsn\" (UID: \"274b9ba8-597e-49dd-9ba0-e1243dc7b259\") " pod="openshift-ingress/router-default-54ff9bfc64-gddsn" Apr 23 17:54:53.769134 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.769117 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vxxtw\" (UniqueName: \"kubernetes.io/projected/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-kube-api-access-vxxtw\") pod \"image-registry-7fb885f848-mqdhm\" 
(UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" Apr 23 17:54:53.769407 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.769240 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-installation-pull-secrets\") pod \"image-registry-7fb885f848-mqdhm\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" Apr 23 17:54:53.769407 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.769281 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rct5m\" (UniqueName: \"kubernetes.io/projected/9334253b-6eff-4ad7-9cc7-5d96bdb994ad-kube-api-access-rct5m\") pod \"console-operator-9d4b6777b-5kpdc\" (UID: \"9334253b-6eff-4ad7-9cc7-5d96bdb994ad\") " pod="openshift-console-operator/console-operator-9d4b6777b-5kpdc" Apr 23 17:54:53.769504 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.769442 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w4vx9\" (UniqueName: \"kubernetes.io/projected/36169332-5c35-4e99-b318-65e24dfcc370-kube-api-access-w4vx9\") pod \"service-ca-operator-d6fc45fc5-fqsnm\" (UID: \"36169332-5c35-4e99-b318-65e24dfcc370\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-fqsnm" Apr 23 17:54:53.769504 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.769488 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kvsdn\" (UniqueName: \"kubernetes.io/projected/1992d43a-7589-4ec9-b815-8a2c284b237c-kube-api-access-kvsdn\") pod \"cluster-monitoring-operator-75587bd455-bl4fl\" (UID: \"1992d43a-7589-4ec9-b815-8a2c284b237c\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-bl4fl" Apr 23 17:54:53.769601 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.769516 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd76c0f6-b46d-43a0-a71f-55a695fd6d99-trusted-ca-bundle\") pod \"insights-operator-585dfdc468-kfcjl\" (UID: \"dd76c0f6-b46d-43a0-a71f-55a695fd6d99\") " pod="openshift-insights/insights-operator-585dfdc468-kfcjl" Apr 23 17:54:53.769601 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.769566 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36169332-5c35-4e99-b318-65e24dfcc370-config\") pod \"service-ca-operator-d6fc45fc5-fqsnm\" (UID: \"36169332-5c35-4e99-b318-65e24dfcc370\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-fqsnm" Apr 23 17:54:53.769683 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.769598 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/158cc267-e1dc-48e1-90d2-dba2495a9735-metrics-tls\") pod \"dns-default-ptrbw\" (UID: \"158cc267-e1dc-48e1-90d2-dba2495a9735\") " pod="openshift-dns/dns-default-ptrbw" Apr 23 17:54:53.769683 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.769634 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fc849c85-296b-4ebd-9bd4-27f9edfd3785-nginx-conf\") pod \"networking-console-plugin-cb95c66f6-jwhmv\" (UID: \"fc849c85-296b-4ebd-9bd4-27f9edfd3785\") " 
pod="openshift-network-console/networking-console-plugin-cb95c66f6-jwhmv" Apr 23 17:54:53.769683 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.769660 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-ca-trust-extracted\") pod \"image-registry-7fb885f848-mqdhm\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" Apr 23 17:54:53.769813 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.769692 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-trusted-ca\") pod \"image-registry-7fb885f848-mqdhm\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" Apr 23 17:54:53.769813 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:53.769701 2566 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 23 17:54:53.769813 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.769717 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gww94\" (UniqueName: \"kubernetes.io/projected/549ece9d-4598-441f-a940-cecc154fbf7e-kube-api-access-gww94\") pod \"volume-data-source-validator-7c6cbb6c87-g4wkz\" (UID: \"549ece9d-4598-441f-a940-cecc154fbf7e\") " pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-g4wkz" Apr 23 17:54:53.769813 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.769741 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-bound-sa-token\") pod \"image-registry-7fb885f848-mqdhm\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" Apr 23 17:54:53.769813 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:53.769756 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/158cc267-e1dc-48e1-90d2-dba2495a9735-metrics-tls podName:158cc267-e1dc-48e1-90d2-dba2495a9735 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:54.269736995 +0000 UTC m=+153.565015572 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/158cc267-e1dc-48e1-90d2-dba2495a9735-metrics-tls") pod "dns-default-ptrbw" (UID: "158cc267-e1dc-48e1-90d2-dba2495a9735") : secret "dns-default-metrics-tls" not found Apr 23 17:54:53.769813 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.769799 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wxmmj\" (UniqueName: \"kubernetes.io/projected/dd76c0f6-b46d-43a0-a71f-55a695fd6d99-kube-api-access-wxmmj\") pod \"insights-operator-585dfdc468-kfcjl\" (UID: \"dd76c0f6-b46d-43a0-a71f-55a695fd6d99\") " pod="openshift-insights/insights-operator-585dfdc468-kfcjl" Apr 23 17:54:53.770076 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.769828 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/274b9ba8-597e-49dd-9ba0-e1243dc7b259-service-ca-bundle\") pod \"router-default-54ff9bfc64-gddsn\" (UID: \"274b9ba8-597e-49dd-9ba0-e1243dc7b259\") " pod="openshift-ingress/router-default-54ff9bfc64-gddsn" Apr 23 17:54:53.770076 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.769853 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd76c0f6-b46d-43a0-a71f-55a695fd6d99-serving-cert\") pod \"insights-operator-585dfdc468-kfcjl\" (UID: \"dd76c0f6-b46d-43a0-a71f-55a695fd6d99\") " pod="openshift-insights/insights-operator-585dfdc468-kfcjl" Apr 23 17:54:53.770076 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.769878 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-registry-certificates\") pod \"image-registry-7fb885f848-mqdhm\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" Apr 23 17:54:53.770076 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.769905 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dd76c0f6-b46d-43a0-a71f-55a695fd6d99-tmp\") pod \"insights-operator-585dfdc468-kfcjl\" (UID: \"dd76c0f6-b46d-43a0-a71f-55a695fd6d99\") " pod="openshift-insights/insights-operator-585dfdc468-kfcjl" Apr 23 17:54:53.770076 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.769935 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-image-registry-private-configuration\") pod \"image-registry-7fb885f848-mqdhm\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" Apr 23 17:54:53.770076 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.769964 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b53d5724-71d1-441d-9546-a103c6736771-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-6dnld\" (UID: \"b53d5724-71d1-441d-9546-a103c6736771\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6dnld" Apr 23 17:54:53.770076 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.769994 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/fc849c85-296b-4ebd-9bd4-27f9edfd3785-networking-console-plugin-cert\") pod \"networking-console-plugin-cb95c66f6-jwhmv\" (UID: \"fc849c85-296b-4ebd-9bd4-27f9edfd3785\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-jwhmv" Apr 23 17:54:53.770076 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.770022 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d9cxx\" (UniqueName: \"kubernetes.io/projected/df076eb4-c3f3-4cbf-8cee-a735d1572b5b-kube-api-access-d9cxx\") pod \"kube-storage-version-migrator-operator-6769c5d45-bfm5m\" (UID: \"df076eb4-c3f3-4cbf-8cee-a735d1572b5b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-bfm5m" Apr 23 17:54:53.770076 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.770048 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kwm4w\" (UniqueName: \"kubernetes.io/projected/790bfe6f-76d8-43c6-a545-a921f86e66cd-kube-api-access-kwm4w\") pod \"network-check-source-8894fc9bd-pchtp\" (UID: \"790bfe6f-76d8-43c6-a545-a921f86e66cd\") " pod="openshift-network-diagnostics/network-check-source-8894fc9bd-pchtp" Apr 23 17:54:53.770076 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.770077 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36169332-5c35-4e99-b318-65e24dfcc370-serving-cert\") pod \"service-ca-operator-d6fc45fc5-fqsnm\" (UID: \"36169332-5c35-4e99-b318-65e24dfcc370\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-fqsnm" Apr 23 17:54:53.770497 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.770110 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36169332-5c35-4e99-b318-65e24dfcc370-config\") pod \"service-ca-operator-d6fc45fc5-fqsnm\" (UID: \"36169332-5c35-4e99-b318-65e24dfcc370\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-fqsnm" Apr 23 17:54:53.770497 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.770119 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/1992d43a-7589-4ec9-b815-8a2c284b237c-telemetry-config\") pod \"cluster-monitoring-operator-75587bd455-bl4fl\" (UID: \"1992d43a-7589-4ec9-b815-8a2c284b237c\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-bl4fl" Apr 23 17:54:53.770497 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.770150 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gp7ml\" (UniqueName: \"kubernetes.io/projected/b53d5724-71d1-441d-9546-a103c6736771-kube-api-access-gp7ml\") pod \"cluster-samples-operator-6dc5bdb6b4-6dnld\" (UID: \"b53d5724-71d1-441d-9546-a103c6736771\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6dnld" Apr 23 17:54:53.770497 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.770198 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-registry-tls\") pod \"image-registry-7fb885f848-mqdhm\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" Apr 23 17:54:53.770497 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.770233 2566 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9334253b-6eff-4ad7-9cc7-5d96bdb994ad-config\") pod \"console-operator-9d4b6777b-5kpdc\" (UID: \"9334253b-6eff-4ad7-9cc7-5d96bdb994ad\") " pod="openshift-console-operator/console-operator-9d4b6777b-5kpdc" Apr 23 17:54:53.770497 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.770260 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df076eb4-c3f3-4cbf-8cee-a735d1572b5b-serving-cert\") pod \"kube-storage-version-migrator-operator-6769c5d45-bfm5m\" (UID: \"df076eb4-c3f3-4cbf-8cee-a735d1572b5b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-bfm5m" Apr 23 17:54:53.770497 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.770288 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df076eb4-c3f3-4cbf-8cee-a735d1572b5b-config\") pod \"kube-storage-version-migrator-operator-6769c5d45-bfm5m\" (UID: \"df076eb4-c3f3-4cbf-8cee-a735d1572b5b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-bfm5m" Apr 23 17:54:53.770497 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.770334 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/158cc267-e1dc-48e1-90d2-dba2495a9735-tmp-dir\") pod \"dns-default-ptrbw\" (UID: \"158cc267-e1dc-48e1-90d2-dba2495a9735\") " pod="openshift-dns/dns-default-ptrbw" Apr 23 17:54:53.770497 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.770378 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g9crj\" (UniqueName: \"kubernetes.io/projected/158cc267-e1dc-48e1-90d2-dba2495a9735-kube-api-access-g9crj\") pod \"dns-default-ptrbw\" (UID: \"158cc267-e1dc-48e1-90d2-dba2495a9735\") " pod="openshift-dns/dns-default-ptrbw" Apr 23 17:54:53.770497 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.770407 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/274b9ba8-597e-49dd-9ba0-e1243dc7b259-default-certificate\") pod \"router-default-54ff9bfc64-gddsn\" (UID: \"274b9ba8-597e-49dd-9ba0-e1243dc7b259\") " pod="openshift-ingress/router-default-54ff9bfc64-gddsn" Apr 23 17:54:53.770497 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.770432 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd76c0f6-b46d-43a0-a71f-55a695fd6d99-service-ca-bundle\") pod \"insights-operator-585dfdc468-kfcjl\" (UID: \"dd76c0f6-b46d-43a0-a71f-55a695fd6d99\") " pod="openshift-insights/insights-operator-585dfdc468-kfcjl" Apr 23 17:54:53.770497 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.770459 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9334253b-6eff-4ad7-9cc7-5d96bdb994ad-trusted-ca\") pod \"console-operator-9d4b6777b-5kpdc\" (UID: \"9334253b-6eff-4ad7-9cc7-5d96bdb994ad\") " pod="openshift-console-operator/console-operator-9d4b6777b-5kpdc" Apr 23 17:54:53.770497 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.770496 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/158cc267-e1dc-48e1-90d2-dba2495a9735-config-volume\") pod \"dns-default-ptrbw\" (UID: \"158cc267-e1dc-48e1-90d2-dba2495a9735\") " pod="openshift-dns/dns-default-ptrbw" Apr 23 17:54:53.771781 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.770526 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/1992d43a-7589-4ec9-b815-8a2c284b237c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-75587bd455-bl4fl\" (UID: \"1992d43a-7589-4ec9-b815-8a2c284b237c\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-bl4fl" Apr 23 17:54:53.771781 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:53.770580 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/274b9ba8-597e-49dd-9ba0-e1243dc7b259-service-ca-bundle podName:274b9ba8-597e-49dd-9ba0-e1243dc7b259 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:54.270539188 +0000 UTC m=+153.565817795 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/274b9ba8-597e-49dd-9ba0-e1243dc7b259-service-ca-bundle") pod "router-default-54ff9bfc64-gddsn" (UID: "274b9ba8-597e-49dd-9ba0-e1243dc7b259") : configmap references non-existent config key: service-ca.crt Apr 23 17:54:53.771781 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:53.770605 2566 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Apr 23 17:54:53.771781 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:53.770650 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1992d43a-7589-4ec9-b815-8a2c284b237c-cluster-monitoring-operator-tls podName:1992d43a-7589-4ec9-b815-8a2c284b237c nodeName:}" failed. No retries permitted until 2026-04-23 17:54:54.270635 +0000 UTC m=+153.565913580 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/1992d43a-7589-4ec9-b815-8a2c284b237c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-75587bd455-bl4fl" (UID: "1992d43a-7589-4ec9-b815-8a2c284b237c") : secret "cluster-monitoring-operator-tls" not found Apr 23 17:54:53.771781 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.770623 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/dd76c0f6-b46d-43a0-a71f-55a695fd6d99-snapshots\") pod \"insights-operator-585dfdc468-kfcjl\" (UID: \"dd76c0f6-b46d-43a0-a71f-55a695fd6d99\") " pod="openshift-insights/insights-operator-585dfdc468-kfcjl" Apr 23 17:54:53.771781 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.770696 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9334253b-6eff-4ad7-9cc7-5d96bdb994ad-serving-cert\") pod \"console-operator-9d4b6777b-5kpdc\" (UID: \"9334253b-6eff-4ad7-9cc7-5d96bdb994ad\") " pod="openshift-console-operator/console-operator-9d4b6777b-5kpdc" Apr 23 17:54:53.771781 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.770737 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fc849c85-296b-4ebd-9bd4-27f9edfd3785-nginx-conf\") pod \"networking-console-plugin-cb95c66f6-jwhmv\" (UID: \"fc849c85-296b-4ebd-9bd4-27f9edfd3785\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-jwhmv" Apr 23 17:54:53.771781 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.770786 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/72010597-3b11-4326-ad5d-3af1af12b593-cert\") pod \"ingress-canary-gm4kb\" (UID: \"72010597-3b11-4326-ad5d-3af1af12b593\") " pod="openshift-ingress-canary/ingress-canary-gm4kb" Apr 23 17:54:53.771781 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.770823 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d7bnt\" (UniqueName: \"kubernetes.io/projected/72010597-3b11-4326-ad5d-3af1af12b593-kube-api-access-d7bnt\") pod \"ingress-canary-gm4kb\" (UID: \"72010597-3b11-4326-ad5d-3af1af12b593\") " pod="openshift-ingress-canary/ingress-canary-gm4kb" Apr 23 17:54:53.771781 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.770869 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/274b9ba8-597e-49dd-9ba0-e1243dc7b259-metrics-certs\") pod \"router-default-54ff9bfc64-gddsn\" (UID: \"274b9ba8-597e-49dd-9ba0-e1243dc7b259\") " pod="openshift-ingress/router-default-54ff9bfc64-gddsn" Apr 23 17:54:53.771781 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:53.770926 2566 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 23 17:54:53.771781 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:53.770940 2566 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-7fb885f848-mqdhm: secret "image-registry-tls" not found Apr 23 17:54:53.771781 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:53.770983 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-registry-tls 
podName:a7cbc07c-c629-4c31-a456-4f9bf5b328f7 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:54.270973562 +0000 UTC m=+153.566252137 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-registry-tls") pod "image-registry-7fb885f848-mqdhm" (UID: "a7cbc07c-c629-4c31-a456-4f9bf5b328f7") : secret "image-registry-tls" not found Apr 23 17:54:53.771781 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.771247 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dd76c0f6-b46d-43a0-a71f-55a695fd6d99-tmp\") pod \"insights-operator-585dfdc468-kfcjl\" (UID: \"dd76c0f6-b46d-43a0-a71f-55a695fd6d99\") " pod="openshift-insights/insights-operator-585dfdc468-kfcjl" Apr 23 17:54:53.771781 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:53.771329 2566 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found Apr 23 17:54:53.774784 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:53.771369 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/274b9ba8-597e-49dd-9ba0-e1243dc7b259-metrics-certs podName:274b9ba8-597e-49dd-9ba0-e1243dc7b259 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:54.27135701 +0000 UTC m=+153.566635590 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/274b9ba8-597e-49dd-9ba0-e1243dc7b259-metrics-certs") pod "router-default-54ff9bfc64-gddsn" (UID: "274b9ba8-597e-49dd-9ba0-e1243dc7b259") : secret "router-metrics-certs-default" not found Apr 23 17:54:53.774784 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.771660 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/dd76c0f6-b46d-43a0-a71f-55a695fd6d99-snapshots\") pod \"insights-operator-585dfdc468-kfcjl\" (UID: \"dd76c0f6-b46d-43a0-a71f-55a695fd6d99\") " pod="openshift-insights/insights-operator-585dfdc468-kfcjl" Apr 23 17:54:53.774784 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.772253 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-ca-trust-extracted\") pod \"image-registry-7fb885f848-mqdhm\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" Apr 23 17:54:53.774784 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:53.772280 2566 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 23 17:54:53.774784 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:53.772679 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc849c85-296b-4ebd-9bd4-27f9edfd3785-networking-console-plugin-cert podName:fc849c85-296b-4ebd-9bd4-27f9edfd3785 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:54.27266215 +0000 UTC m=+153.567940737 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/fc849c85-296b-4ebd-9bd4-27f9edfd3785-networking-console-plugin-cert") pod "networking-console-plugin-cb95c66f6-jwhmv" (UID: "fc849c85-296b-4ebd-9bd4-27f9edfd3785") : secret "networking-console-plugin-cert" not found Apr 23 17:54:53.774784 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.772812 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9334253b-6eff-4ad7-9cc7-5d96bdb994ad-config\") pod \"console-operator-9d4b6777b-5kpdc\" (UID: \"9334253b-6eff-4ad7-9cc7-5d96bdb994ad\") " pod="openshift-console-operator/console-operator-9d4b6777b-5kpdc" Apr 23 17:54:53.774784 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.772836 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-trusted-ca\") pod \"image-registry-7fb885f848-mqdhm\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" Apr 23 17:54:53.774784 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:53.772874 2566 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 23 17:54:53.774784 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:53.772916 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b53d5724-71d1-441d-9546-a103c6736771-samples-operator-tls podName:b53d5724-71d1-441d-9546-a103c6736771 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:54.272903236 +0000 UTC m=+153.568181811 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/b53d5724-71d1-441d-9546-a103c6736771-samples-operator-tls") pod "cluster-samples-operator-6dc5bdb6b4-6dnld" (UID: "b53d5724-71d1-441d-9546-a103c6736771") : secret "samples-operator-tls" not found Apr 23 17:54:53.774784 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.773389 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/1992d43a-7589-4ec9-b815-8a2c284b237c-telemetry-config\") pod \"cluster-monitoring-operator-75587bd455-bl4fl\" (UID: \"1992d43a-7589-4ec9-b815-8a2c284b237c\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-bl4fl" Apr 23 17:54:53.774784 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:53.773397 2566 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 23 17:54:53.774784 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:53.773477 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72010597-3b11-4326-ad5d-3af1af12b593-cert podName:72010597-3b11-4326-ad5d-3af1af12b593 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:54.273463283 +0000 UTC m=+153.568741869 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/72010597-3b11-4326-ad5d-3af1af12b593-cert") pod "ingress-canary-gm4kb" (UID: "72010597-3b11-4326-ad5d-3af1af12b593") : secret "canary-serving-cert" not found Apr 23 17:54:53.774784 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.774187 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/158cc267-e1dc-48e1-90d2-dba2495a9735-tmp-dir\") pod \"dns-default-ptrbw\" (UID: \"158cc267-e1dc-48e1-90d2-dba2495a9735\") " pod="openshift-dns/dns-default-ptrbw" Apr 23 17:54:53.774784 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.774381 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd76c0f6-b46d-43a0-a71f-55a695fd6d99-service-ca-bundle\") pod \"insights-operator-585dfdc468-kfcjl\" (UID: \"dd76c0f6-b46d-43a0-a71f-55a695fd6d99\") " pod="openshift-insights/insights-operator-585dfdc468-kfcjl" Apr 23 17:54:53.775700 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.774949 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd76c0f6-b46d-43a0-a71f-55a695fd6d99-trusted-ca-bundle\") pod \"insights-operator-585dfdc468-kfcjl\" (UID: \"dd76c0f6-b46d-43a0-a71f-55a695fd6d99\") " pod="openshift-insights/insights-operator-585dfdc468-kfcjl" Apr 23 17:54:53.775700 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.774955 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df076eb4-c3f3-4cbf-8cee-a735d1572b5b-config\") pod \"kube-storage-version-migrator-operator-6769c5d45-bfm5m\" (UID: \"df076eb4-c3f3-4cbf-8cee-a735d1572b5b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-bfm5m" Apr 23 17:54:53.775700 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.775409 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-registry-certificates\") pod \"image-registry-7fb885f848-mqdhm\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" Apr 23 17:54:53.775700 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.775433 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9334253b-6eff-4ad7-9cc7-5d96bdb994ad-serving-cert\") pod \"console-operator-9d4b6777b-5kpdc\" (UID: \"9334253b-6eff-4ad7-9cc7-5d96bdb994ad\") " pod="openshift-console-operator/console-operator-9d4b6777b-5kpdc" Apr 23 17:54:53.775700 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.775600 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-installation-pull-secrets\") pod \"image-registry-7fb885f848-mqdhm\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" Apr 23 17:54:53.775970 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.775889 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/274b9ba8-597e-49dd-9ba0-e1243dc7b259-stats-auth\") pod \"router-default-54ff9bfc64-gddsn\" (UID: 
\"274b9ba8-597e-49dd-9ba0-e1243dc7b259\") " pod="openshift-ingress/router-default-54ff9bfc64-gddsn" Apr 23 17:54:53.775970 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.775908 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/158cc267-e1dc-48e1-90d2-dba2495a9735-config-volume\") pod \"dns-default-ptrbw\" (UID: \"158cc267-e1dc-48e1-90d2-dba2495a9735\") " pod="openshift-dns/dns-default-ptrbw" Apr 23 17:54:53.775970 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.775890 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-image-registry-private-configuration\") pod \"image-registry-7fb885f848-mqdhm\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" Apr 23 17:54:53.776253 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.776227 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9334253b-6eff-4ad7-9cc7-5d96bdb994ad-trusted-ca\") pod \"console-operator-9d4b6777b-5kpdc\" (UID: \"9334253b-6eff-4ad7-9cc7-5d96bdb994ad\") " pod="openshift-console-operator/console-operator-9d4b6777b-5kpdc" Apr 23 17:54:53.776603 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.776583 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd76c0f6-b46d-43a0-a71f-55a695fd6d99-serving-cert\") pod \"insights-operator-585dfdc468-kfcjl\" (UID: \"dd76c0f6-b46d-43a0-a71f-55a695fd6d99\") " pod="openshift-insights/insights-operator-585dfdc468-kfcjl" Apr 23 17:54:53.777240 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.777219 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df076eb4-c3f3-4cbf-8cee-a735d1572b5b-serving-cert\") pod \"kube-storage-version-migrator-operator-6769c5d45-bfm5m\" (UID: \"df076eb4-c3f3-4cbf-8cee-a735d1572b5b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-bfm5m" Apr 23 17:54:53.777913 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.777888 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36169332-5c35-4e99-b318-65e24dfcc370-serving-cert\") pod \"service-ca-operator-d6fc45fc5-fqsnm\" (UID: \"36169332-5c35-4e99-b318-65e24dfcc370\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-fqsnm" Apr 23 17:54:53.778032 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.778015 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/274b9ba8-597e-49dd-9ba0-e1243dc7b259-default-certificate\") pod \"router-default-54ff9bfc64-gddsn\" (UID: \"274b9ba8-597e-49dd-9ba0-e1243dc7b259\") " pod="openshift-ingress/router-default-54ff9bfc64-gddsn" Apr 23 17:54:53.781270 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.781238 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4vx9\" (UniqueName: \"kubernetes.io/projected/36169332-5c35-4e99-b318-65e24dfcc370-kube-api-access-w4vx9\") pod \"service-ca-operator-d6fc45fc5-fqsnm\" (UID: \"36169332-5c35-4e99-b318-65e24dfcc370\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-fqsnm" 
Apr 23 17:54:53.782143 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.782069 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rct5m\" (UniqueName: \"kubernetes.io/projected/9334253b-6eff-4ad7-9cc7-5d96bdb994ad-kube-api-access-rct5m\") pod \"console-operator-9d4b6777b-5kpdc\" (UID: \"9334253b-6eff-4ad7-9cc7-5d96bdb994ad\") " pod="openshift-console-operator/console-operator-9d4b6777b-5kpdc" Apr 23 17:54:53.782778 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.782738 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxxtw\" (UniqueName: \"kubernetes.io/projected/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-kube-api-access-vxxtw\") pod \"image-registry-7fb885f848-mqdhm\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" Apr 23 17:54:53.783578 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.783538 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-bound-sa-token\") pod \"image-registry-7fb885f848-mqdhm\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" Apr 23 17:54:53.784478 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.784433 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwm4w\" (UniqueName: \"kubernetes.io/projected/790bfe6f-76d8-43c6-a545-a921f86e66cd-kube-api-access-kwm4w\") pod \"network-check-source-8894fc9bd-pchtp\" (UID: \"790bfe6f-76d8-43c6-a545-a921f86e66cd\") " pod="openshift-network-diagnostics/network-check-source-8894fc9bd-pchtp" Apr 23 17:54:53.784576 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.784514 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gww94\" (UniqueName: \"kubernetes.io/projected/549ece9d-4598-441f-a940-cecc154fbf7e-kube-api-access-gww94\") pod \"volume-data-source-validator-7c6cbb6c87-g4wkz\" (UID: \"549ece9d-4598-441f-a940-cecc154fbf7e\") " pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-g4wkz" Apr 23 17:54:53.784965 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.784941 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9cxx\" (UniqueName: \"kubernetes.io/projected/df076eb4-c3f3-4cbf-8cee-a735d1572b5b-kube-api-access-d9cxx\") pod \"kube-storage-version-migrator-operator-6769c5d45-bfm5m\" (UID: \"df076eb4-c3f3-4cbf-8cee-a735d1572b5b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-bfm5m" Apr 23 17:54:53.785956 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.785890 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmvcz\" (UniqueName: \"kubernetes.io/projected/274b9ba8-597e-49dd-9ba0-e1243dc7b259-kube-api-access-zmvcz\") pod \"router-default-54ff9bfc64-gddsn\" (UID: \"274b9ba8-597e-49dd-9ba0-e1243dc7b259\") " pod="openshift-ingress/router-default-54ff9bfc64-gddsn" Apr 23 17:54:53.785956 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.785914 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxmmj\" (UniqueName: \"kubernetes.io/projected/dd76c0f6-b46d-43a0-a71f-55a695fd6d99-kube-api-access-wxmmj\") pod \"insights-operator-585dfdc468-kfcjl\" (UID: \"dd76c0f6-b46d-43a0-a71f-55a695fd6d99\") " 
pod="openshift-insights/insights-operator-585dfdc468-kfcjl" Apr 23 17:54:53.786152 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.786119 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvsdn\" (UniqueName: \"kubernetes.io/projected/1992d43a-7589-4ec9-b815-8a2c284b237c-kube-api-access-kvsdn\") pod \"cluster-monitoring-operator-75587bd455-bl4fl\" (UID: \"1992d43a-7589-4ec9-b815-8a2c284b237c\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-bl4fl" Apr 23 17:54:53.786609 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.786593 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7bnt\" (UniqueName: \"kubernetes.io/projected/72010597-3b11-4326-ad5d-3af1af12b593-kube-api-access-d7bnt\") pod \"ingress-canary-gm4kb\" (UID: \"72010597-3b11-4326-ad5d-3af1af12b593\") " pod="openshift-ingress-canary/ingress-canary-gm4kb" Apr 23 17:54:53.786967 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.786943 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9crj\" (UniqueName: \"kubernetes.io/projected/158cc267-e1dc-48e1-90d2-dba2495a9735-kube-api-access-g9crj\") pod \"dns-default-ptrbw\" (UID: \"158cc267-e1dc-48e1-90d2-dba2495a9735\") " pod="openshift-dns/dns-default-ptrbw" Apr 23 17:54:53.787250 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.787235 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp7ml\" (UniqueName: \"kubernetes.io/projected/b53d5724-71d1-441d-9546-a103c6736771-kube-api-access-gp7ml\") pod \"cluster-samples-operator-6dc5bdb6b4-6dnld\" (UID: \"b53d5724-71d1-441d-9546-a103c6736771\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6dnld" Apr 23 17:54:53.909508 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.909477 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-g4wkz" Apr 23 17:54:53.930319 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.930269 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-bfm5m" Apr 23 17:54:53.943212 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.943186 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-9d4b6777b-5kpdc" Apr 23 17:54:53.949947 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.949921 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-8894fc9bd-pchtp" Apr 23 17:54:53.962611 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.962577 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-585dfdc468-kfcjl" Apr 23 17:54:53.966223 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:53.966153 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-fqsnm" Apr 23 17:54:54.120714 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:54.120656 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-g4wkz"] Apr 23 17:54:54.124702 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:54:54.124668 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod549ece9d_4598_441f_a940_cecc154fbf7e.slice/crio-e463b5fa4e5484059b5896d391abea10c1ae3bef41b8517ee7fe7438038bf05c WatchSource:0}: Error finding container e463b5fa4e5484059b5896d391abea10c1ae3bef41b8517ee7fe7438038bf05c: Status 404 returned error can't find the container with id e463b5fa4e5484059b5896d391abea10c1ae3bef41b8517ee7fe7438038bf05c Apr 23 17:54:54.146425 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:54.146398 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-bfm5m"] Apr 23 17:54:54.153903 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:54:54.153875 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf076eb4_c3f3_4cbf_8cee_a735d1572b5b.slice/crio-497513c61b49f31f8b4ef29fc29fc76581c094307cd69b8178830898e48a33fe WatchSource:0}: Error finding container 497513c61b49f31f8b4ef29fc29fc76581c094307cd69b8178830898e48a33fe: Status 404 returned error can't find the container with id 497513c61b49f31f8b4ef29fc29fc76581c094307cd69b8178830898e48a33fe Apr 23 17:54:54.171781 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:54.171750 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-8894fc9bd-pchtp"] Apr 23 17:54:54.175823 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:54:54.175794 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod790bfe6f_76d8_43c6_a545_a921f86e66cd.slice/crio-ffa7b6b99a1afc19ce2206cd1aac0e8cd721043241d823958f951d5540c000c1 WatchSource:0}: Error finding container ffa7b6b99a1afc19ce2206cd1aac0e8cd721043241d823958f951d5540c000c1: Status 404 returned error can't find the container with id ffa7b6b99a1afc19ce2206cd1aac0e8cd721043241d823958f951d5540c000c1 Apr 23 17:54:54.178001 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:54.177975 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-9d4b6777b-5kpdc"] Apr 23 17:54:54.182079 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:54:54.182055 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9334253b_6eff_4ad7_9cc7_5d96bdb994ad.slice/crio-aafb39e62bdeae531688bd8bf4535837ca5e7dc8ac581e0ab49504d745cff89f WatchSource:0}: Error finding container aafb39e62bdeae531688bd8bf4535837ca5e7dc8ac581e0ab49504d745cff89f: Status 404 returned error can't find the container with id aafb39e62bdeae531688bd8bf4535837ca5e7dc8ac581e0ab49504d745cff89f Apr 23 17:54:54.209582 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:54.209558 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-d6fc45fc5-fqsnm"] Apr 23 17:54:54.210601 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:54.210582 2566 kubelet.go:2544] "SyncLoop UPDATE" 
source="api" pods=["openshift-insights/insights-operator-585dfdc468-kfcjl"] Apr 23 17:54:54.211841 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:54:54.211813 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36169332_5c35_4e99_b318_65e24dfcc370.slice/crio-ad007a530feb2f76d32d5febb646f9be5b70028054ceb07bc68ac2f20a1b264d WatchSource:0}: Error finding container ad007a530feb2f76d32d5febb646f9be5b70028054ceb07bc68ac2f20a1b264d: Status 404 returned error can't find the container with id ad007a530feb2f76d32d5febb646f9be5b70028054ceb07bc68ac2f20a1b264d Apr 23 17:54:54.212371 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:54:54.212350 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd76c0f6_b46d_43a0_a71f_55a695fd6d99.slice/crio-9d6faffb5432586a9aa3d775a22e6ec3832b08c416092fa9707586c843560d18 WatchSource:0}: Error finding container 9d6faffb5432586a9aa3d775a22e6ec3832b08c416092fa9707586c843560d18: Status 404 returned error can't find the container with id 9d6faffb5432586a9aa3d775a22e6ec3832b08c416092fa9707586c843560d18 Apr 23 17:54:54.277527 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:54.277440 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/fc849c85-296b-4ebd-9bd4-27f9edfd3785-networking-console-plugin-cert\") pod \"networking-console-plugin-cb95c66f6-jwhmv\" (UID: \"fc849c85-296b-4ebd-9bd4-27f9edfd3785\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-jwhmv" Apr 23 17:54:54.277527 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:54.277491 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-registry-tls\") pod \"image-registry-7fb885f848-mqdhm\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" Apr 23 17:54:54.277734 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:54.277533 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/1992d43a-7589-4ec9-b815-8a2c284b237c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-75587bd455-bl4fl\" (UID: \"1992d43a-7589-4ec9-b815-8a2c284b237c\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-bl4fl" Apr 23 17:54:54.277734 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:54.277561 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/72010597-3b11-4326-ad5d-3af1af12b593-cert\") pod \"ingress-canary-gm4kb\" (UID: \"72010597-3b11-4326-ad5d-3af1af12b593\") " pod="openshift-ingress-canary/ingress-canary-gm4kb" Apr 23 17:54:54.277734 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:54.277585 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/274b9ba8-597e-49dd-9ba0-e1243dc7b259-metrics-certs\") pod \"router-default-54ff9bfc64-gddsn\" (UID: \"274b9ba8-597e-49dd-9ba0-e1243dc7b259\") " pod="openshift-ingress/router-default-54ff9bfc64-gddsn" Apr 23 17:54:54.277734 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:54.277608 2566 secret.go:189] Couldn't get secret 
openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 23 17:54:54.277734 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:54.277670 2566 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Apr 23 17:54:54.277734 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:54.277683 2566 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found Apr 23 17:54:54.277734 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:54.277688 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc849c85-296b-4ebd-9bd4-27f9edfd3785-networking-console-plugin-cert podName:fc849c85-296b-4ebd-9bd4-27f9edfd3785 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:55.277668049 +0000 UTC m=+154.572946641 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/fc849c85-296b-4ebd-9bd4-27f9edfd3785-networking-console-plugin-cert") pod "networking-console-plugin-cb95c66f6-jwhmv" (UID: "fc849c85-296b-4ebd-9bd4-27f9edfd3785") : secret "networking-console-plugin-cert" not found Apr 23 17:54:54.277734 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:54.277674 2566 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 23 17:54:54.277734 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:54.277704 2566 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 23 17:54:54.277734 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:54.277724 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/274b9ba8-597e-49dd-9ba0-e1243dc7b259-metrics-certs podName:274b9ba8-597e-49dd-9ba0-e1243dc7b259 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:55.277708542 +0000 UTC m=+154.572987118 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/274b9ba8-597e-49dd-9ba0-e1243dc7b259-metrics-certs") pod "router-default-54ff9bfc64-gddsn" (UID: "274b9ba8-597e-49dd-9ba0-e1243dc7b259") : secret "router-metrics-certs-default" not found Apr 23 17:54:54.277734 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:54.277710 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/158cc267-e1dc-48e1-90d2-dba2495a9735-metrics-tls\") pod \"dns-default-ptrbw\" (UID: \"158cc267-e1dc-48e1-90d2-dba2495a9735\") " pod="openshift-dns/dns-default-ptrbw" Apr 23 17:54:54.277734 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:54.277741 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72010597-3b11-4326-ad5d-3af1af12b593-cert podName:72010597-3b11-4326-ad5d-3af1af12b593 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:55.27773323 +0000 UTC m=+154.573011805 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/72010597-3b11-4326-ad5d-3af1af12b593-cert") pod "ingress-canary-gm4kb" (UID: "72010597-3b11-4326-ad5d-3af1af12b593") : secret "canary-serving-cert" not found Apr 23 17:54:54.277734 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:54.277726 2566 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-7fb885f848-mqdhm: secret "image-registry-tls" not found Apr 23 17:54:54.278207 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:54.277751 2566 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 23 17:54:54.278207 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:54.277757 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1992d43a-7589-4ec9-b815-8a2c284b237c-cluster-monitoring-operator-tls podName:1992d43a-7589-4ec9-b815-8a2c284b237c nodeName:}" failed. No retries permitted until 2026-04-23 17:54:55.277750362 +0000 UTC m=+154.573028937 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/1992d43a-7589-4ec9-b815-8a2c284b237c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-75587bd455-bl4fl" (UID: "1992d43a-7589-4ec9-b815-8a2c284b237c") : secret "cluster-monitoring-operator-tls" not found Apr 23 17:54:54.278207 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:54.277787 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/158cc267-e1dc-48e1-90d2-dba2495a9735-metrics-tls podName:158cc267-e1dc-48e1-90d2-dba2495a9735 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:55.277774073 +0000 UTC m=+154.573052649 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/158cc267-e1dc-48e1-90d2-dba2495a9735-metrics-tls") pod "dns-default-ptrbw" (UID: "158cc267-e1dc-48e1-90d2-dba2495a9735") : secret "dns-default-metrics-tls" not found Apr 23 17:54:54.278207 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:54.277806 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-registry-tls podName:a7cbc07c-c629-4c31-a456-4f9bf5b328f7 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:55.277800777 +0000 UTC m=+154.573079351 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-registry-tls") pod "image-registry-7fb885f848-mqdhm" (UID: "a7cbc07c-c629-4c31-a456-4f9bf5b328f7") : secret "image-registry-tls" not found Apr 23 17:54:54.278207 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:54.277826 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/274b9ba8-597e-49dd-9ba0-e1243dc7b259-service-ca-bundle\") pod \"router-default-54ff9bfc64-gddsn\" (UID: \"274b9ba8-597e-49dd-9ba0-e1243dc7b259\") " pod="openshift-ingress/router-default-54ff9bfc64-gddsn" Apr 23 17:54:54.278207 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:54.277855 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b53d5724-71d1-441d-9546-a103c6736771-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-6dnld\" (UID: \"b53d5724-71d1-441d-9546-a103c6736771\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6dnld" Apr 23 17:54:54.278207 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:54.277930 2566 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 23 17:54:54.278207 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:54.277966 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b53d5724-71d1-441d-9546-a103c6736771-samples-operator-tls podName:b53d5724-71d1-441d-9546-a103c6736771 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:55.277957236 +0000 UTC m=+154.573235810 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/b53d5724-71d1-441d-9546-a103c6736771-samples-operator-tls") pod "cluster-samples-operator-6dc5bdb6b4-6dnld" (UID: "b53d5724-71d1-441d-9546-a103c6736771") : secret "samples-operator-tls" not found Apr 23 17:54:54.278207 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:54.277980 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/274b9ba8-597e-49dd-9ba0-e1243dc7b259-service-ca-bundle podName:274b9ba8-597e-49dd-9ba0-e1243dc7b259 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:55.277973447 +0000 UTC m=+154.573252023 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/274b9ba8-597e-49dd-9ba0-e1243dc7b259-service-ca-bundle") pod "router-default-54ff9bfc64-gddsn" (UID: "274b9ba8-597e-49dd-9ba0-e1243dc7b259") : configmap references non-existent config key: service-ca.crt Apr 23 17:54:54.657065 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:54.656989 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-585dfdc468-kfcjl" event={"ID":"dd76c0f6-b46d-43a0-a71f-55a695fd6d99","Type":"ContainerStarted","Data":"9d6faffb5432586a9aa3d775a22e6ec3832b08c416092fa9707586c843560d18"} Apr 23 17:54:54.659229 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:54.659165 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-9d4b6777b-5kpdc" event={"ID":"9334253b-6eff-4ad7-9cc7-5d96bdb994ad","Type":"ContainerStarted","Data":"aafb39e62bdeae531688bd8bf4535837ca5e7dc8ac581e0ab49504d745cff89f"} Apr 23 17:54:54.661802 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:54.661735 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-g4wkz" event={"ID":"549ece9d-4598-441f-a940-cecc154fbf7e","Type":"ContainerStarted","Data":"e463b5fa4e5484059b5896d391abea10c1ae3bef41b8517ee7fe7438038bf05c"} Apr 23 17:54:54.663240 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:54.663213 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-fqsnm" event={"ID":"36169332-5c35-4e99-b318-65e24dfcc370","Type":"ContainerStarted","Data":"ad007a530feb2f76d32d5febb646f9be5b70028054ceb07bc68ac2f20a1b264d"} Apr 23 17:54:54.665062 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:54.664999 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-bfm5m" event={"ID":"df076eb4-c3f3-4cbf-8cee-a735d1572b5b","Type":"ContainerStarted","Data":"497513c61b49f31f8b4ef29fc29fc76581c094307cd69b8178830898e48a33fe"} Apr 23 17:54:54.667678 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:54.667655 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-8894fc9bd-pchtp" event={"ID":"790bfe6f-76d8-43c6-a545-a921f86e66cd","Type":"ContainerStarted","Data":"ffa7b6b99a1afc19ce2206cd1aac0e8cd721043241d823958f951d5540c000c1"} Apr 23 17:54:55.289891 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:55.289024 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/72010597-3b11-4326-ad5d-3af1af12b593-cert\") pod \"ingress-canary-gm4kb\" (UID: \"72010597-3b11-4326-ad5d-3af1af12b593\") " pod="openshift-ingress-canary/ingress-canary-gm4kb" Apr 23 17:54:55.289891 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:55.289072 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/274b9ba8-597e-49dd-9ba0-e1243dc7b259-metrics-certs\") pod \"router-default-54ff9bfc64-gddsn\" (UID: \"274b9ba8-597e-49dd-9ba0-e1243dc7b259\") " pod="openshift-ingress/router-default-54ff9bfc64-gddsn" Apr 23 17:54:55.289891 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:55.289137 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/158cc267-e1dc-48e1-90d2-dba2495a9735-metrics-tls\") pod \"dns-default-ptrbw\" (UID: \"158cc267-e1dc-48e1-90d2-dba2495a9735\") " pod="openshift-dns/dns-default-ptrbw" Apr 23 17:54:55.289891 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:55.289176 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/274b9ba8-597e-49dd-9ba0-e1243dc7b259-service-ca-bundle\") pod \"router-default-54ff9bfc64-gddsn\" (UID: \"274b9ba8-597e-49dd-9ba0-e1243dc7b259\") " pod="openshift-ingress/router-default-54ff9bfc64-gddsn" Apr 23 17:54:55.289891 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:55.289207 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b53d5724-71d1-441d-9546-a103c6736771-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-6dnld\" (UID: \"b53d5724-71d1-441d-9546-a103c6736771\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6dnld" Apr 23 17:54:55.289891 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:55.289236 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/fc849c85-296b-4ebd-9bd4-27f9edfd3785-networking-console-plugin-cert\") pod \"networking-console-plugin-cb95c66f6-jwhmv\" (UID: \"fc849c85-296b-4ebd-9bd4-27f9edfd3785\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-jwhmv" Apr 23 17:54:55.289891 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:55.289287 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-registry-tls\") pod \"image-registry-7fb885f848-mqdhm\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" Apr 23 17:54:55.289891 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:55.289497 2566 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found Apr 23 17:54:55.289891 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:55.289518 2566 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 23 17:54:55.289891 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:55.289557 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/274b9ba8-597e-49dd-9ba0-e1243dc7b259-metrics-certs podName:274b9ba8-597e-49dd-9ba0-e1243dc7b259 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:57.289538485 +0000 UTC m=+156.584817066 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/274b9ba8-597e-49dd-9ba0-e1243dc7b259-metrics-certs") pod "router-default-54ff9bfc64-gddsn" (UID: "274b9ba8-597e-49dd-9ba0-e1243dc7b259") : secret "router-metrics-certs-default" not found Apr 23 17:54:55.289891 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:55.289581 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b53d5724-71d1-441d-9546-a103c6736771-samples-operator-tls podName:b53d5724-71d1-441d-9546-a103c6736771 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:57.289563693 +0000 UTC m=+156.584842275 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/b53d5724-71d1-441d-9546-a103c6736771-samples-operator-tls") pod "cluster-samples-operator-6dc5bdb6b4-6dnld" (UID: "b53d5724-71d1-441d-9546-a103c6736771") : secret "samples-operator-tls" not found Apr 23 17:54:55.289891 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:55.289653 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/274b9ba8-597e-49dd-9ba0-e1243dc7b259-service-ca-bundle podName:274b9ba8-597e-49dd-9ba0-e1243dc7b259 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:57.289642257 +0000 UTC m=+156.584920838 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/274b9ba8-597e-49dd-9ba0-e1243dc7b259-service-ca-bundle") pod "router-default-54ff9bfc64-gddsn" (UID: "274b9ba8-597e-49dd-9ba0-e1243dc7b259") : configmap references non-existent config key: service-ca.crt Apr 23 17:54:55.289891 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:55.289707 2566 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 23 17:54:55.289891 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:55.289739 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/158cc267-e1dc-48e1-90d2-dba2495a9735-metrics-tls podName:158cc267-e1dc-48e1-90d2-dba2495a9735 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:57.289729325 +0000 UTC m=+156.585007906 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/158cc267-e1dc-48e1-90d2-dba2495a9735-metrics-tls") pod "dns-default-ptrbw" (UID: "158cc267-e1dc-48e1-90d2-dba2495a9735") : secret "dns-default-metrics-tls" not found Apr 23 17:54:55.289891 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:55.289792 2566 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 23 17:54:55.290805 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:55.289820 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72010597-3b11-4326-ad5d-3af1af12b593-cert podName:72010597-3b11-4326-ad5d-3af1af12b593 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:57.289809611 +0000 UTC m=+156.585088186 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/72010597-3b11-4326-ad5d-3af1af12b593-cert") pod "ingress-canary-gm4kb" (UID: "72010597-3b11-4326-ad5d-3af1af12b593") : secret "canary-serving-cert" not found Apr 23 17:54:55.290805 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:55.289868 2566 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 23 17:54:55.290805 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:55.289897 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc849c85-296b-4ebd-9bd4-27f9edfd3785-networking-console-plugin-cert podName:fc849c85-296b-4ebd-9bd4-27f9edfd3785 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:57.289888004 +0000 UTC m=+156.585166582 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/fc849c85-296b-4ebd-9bd4-27f9edfd3785-networking-console-plugin-cert") pod "networking-console-plugin-cb95c66f6-jwhmv" (UID: "fc849c85-296b-4ebd-9bd4-27f9edfd3785") : secret "networking-console-plugin-cert" not found Apr 23 17:54:55.290805 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:55.290056 2566 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 23 17:54:55.290805 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:55.290072 2566 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-7fb885f848-mqdhm: secret "image-registry-tls" not found Apr 23 17:54:55.290805 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:55.290111 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-registry-tls podName:a7cbc07c-c629-4c31-a456-4f9bf5b328f7 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:57.290098776 +0000 UTC m=+156.585377558 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-registry-tls") pod "image-registry-7fb885f848-mqdhm" (UID: "a7cbc07c-c629-4c31-a456-4f9bf5b328f7") : secret "image-registry-tls" not found Apr 23 17:54:55.291410 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:55.291223 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/1992d43a-7589-4ec9-b815-8a2c284b237c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-75587bd455-bl4fl\" (UID: \"1992d43a-7589-4ec9-b815-8a2c284b237c\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-bl4fl" Apr 23 17:54:55.291410 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:55.291344 2566 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Apr 23 17:54:55.291410 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:55.291388 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1992d43a-7589-4ec9-b815-8a2c284b237c-cluster-monitoring-operator-tls podName:1992d43a-7589-4ec9-b815-8a2c284b237c nodeName:}" failed. No retries permitted until 2026-04-23 17:54:57.291374855 +0000 UTC m=+156.586653433 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/1992d43a-7589-4ec9-b815-8a2c284b237c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-75587bd455-bl4fl" (UID: "1992d43a-7589-4ec9-b815-8a2c284b237c") : secret "cluster-monitoring-operator-tls" not found Apr 23 17:54:57.254786 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:57.254761 2566 scope.go:117] "RemoveContainer" containerID="6a71ba03dee221f9bd2a17cfb6108fb69eb3264565066869351d024e28c61fc0" Apr 23 17:54:57.255269 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:57.254963 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_openshift-machine-config-operator(7fc0473024b4c48d914a6628102ac7a2)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal" podUID="7fc0473024b4c48d914a6628102ac7a2" Apr 23 17:54:57.308719 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:57.308685 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/274b9ba8-597e-49dd-9ba0-e1243dc7b259-service-ca-bundle\") pod \"router-default-54ff9bfc64-gddsn\" (UID: \"274b9ba8-597e-49dd-9ba0-e1243dc7b259\") " pod="openshift-ingress/router-default-54ff9bfc64-gddsn" Apr 23 17:54:57.308887 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:57.308730 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b53d5724-71d1-441d-9546-a103c6736771-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-6dnld\" (UID: \"b53d5724-71d1-441d-9546-a103c6736771\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6dnld" Apr 23 17:54:57.308887 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:57.308761 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/fc849c85-296b-4ebd-9bd4-27f9edfd3785-networking-console-plugin-cert\") pod \"networking-console-plugin-cb95c66f6-jwhmv\" (UID: \"fc849c85-296b-4ebd-9bd4-27f9edfd3785\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-jwhmv" Apr 23 17:54:57.308887 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:57.308808 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-registry-tls\") pod \"image-registry-7fb885f848-mqdhm\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" Apr 23 17:54:57.308887 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:57.308851 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/274b9ba8-597e-49dd-9ba0-e1243dc7b259-service-ca-bundle podName:274b9ba8-597e-49dd-9ba0-e1243dc7b259 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:01.308833665 +0000 UTC m=+160.604112240 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/274b9ba8-597e-49dd-9ba0-e1243dc7b259-service-ca-bundle") pod "router-default-54ff9bfc64-gddsn" (UID: "274b9ba8-597e-49dd-9ba0-e1243dc7b259") : configmap references non-existent config key: service-ca.crt Apr 23 17:54:57.309108 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:57.308921 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/1992d43a-7589-4ec9-b815-8a2c284b237c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-75587bd455-bl4fl\" (UID: \"1992d43a-7589-4ec9-b815-8a2c284b237c\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-bl4fl" Apr 23 17:54:57.309108 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:57.308928 2566 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 23 17:54:57.309108 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:57.308971 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/72010597-3b11-4326-ad5d-3af1af12b593-cert\") pod \"ingress-canary-gm4kb\" (UID: \"72010597-3b11-4326-ad5d-3af1af12b593\") " pod="openshift-ingress-canary/ingress-canary-gm4kb" Apr 23 17:54:57.309108 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:57.308994 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc849c85-296b-4ebd-9bd4-27f9edfd3785-networking-console-plugin-cert podName:fc849c85-296b-4ebd-9bd4-27f9edfd3785 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:01.308977581 +0000 UTC m=+160.604256176 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/fc849c85-296b-4ebd-9bd4-27f9edfd3785-networking-console-plugin-cert") pod "networking-console-plugin-cb95c66f6-jwhmv" (UID: "fc849c85-296b-4ebd-9bd4-27f9edfd3785") : secret "networking-console-plugin-cert" not found Apr 23 17:54:57.309108 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:57.308996 2566 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 23 17:54:57.309108 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:57.309012 2566 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-7fb885f848-mqdhm: secret "image-registry-tls" not found Apr 23 17:54:57.309108 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:57.309032 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/274b9ba8-597e-49dd-9ba0-e1243dc7b259-metrics-certs\") pod \"router-default-54ff9bfc64-gddsn\" (UID: \"274b9ba8-597e-49dd-9ba0-e1243dc7b259\") " pod="openshift-ingress/router-default-54ff9bfc64-gddsn" Apr 23 17:54:57.309108 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:57.309041 2566 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 23 17:54:57.309108 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:57.309045 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-registry-tls podName:a7cbc07c-c629-4c31-a456-4f9bf5b328f7 nodeName:}" failed. 
No retries permitted until 2026-04-23 17:55:01.309034558 +0000 UTC m=+160.604313169 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-registry-tls") pod "image-registry-7fb885f848-mqdhm" (UID: "a7cbc07c-c629-4c31-a456-4f9bf5b328f7") : secret "image-registry-tls" not found Apr 23 17:54:57.309108 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:57.308928 2566 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 23 17:54:57.309108 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:57.309077 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72010597-3b11-4326-ad5d-3af1af12b593-cert podName:72010597-3b11-4326-ad5d-3af1af12b593 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:01.309065931 +0000 UTC m=+160.604344507 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/72010597-3b11-4326-ad5d-3af1af12b593-cert") pod "ingress-canary-gm4kb" (UID: "72010597-3b11-4326-ad5d-3af1af12b593") : secret "canary-serving-cert" not found Apr 23 17:54:57.309108 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:57.309085 2566 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found Apr 23 17:54:57.309108 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:57.309098 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b53d5724-71d1-441d-9546-a103c6736771-samples-operator-tls podName:b53d5724-71d1-441d-9546-a103c6736771 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:01.309089366 +0000 UTC m=+160.604367950 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/b53d5724-71d1-441d-9546-a103c6736771-samples-operator-tls") pod "cluster-samples-operator-6dc5bdb6b4-6dnld" (UID: "b53d5724-71d1-441d-9546-a103c6736771") : secret "samples-operator-tls" not found Apr 23 17:54:57.309108 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:57.309114 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/274b9ba8-597e-49dd-9ba0-e1243dc7b259-metrics-certs podName:274b9ba8-597e-49dd-9ba0-e1243dc7b259 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:01.309105031 +0000 UTC m=+160.604383605 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/274b9ba8-597e-49dd-9ba0-e1243dc7b259-metrics-certs") pod "router-default-54ff9bfc64-gddsn" (UID: "274b9ba8-597e-49dd-9ba0-e1243dc7b259") : secret "router-metrics-certs-default" not found Apr 23 17:54:57.309834 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:57.309135 2566 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Apr 23 17:54:57.309834 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:57.309167 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/158cc267-e1dc-48e1-90d2-dba2495a9735-metrics-tls\") pod \"dns-default-ptrbw\" (UID: \"158cc267-e1dc-48e1-90d2-dba2495a9735\") " pod="openshift-dns/dns-default-ptrbw" Apr 23 17:54:57.309834 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:57.309209 2566 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 23 17:54:57.309834 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:57.309227 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1992d43a-7589-4ec9-b815-8a2c284b237c-cluster-monitoring-operator-tls podName:1992d43a-7589-4ec9-b815-8a2c284b237c nodeName:}" failed. No retries permitted until 2026-04-23 17:55:01.309216793 +0000 UTC m=+160.604495375 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/1992d43a-7589-4ec9-b815-8a2c284b237c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-75587bd455-bl4fl" (UID: "1992d43a-7589-4ec9-b815-8a2c284b237c") : secret "cluster-monitoring-operator-tls" not found Apr 23 17:54:57.309834 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:54:57.309244 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/158cc267-e1dc-48e1-90d2-dba2495a9735-metrics-tls podName:158cc267-e1dc-48e1-90d2-dba2495a9735 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:01.309235955 +0000 UTC m=+160.604514551 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/158cc267-e1dc-48e1-90d2-dba2495a9735-metrics-tls") pod "dns-default-ptrbw" (UID: "158cc267-e1dc-48e1-90d2-dba2495a9735") : secret "dns-default-metrics-tls" not found Apr 23 17:54:59.686918 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:59.686868 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-g4wkz" event={"ID":"549ece9d-4598-441f-a940-cecc154fbf7e","Type":"ContainerStarted","Data":"379cd11db8f1b8ce5cdae26649d6ed62262a876572c56ac0ca09867acb482af3"} Apr 23 17:54:59.688297 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:59.688251 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-fqsnm" event={"ID":"36169332-5c35-4e99-b318-65e24dfcc370","Type":"ContainerStarted","Data":"a8fff8d9a157aff008b05e4a2f19229c46bc41bb13f65e45e8db067667bb2bac"} Apr 23 17:54:59.689905 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:59.689593 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-bfm5m" event={"ID":"df076eb4-c3f3-4cbf-8cee-a735d1572b5b","Type":"ContainerStarted","Data":"f2d88999a45ceecaac5ec77426c1944753bd6a82c04d79ac9eb53f1bd08d389c"} Apr 23 17:54:59.690906 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:59.690865 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-8894fc9bd-pchtp" event={"ID":"790bfe6f-76d8-43c6-a545-a921f86e66cd","Type":"ContainerStarted","Data":"c2b573560310aa10c6262720be6634bdf04b9a5aa5837c6d8e90bb11cd88da70"} Apr 23 17:54:59.692440 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:59.692416 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-585dfdc468-kfcjl" event={"ID":"dd76c0f6-b46d-43a0-a71f-55a695fd6d99","Type":"ContainerStarted","Data":"1b8a9ffb490eed7546c6767f6d70b1770ac1ef0f33c3ba11dd66c5a066a22c23"} Apr 23 17:54:59.694052 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:59.694032 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-5kpdc_9334253b-6eff-4ad7-9cc7-5d96bdb994ad/console-operator/0.log" Apr 23 17:54:59.694193 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:59.694070 2566 generic.go:358] "Generic (PLEG): container finished" podID="9334253b-6eff-4ad7-9cc7-5d96bdb994ad" containerID="2574bb7a364c5535c27a6d6ec1052d48b55f05cbf474d5626f636be0331635c2" exitCode=255 Apr 23 17:54:59.694193 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:59.694099 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-9d4b6777b-5kpdc" event={"ID":"9334253b-6eff-4ad7-9cc7-5d96bdb994ad","Type":"ContainerDied","Data":"2574bb7a364c5535c27a6d6ec1052d48b55f05cbf474d5626f636be0331635c2"} Apr 23 17:54:59.694411 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:59.694395 2566 scope.go:117] "RemoveContainer" containerID="2574bb7a364c5535c27a6d6ec1052d48b55f05cbf474d5626f636be0331635c2" Apr 23 17:54:59.714197 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:59.714152 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-g4wkz" podStartSLOduration=43.564918604 podStartE2EDuration="48.714137828s" podCreationTimestamp="2026-04-23 17:54:11 
+0000 UTC" firstStartedPulling="2026-04-23 17:54:54.126693588 +0000 UTC m=+153.421972164" lastFinishedPulling="2026-04-23 17:54:59.275912797 +0000 UTC m=+158.571191388" observedRunningTime="2026-04-23 17:54:59.714090316 +0000 UTC m=+159.009368927" watchObservedRunningTime="2026-04-23 17:54:59.714137828 +0000 UTC m=+159.009416427" Apr 23 17:54:59.743127 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:59.743066 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-bfm5m" podStartSLOduration=43.594551256 podStartE2EDuration="48.743050875s" podCreationTimestamp="2026-04-23 17:54:11 +0000 UTC" firstStartedPulling="2026-04-23 17:54:54.156600764 +0000 UTC m=+153.451879346" lastFinishedPulling="2026-04-23 17:54:59.30510039 +0000 UTC m=+158.600378965" observedRunningTime="2026-04-23 17:54:59.742372876 +0000 UTC m=+159.037651476" watchObservedRunningTime="2026-04-23 17:54:59.743050875 +0000 UTC m=+159.038329474" Apr 23 17:54:59.848286 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:59.848223 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-source-8894fc9bd-pchtp" podStartSLOduration=43.599127499 podStartE2EDuration="48.848203883s" podCreationTimestamp="2026-04-23 17:54:11 +0000 UTC" firstStartedPulling="2026-04-23 17:54:54.177838595 +0000 UTC m=+153.473117170" lastFinishedPulling="2026-04-23 17:54:59.426914973 +0000 UTC m=+158.722193554" observedRunningTime="2026-04-23 17:54:59.804099411 +0000 UTC m=+159.099378009" watchObservedRunningTime="2026-04-23 17:54:59.848203883 +0000 UTC m=+159.143482481" Apr 23 17:54:59.905167 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:59.905118 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-585dfdc468-kfcjl" podStartSLOduration=43.819485166 podStartE2EDuration="48.905103328s" podCreationTimestamp="2026-04-23 17:54:11 +0000 UTC" firstStartedPulling="2026-04-23 17:54:54.21406295 +0000 UTC m=+153.509341527" lastFinishedPulling="2026-04-23 17:54:59.299681097 +0000 UTC m=+158.594959689" observedRunningTime="2026-04-23 17:54:59.845196013 +0000 UTC m=+159.140474611" watchObservedRunningTime="2026-04-23 17:54:59.905103328 +0000 UTC m=+159.200381934" Apr 23 17:54:59.905383 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:54:59.905217 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-fqsnm" podStartSLOduration=43.819646756 podStartE2EDuration="48.905212161s" podCreationTimestamp="2026-04-23 17:54:11 +0000 UTC" firstStartedPulling="2026-04-23 17:54:54.213681392 +0000 UTC m=+153.508959967" lastFinishedPulling="2026-04-23 17:54:59.299246791 +0000 UTC m=+158.594525372" observedRunningTime="2026-04-23 17:54:59.903952949 +0000 UTC m=+159.199231546" watchObservedRunningTime="2026-04-23 17:54:59.905212161 +0000 UTC m=+159.200490801" Apr 23 17:55:00.572820 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:00.572779 2566 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Apr 23 17:55:00.698862 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:00.698786 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-5kpdc_9334253b-6eff-4ad7-9cc7-5d96bdb994ad/console-operator/1.log" Apr 23 17:55:00.702859 ip-10-0-136-172 
kubenswrapper[2566]: I0423 17:55:00.702144 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-5kpdc_9334253b-6eff-4ad7-9cc7-5d96bdb994ad/console-operator/0.log" Apr 23 17:55:00.702859 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:00.702186 2566 generic.go:358] "Generic (PLEG): container finished" podID="9334253b-6eff-4ad7-9cc7-5d96bdb994ad" containerID="d114f81f9380b97243207b63728207e17fa81c3d7cf1460ec7110e4022a82fe6" exitCode=255 Apr 23 17:55:00.702859 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:00.702839 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-9d4b6777b-5kpdc" event={"ID":"9334253b-6eff-4ad7-9cc7-5d96bdb994ad","Type":"ContainerDied","Data":"d114f81f9380b97243207b63728207e17fa81c3d7cf1460ec7110e4022a82fe6"} Apr 23 17:55:00.703106 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:00.702856 2566 scope.go:117] "RemoveContainer" containerID="d114f81f9380b97243207b63728207e17fa81c3d7cf1460ec7110e4022a82fe6" Apr 23 17:55:00.703158 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:00.703107 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator pod=console-operator-9d4b6777b-5kpdc_openshift-console-operator(9334253b-6eff-4ad7-9cc7-5d96bdb994ad)\"" pod="openshift-console-operator/console-operator-9d4b6777b-5kpdc" podUID="9334253b-6eff-4ad7-9cc7-5d96bdb994ad" Apr 23 17:55:00.703343 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:00.703224 2566 scope.go:117] "RemoveContainer" containerID="2574bb7a364c5535c27a6d6ec1052d48b55f05cbf474d5626f636be0331635c2" Apr 23 17:55:00.767533 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:00.767500 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-74bb7799d9-mgsjw"] Apr 23 17:55:00.770639 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:00.770614 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-mgsjw" Apr 23 17:55:00.789929 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:00.789895 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Apr 23 17:55:00.790132 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:00.789929 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Apr 23 17:55:00.790132 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:00.789929 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-ztb7m\"" Apr 23 17:55:00.797446 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:00.797415 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-74bb7799d9-mgsjw"] Apr 23 17:55:00.846081 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:00.845478 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc8hq\" (UniqueName: \"kubernetes.io/projected/45d25647-0ba1-4d11-9101-913fb12b43ac-kube-api-access-gc8hq\") pod \"migrator-74bb7799d9-mgsjw\" (UID: \"45d25647-0ba1-4d11-9101-913fb12b43ac\") " pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-mgsjw" Apr 23 17:55:00.946553 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:00.946524 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gc8hq\" (UniqueName: \"kubernetes.io/projected/45d25647-0ba1-4d11-9101-913fb12b43ac-kube-api-access-gc8hq\") pod \"migrator-74bb7799d9-mgsjw\" (UID: \"45d25647-0ba1-4d11-9101-913fb12b43ac\") " pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-mgsjw" Apr 23 17:55:00.965340 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:00.965256 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc8hq\" (UniqueName: \"kubernetes.io/projected/45d25647-0ba1-4d11-9101-913fb12b43ac-kube-api-access-gc8hq\") pod \"migrator-74bb7799d9-mgsjw\" (UID: \"45d25647-0ba1-4d11-9101-913fb12b43ac\") " pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-mgsjw" Apr 23 17:55:01.079425 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:01.079399 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-mgsjw" Apr 23 17:55:01.215721 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:01.215645 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-74bb7799d9-mgsjw"] Apr 23 17:55:01.218482 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:55:01.218457 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45d25647_0ba1_4d11_9101_913fb12b43ac.slice/crio-9e26330990bbf29145f09d488a13247d85afcc01b06cdf19f75151d0df72b61e WatchSource:0}: Error finding container 9e26330990bbf29145f09d488a13247d85afcc01b06cdf19f75151d0df72b61e: Status 404 returned error can't find the container with id 9e26330990bbf29145f09d488a13247d85afcc01b06cdf19f75151d0df72b61e Apr 23 17:55:01.349321 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:01.349271 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/1992d43a-7589-4ec9-b815-8a2c284b237c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-75587bd455-bl4fl\" (UID: \"1992d43a-7589-4ec9-b815-8a2c284b237c\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-bl4fl" Apr 23 17:55:01.349505 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:01.349334 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/72010597-3b11-4326-ad5d-3af1af12b593-cert\") pod \"ingress-canary-gm4kb\" (UID: \"72010597-3b11-4326-ad5d-3af1af12b593\") " pod="openshift-ingress-canary/ingress-canary-gm4kb" Apr 23 17:55:01.349505 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:01.349353 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/274b9ba8-597e-49dd-9ba0-e1243dc7b259-metrics-certs\") pod \"router-default-54ff9bfc64-gddsn\" (UID: \"274b9ba8-597e-49dd-9ba0-e1243dc7b259\") " pod="openshift-ingress/router-default-54ff9bfc64-gddsn" Apr 23 17:55:01.349505 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:01.349399 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/158cc267-e1dc-48e1-90d2-dba2495a9735-metrics-tls\") pod \"dns-default-ptrbw\" (UID: \"158cc267-e1dc-48e1-90d2-dba2495a9735\") " pod="openshift-dns/dns-default-ptrbw" Apr 23 17:55:01.349505 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:01.349426 2566 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Apr 23 17:55:01.349505 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:01.349439 2566 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found Apr 23 17:55:01.349505 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:01.349466 2566 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 23 17:55:01.349505 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:01.349433 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/274b9ba8-597e-49dd-9ba0-e1243dc7b259-service-ca-bundle\") pod \"router-default-54ff9bfc64-gddsn\" (UID: \"274b9ba8-597e-49dd-9ba0-e1243dc7b259\") " 
pod="openshift-ingress/router-default-54ff9bfc64-gddsn" Apr 23 17:55:01.349505 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:01.349429 2566 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 23 17:55:01.349505 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:01.349493 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1992d43a-7589-4ec9-b815-8a2c284b237c-cluster-monitoring-operator-tls podName:1992d43a-7589-4ec9-b815-8a2c284b237c nodeName:}" failed. No retries permitted until 2026-04-23 17:55:09.349472293 +0000 UTC m=+168.644750868 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/1992d43a-7589-4ec9-b815-8a2c284b237c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-75587bd455-bl4fl" (UID: "1992d43a-7589-4ec9-b815-8a2c284b237c") : secret "cluster-monitoring-operator-tls" not found Apr 23 17:55:01.349878 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:01.349530 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/274b9ba8-597e-49dd-9ba0-e1243dc7b259-metrics-certs podName:274b9ba8-597e-49dd-9ba0-e1243dc7b259 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:09.349516712 +0000 UTC m=+168.644795287 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/274b9ba8-597e-49dd-9ba0-e1243dc7b259-metrics-certs") pod "router-default-54ff9bfc64-gddsn" (UID: "274b9ba8-597e-49dd-9ba0-e1243dc7b259") : secret "router-metrics-certs-default" not found Apr 23 17:55:01.349878 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:01.349545 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/274b9ba8-597e-49dd-9ba0-e1243dc7b259-service-ca-bundle podName:274b9ba8-597e-49dd-9ba0-e1243dc7b259 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:09.349538502 +0000 UTC m=+168.644817077 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/274b9ba8-597e-49dd-9ba0-e1243dc7b259-service-ca-bundle") pod "router-default-54ff9bfc64-gddsn" (UID: "274b9ba8-597e-49dd-9ba0-e1243dc7b259") : configmap references non-existent config key: service-ca.crt Apr 23 17:55:01.349878 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:01.349557 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/158cc267-e1dc-48e1-90d2-dba2495a9735-metrics-tls podName:158cc267-e1dc-48e1-90d2-dba2495a9735 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:09.349552051 +0000 UTC m=+168.644830625 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/158cc267-e1dc-48e1-90d2-dba2495a9735-metrics-tls") pod "dns-default-ptrbw" (UID: "158cc267-e1dc-48e1-90d2-dba2495a9735") : secret "dns-default-metrics-tls" not found Apr 23 17:55:01.349878 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:01.349574 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72010597-3b11-4326-ad5d-3af1af12b593-cert podName:72010597-3b11-4326-ad5d-3af1af12b593 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:09.349568915 +0000 UTC m=+168.644847489 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/72010597-3b11-4326-ad5d-3af1af12b593-cert") pod "ingress-canary-gm4kb" (UID: "72010597-3b11-4326-ad5d-3af1af12b593") : secret "canary-serving-cert" not found Apr 23 17:55:01.349878 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:01.349598 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b53d5724-71d1-441d-9546-a103c6736771-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-6dnld\" (UID: \"b53d5724-71d1-441d-9546-a103c6736771\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6dnld" Apr 23 17:55:01.349878 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:01.349620 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/fc849c85-296b-4ebd-9bd4-27f9edfd3785-networking-console-plugin-cert\") pod \"networking-console-plugin-cb95c66f6-jwhmv\" (UID: \"fc849c85-296b-4ebd-9bd4-27f9edfd3785\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-jwhmv" Apr 23 17:55:01.349878 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:01.349669 2566 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 23 17:55:01.349878 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:01.349689 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc849c85-296b-4ebd-9bd4-27f9edfd3785-networking-console-plugin-cert podName:fc849c85-296b-4ebd-9bd4-27f9edfd3785 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:09.349683897 +0000 UTC m=+168.644962472 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/fc849c85-296b-4ebd-9bd4-27f9edfd3785-networking-console-plugin-cert") pod "networking-console-plugin-cb95c66f6-jwhmv" (UID: "fc849c85-296b-4ebd-9bd4-27f9edfd3785") : secret "networking-console-plugin-cert" not found Apr 23 17:55:01.349878 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:01.349698 2566 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 23 17:55:01.349878 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:01.349723 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-registry-tls\") pod \"image-registry-7fb885f848-mqdhm\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" Apr 23 17:55:01.349878 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:01.349753 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b53d5724-71d1-441d-9546-a103c6736771-samples-operator-tls podName:b53d5724-71d1-441d-9546-a103c6736771 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:09.349736988 +0000 UTC m=+168.645015563 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/b53d5724-71d1-441d-9546-a103c6736771-samples-operator-tls") pod "cluster-samples-operator-6dc5bdb6b4-6dnld" (UID: "b53d5724-71d1-441d-9546-a103c6736771") : secret "samples-operator-tls" not found Apr 23 17:55:01.349878 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:01.349795 2566 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 23 17:55:01.349878 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:01.349802 2566 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-7fb885f848-mqdhm: secret "image-registry-tls" not found Apr 23 17:55:01.350288 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:01.349838 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-registry-tls podName:a7cbc07c-c629-4c31-a456-4f9bf5b328f7 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:09.349831456 +0000 UTC m=+168.645110031 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-registry-tls") pod "image-registry-7fb885f848-mqdhm" (UID: "a7cbc07c-c629-4c31-a456-4f9bf5b328f7") : secret "image-registry-tls" not found Apr 23 17:55:01.710921 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:01.710780 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-mgsjw" event={"ID":"45d25647-0ba1-4d11-9101-913fb12b43ac","Type":"ContainerStarted","Data":"9e26330990bbf29145f09d488a13247d85afcc01b06cdf19f75151d0df72b61e"} Apr 23 17:55:01.712295 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:01.712271 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-5kpdc_9334253b-6eff-4ad7-9cc7-5d96bdb994ad/console-operator/1.log" Apr 23 17:55:01.712668 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:01.712646 2566 scope.go:117] "RemoveContainer" containerID="d114f81f9380b97243207b63728207e17fa81c3d7cf1460ec7110e4022a82fe6" Apr 23 17:55:01.712883 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:01.712854 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator pod=console-operator-9d4b6777b-5kpdc_openshift-console-operator(9334253b-6eff-4ad7-9cc7-5d96bdb994ad)\"" pod="openshift-console-operator/console-operator-9d4b6777b-5kpdc" podUID="9334253b-6eff-4ad7-9cc7-5d96bdb994ad" Apr 23 17:55:02.717163 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:02.717121 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-mgsjw" event={"ID":"45d25647-0ba1-4d11-9101-913fb12b43ac","Type":"ContainerStarted","Data":"01084853a473d9842beeb5289afe38815b027fb1d6ccb27f58c3ddb93e8454c2"} Apr 23 17:55:02.717163 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:02.717160 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-mgsjw" event={"ID":"45d25647-0ba1-4d11-9101-913fb12b43ac","Type":"ContainerStarted","Data":"2c69cc45547ba9cde9d814ed156e2300abdf0050f244d1362bf9ceb3398fbebf"} Apr 23 17:55:02.735645 ip-10-0-136-172 kubenswrapper[2566]: I0423 
17:55:02.735595 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-mgsjw" podStartSLOduration=1.921910021 podStartE2EDuration="2.735579223s" podCreationTimestamp="2026-04-23 17:55:00 +0000 UTC" firstStartedPulling="2026-04-23 17:55:01.220694774 +0000 UTC m=+160.515973352" lastFinishedPulling="2026-04-23 17:55:02.03436398 +0000 UTC m=+161.329642554" observedRunningTime="2026-04-23 17:55:02.73548155 +0000 UTC m=+162.030760149" watchObservedRunningTime="2026-04-23 17:55:02.735579223 +0000 UTC m=+162.030857820" Apr 23 17:55:03.751752 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:03.751720 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-865cb79987-sh4xf"] Apr 23 17:55:03.753792 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:03.753770 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-865cb79987-sh4xf" Apr 23 17:55:03.756674 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:03.756653 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Apr 23 17:55:03.756793 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:03.756690 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bvxcp\"" Apr 23 17:55:03.756866 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:03.756791 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Apr 23 17:55:03.756866 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:03.756794 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Apr 23 17:55:03.756866 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:03.756850 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Apr 23 17:55:03.766110 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:03.766084 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-865cb79987-sh4xf"] Apr 23 17:55:03.873549 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:03.873516 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/76e6d69b-bcae-4a6a-8fe5-9e3e4613820c-signing-key\") pod \"service-ca-865cb79987-sh4xf\" (UID: \"76e6d69b-bcae-4a6a-8fe5-9e3e4613820c\") " pod="openshift-service-ca/service-ca-865cb79987-sh4xf" Apr 23 17:55:03.873721 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:03.873578 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdcbl\" (UniqueName: \"kubernetes.io/projected/76e6d69b-bcae-4a6a-8fe5-9e3e4613820c-kube-api-access-rdcbl\") pod \"service-ca-865cb79987-sh4xf\" (UID: \"76e6d69b-bcae-4a6a-8fe5-9e3e4613820c\") " pod="openshift-service-ca/service-ca-865cb79987-sh4xf" Apr 23 17:55:03.873721 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:03.873700 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/76e6d69b-bcae-4a6a-8fe5-9e3e4613820c-signing-cabundle\") pod \"service-ca-865cb79987-sh4xf\" (UID: \"76e6d69b-bcae-4a6a-8fe5-9e3e4613820c\") " 
pod="openshift-service-ca/service-ca-865cb79987-sh4xf" Apr 23 17:55:03.880907 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:03.880887 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-msx9j_88671ae9-14c3-476e-98a0-61200eda94f5/dns-node-resolver/0.log" Apr 23 17:55:03.944205 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:03.944175 2566 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-9d4b6777b-5kpdc" Apr 23 17:55:03.944205 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:03.944207 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-9d4b6777b-5kpdc" Apr 23 17:55:03.944578 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:03.944565 2566 scope.go:117] "RemoveContainer" containerID="d114f81f9380b97243207b63728207e17fa81c3d7cf1460ec7110e4022a82fe6" Apr 23 17:55:03.944739 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:03.944722 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator pod=console-operator-9d4b6777b-5kpdc_openshift-console-operator(9334253b-6eff-4ad7-9cc7-5d96bdb994ad)\"" pod="openshift-console-operator/console-operator-9d4b6777b-5kpdc" podUID="9334253b-6eff-4ad7-9cc7-5d96bdb994ad" Apr 23 17:55:03.974581 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:03.974550 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/76e6d69b-bcae-4a6a-8fe5-9e3e4613820c-signing-key\") pod \"service-ca-865cb79987-sh4xf\" (UID: \"76e6d69b-bcae-4a6a-8fe5-9e3e4613820c\") " pod="openshift-service-ca/service-ca-865cb79987-sh4xf" Apr 23 17:55:03.974702 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:03.974632 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rdcbl\" (UniqueName: \"kubernetes.io/projected/76e6d69b-bcae-4a6a-8fe5-9e3e4613820c-kube-api-access-rdcbl\") pod \"service-ca-865cb79987-sh4xf\" (UID: \"76e6d69b-bcae-4a6a-8fe5-9e3e4613820c\") " pod="openshift-service-ca/service-ca-865cb79987-sh4xf" Apr 23 17:55:03.974763 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:03.974720 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/76e6d69b-bcae-4a6a-8fe5-9e3e4613820c-signing-cabundle\") pod \"service-ca-865cb79987-sh4xf\" (UID: \"76e6d69b-bcae-4a6a-8fe5-9e3e4613820c\") " pod="openshift-service-ca/service-ca-865cb79987-sh4xf" Apr 23 17:55:03.975348 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:03.975327 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/76e6d69b-bcae-4a6a-8fe5-9e3e4613820c-signing-cabundle\") pod \"service-ca-865cb79987-sh4xf\" (UID: \"76e6d69b-bcae-4a6a-8fe5-9e3e4613820c\") " pod="openshift-service-ca/service-ca-865cb79987-sh4xf" Apr 23 17:55:03.977516 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:03.977501 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/76e6d69b-bcae-4a6a-8fe5-9e3e4613820c-signing-key\") pod \"service-ca-865cb79987-sh4xf\" (UID: \"76e6d69b-bcae-4a6a-8fe5-9e3e4613820c\") " pod="openshift-service-ca/service-ca-865cb79987-sh4xf" Apr 23 17:55:03.983752 ip-10-0-136-172 
kubenswrapper[2566]: I0423 17:55:03.983731 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdcbl\" (UniqueName: \"kubernetes.io/projected/76e6d69b-bcae-4a6a-8fe5-9e3e4613820c-kube-api-access-rdcbl\") pod \"service-ca-865cb79987-sh4xf\" (UID: \"76e6d69b-bcae-4a6a-8fe5-9e3e4613820c\") " pod="openshift-service-ca/service-ca-865cb79987-sh4xf" Apr 23 17:55:04.062014 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:04.061987 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-865cb79987-sh4xf" Apr 23 17:55:04.186604 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:04.186573 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-865cb79987-sh4xf"] Apr 23 17:55:04.190613 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:55:04.190577 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76e6d69b_bcae_4a6a_8fe5_9e3e4613820c.slice/crio-afaa5963e767b5ec4278edf494e1eeca4f097da8b824081275af9751cb9bf354 WatchSource:0}: Error finding container afaa5963e767b5ec4278edf494e1eeca4f097da8b824081275af9751cb9bf354: Status 404 returned error can't find the container with id afaa5963e767b5ec4278edf494e1eeca4f097da8b824081275af9751cb9bf354 Apr 23 17:55:04.482155 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:04.482069 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-d4hwd_bd91136a-6313-4cae-bd06-a32a9ec8e0cb/node-ca/0.log" Apr 23 17:55:04.724046 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:04.724010 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-865cb79987-sh4xf" event={"ID":"76e6d69b-bcae-4a6a-8fe5-9e3e4613820c","Type":"ContainerStarted","Data":"86094d19900c0627bdc1780e2f0cb3e11edc8510be0b0b3be8bcdbe660f901f6"} Apr 23 17:55:04.724046 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:04.724044 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-865cb79987-sh4xf" event={"ID":"76e6d69b-bcae-4a6a-8fe5-9e3e4613820c","Type":"ContainerStarted","Data":"afaa5963e767b5ec4278edf494e1eeca4f097da8b824081275af9751cb9bf354"} Apr 23 17:55:04.743912 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:04.743810 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-865cb79987-sh4xf" podStartSLOduration=1.743793169 podStartE2EDuration="1.743793169s" podCreationTimestamp="2026-04-23 17:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:55:04.742863261 +0000 UTC m=+164.038141860" watchObservedRunningTime="2026-04-23 17:55:04.743793169 +0000 UTC m=+164.039071767" Apr 23 17:55:05.483755 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:05.483715 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator_migrator-74bb7799d9-mgsjw_45d25647-0ba1-4d11-9101-913fb12b43ac/migrator/0.log" Apr 23 17:55:05.687377 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:05.687344 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator_migrator-74bb7799d9-mgsjw_45d25647-0ba1-4d11-9101-913fb12b43ac/graceful-termination/0.log" Apr 23 17:55:05.908188 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:05.908142 2566 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-6769c5d45-bfm5m_df076eb4-c3f3-4cbf-8cee-a735d1572b5b/kube-storage-version-migrator-operator/0.log" Apr 23 17:55:06.086768 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:06.086734 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_volume-data-source-validator-7c6cbb6c87-g4wkz_549ece9d-4598-441f-a940-cecc154fbf7e/volume-data-source-validator/0.log" Apr 23 17:55:06.882599 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:06.882572 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-csi-drivers_aws-ebs-csi-driver-node-c75rn_19833190-ba61-4f22-b8f2-00153c34b225/csi-driver/0.log" Apr 23 17:55:07.082580 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:07.082552 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-csi-drivers_aws-ebs-csi-driver-node-c75rn_19833190-ba61-4f22-b8f2-00153c34b225/csi-node-driver-registrar/0.log" Apr 23 17:55:07.281801 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:07.281720 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-csi-drivers_aws-ebs-csi-driver-node-c75rn_19833190-ba61-4f22-b8f2-00153c34b225/csi-liveness-probe/0.log" Apr 23 17:55:09.422151 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:09.422118 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-registry-tls\") pod \"image-registry-7fb885f848-mqdhm\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" Apr 23 17:55:09.422570 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:09.422166 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/1992d43a-7589-4ec9-b815-8a2c284b237c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-75587bd455-bl4fl\" (UID: \"1992d43a-7589-4ec9-b815-8a2c284b237c\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-bl4fl" Apr 23 17:55:09.422570 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:09.422206 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/72010597-3b11-4326-ad5d-3af1af12b593-cert\") pod \"ingress-canary-gm4kb\" (UID: \"72010597-3b11-4326-ad5d-3af1af12b593\") " pod="openshift-ingress-canary/ingress-canary-gm4kb" Apr 23 17:55:09.422570 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:09.422235 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/274b9ba8-597e-49dd-9ba0-e1243dc7b259-metrics-certs\") pod \"router-default-54ff9bfc64-gddsn\" (UID: \"274b9ba8-597e-49dd-9ba0-e1243dc7b259\") " pod="openshift-ingress/router-default-54ff9bfc64-gddsn" Apr 23 17:55:09.422570 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:09.422354 2566 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Apr 23 17:55:09.422570 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:09.422400 2566 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 23 17:55:09.422570 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:09.422351 2566 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/158cc267-e1dc-48e1-90d2-dba2495a9735-metrics-tls\") pod \"dns-default-ptrbw\" (UID: \"158cc267-e1dc-48e1-90d2-dba2495a9735\") " pod="openshift-dns/dns-default-ptrbw" Apr 23 17:55:09.422570 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:09.422424 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1992d43a-7589-4ec9-b815-8a2c284b237c-cluster-monitoring-operator-tls podName:1992d43a-7589-4ec9-b815-8a2c284b237c nodeName:}" failed. No retries permitted until 2026-04-23 17:55:25.422406994 +0000 UTC m=+184.717685573 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/1992d43a-7589-4ec9-b815-8a2c284b237c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-75587bd455-bl4fl" (UID: "1992d43a-7589-4ec9-b815-8a2c284b237c") : secret "cluster-monitoring-operator-tls" not found Apr 23 17:55:09.422570 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:09.422359 2566 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found Apr 23 17:55:09.422570 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:09.422426 2566 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 23 17:55:09.422570 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:09.422495 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72010597-3b11-4326-ad5d-3af1af12b593-cert podName:72010597-3b11-4326-ad5d-3af1af12b593 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:25.422439665 +0000 UTC m=+184.717718243 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/72010597-3b11-4326-ad5d-3af1af12b593-cert") pod "ingress-canary-gm4kb" (UID: "72010597-3b11-4326-ad5d-3af1af12b593") : secret "canary-serving-cert" not found Apr 23 17:55:09.422570 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:09.422535 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/274b9ba8-597e-49dd-9ba0-e1243dc7b259-metrics-certs podName:274b9ba8-597e-49dd-9ba0-e1243dc7b259 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:25.422506972 +0000 UTC m=+184.717785552 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/274b9ba8-597e-49dd-9ba0-e1243dc7b259-metrics-certs") pod "router-default-54ff9bfc64-gddsn" (UID: "274b9ba8-597e-49dd-9ba0-e1243dc7b259") : secret "router-metrics-certs-default" not found Apr 23 17:55:09.422570 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:09.422564 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/158cc267-e1dc-48e1-90d2-dba2495a9735-metrics-tls podName:158cc267-e1dc-48e1-90d2-dba2495a9735 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:25.422554238 +0000 UTC m=+184.717832818 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/158cc267-e1dc-48e1-90d2-dba2495a9735-metrics-tls") pod "dns-default-ptrbw" (UID: "158cc267-e1dc-48e1-90d2-dba2495a9735") : secret "dns-default-metrics-tls" not found Apr 23 17:55:09.423043 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:09.422611 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/274b9ba8-597e-49dd-9ba0-e1243dc7b259-service-ca-bundle\") pod \"router-default-54ff9bfc64-gddsn\" (UID: \"274b9ba8-597e-49dd-9ba0-e1243dc7b259\") " pod="openshift-ingress/router-default-54ff9bfc64-gddsn" Apr 23 17:55:09.423043 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:09.422649 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b53d5724-71d1-441d-9546-a103c6736771-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-6dnld\" (UID: \"b53d5724-71d1-441d-9546-a103c6736771\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6dnld" Apr 23 17:55:09.423043 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:09.422679 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/fc849c85-296b-4ebd-9bd4-27f9edfd3785-networking-console-plugin-cert\") pod \"networking-console-plugin-cb95c66f6-jwhmv\" (UID: \"fc849c85-296b-4ebd-9bd4-27f9edfd3785\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-jwhmv" Apr 23 17:55:09.423043 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:09.422718 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/274b9ba8-597e-49dd-9ba0-e1243dc7b259-service-ca-bundle podName:274b9ba8-597e-49dd-9ba0-e1243dc7b259 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:25.422701428 +0000 UTC m=+184.717980023 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/274b9ba8-597e-49dd-9ba0-e1243dc7b259-service-ca-bundle") pod "router-default-54ff9bfc64-gddsn" (UID: "274b9ba8-597e-49dd-9ba0-e1243dc7b259") : configmap references non-existent config key: service-ca.crt Apr 23 17:55:09.423043 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:09.422790 2566 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 23 17:55:09.423043 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:09.422836 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc849c85-296b-4ebd-9bd4-27f9edfd3785-networking-console-plugin-cert podName:fc849c85-296b-4ebd-9bd4-27f9edfd3785 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:25.422827979 +0000 UTC m=+184.718106554 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/fc849c85-296b-4ebd-9bd4-27f9edfd3785-networking-console-plugin-cert") pod "networking-console-plugin-cb95c66f6-jwhmv" (UID: "fc849c85-296b-4ebd-9bd4-27f9edfd3785") : secret "networking-console-plugin-cert" not found Apr 23 17:55:09.424837 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:09.424818 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-registry-tls\") pod \"image-registry-7fb885f848-mqdhm\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" Apr 23 17:55:09.425031 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:09.425012 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b53d5724-71d1-441d-9546-a103c6736771-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-6dnld\" (UID: \"b53d5724-71d1-441d-9546-a103c6736771\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6dnld" Apr 23 17:55:09.450955 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:09.450926 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" Apr 23 17:55:09.520225 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:09.520197 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6dnld" Apr 23 17:55:09.579262 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:09.579229 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-7fb885f848-mqdhm"] Apr 23 17:55:09.582996 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:55:09.582964 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7cbc07c_c629_4c31_a456_4f9bf5b328f7.slice/crio-68955ba76fbeaa1d2f4166d2fdceba463cd4713f9422964ef4b3393d641e09ba WatchSource:0}: Error finding container 68955ba76fbeaa1d2f4166d2fdceba463cd4713f9422964ef4b3393d641e09ba: Status 404 returned error can't find the container with id 68955ba76fbeaa1d2f4166d2fdceba463cd4713f9422964ef4b3393d641e09ba Apr 23 17:55:09.655983 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:09.655958 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6dnld"] Apr 23 17:55:09.740516 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:09.739943 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6dnld" event={"ID":"b53d5724-71d1-441d-9546-a103c6736771","Type":"ContainerStarted","Data":"e9ac2097a4aade7c8045b09ff3d02e7d3781e7edd8b9bf0b30d6046b52e6b9d2"} Apr 23 17:55:09.742013 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:09.741986 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" event={"ID":"a7cbc07c-c629-4c31-a456-4f9bf5b328f7","Type":"ContainerStarted","Data":"38fd9541a1bc44662e84140db5725901b3a81a9d6e590e772071f86ee652f41d"} Apr 23 17:55:09.742116 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:09.742019 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" event={"ID":"a7cbc07c-c629-4c31-a456-4f9bf5b328f7","Type":"ContainerStarted","Data":"68955ba76fbeaa1d2f4166d2fdceba463cd4713f9422964ef4b3393d641e09ba"} Apr 23 17:55:09.742164 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:09.742123 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" Apr 23 17:55:09.763898 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:09.763829 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" podStartSLOduration=59.763805659 podStartE2EDuration="59.763805659s" podCreationTimestamp="2026-04-23 17:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:55:09.762491447 +0000 UTC m=+169.057770045" watchObservedRunningTime="2026-04-23 17:55:09.763805659 +0000 UTC m=+169.059084258" Apr 23 17:55:11.631265 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:11.631228 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7wdbp" Apr 23 17:55:12.249644 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:12.249611 2566 scope.go:117] "RemoveContainer" containerID="6a71ba03dee221f9bd2a17cfb6108fb69eb3264565066869351d024e28c61fc0" Apr 23 17:55:12.249922 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:12.249808 2566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_openshift-machine-config-operator(7fc0473024b4c48d914a6628102ac7a2)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal" podUID="7fc0473024b4c48d914a6628102ac7a2" Apr 23 17:55:12.751798 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:12.751760 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6dnld" event={"ID":"b53d5724-71d1-441d-9546-a103c6736771","Type":"ContainerStarted","Data":"fbcc15312d25e6b4b6ab8527713835e7ca6b023857777dab31170e177c03a413"} Apr 23 17:55:12.752218 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:12.751808 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6dnld" event={"ID":"b53d5724-71d1-441d-9546-a103c6736771","Type":"ContainerStarted","Data":"5c7304b94181563b487c63ce39cea608d554f1e82e5cedee80526dd7f740c932"} Apr 23 17:55:12.769328 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:12.769218 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-6dnld" podStartSLOduration=59.149953269 podStartE2EDuration="1m1.769203266s" podCreationTimestamp="2026-04-23 17:54:11 +0000 UTC" firstStartedPulling="2026-04-23 17:55:09.702745397 +0000 UTC m=+168.998023979" lastFinishedPulling="2026-04-23 17:55:12.321995387 +0000 UTC m=+171.617273976" observedRunningTime="2026-04-23 17:55:12.768237006 +0000 UTC m=+172.063515602" watchObservedRunningTime="2026-04-23 17:55:12.769203266 +0000 UTC m=+172.064481864" Apr 23 17:55:16.250366 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:16.250338 2566 scope.go:117] "RemoveContainer" 
containerID="d114f81f9380b97243207b63728207e17fa81c3d7cf1460ec7110e4022a82fe6" Apr 23 17:55:16.764859 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:16.764835 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-5kpdc_9334253b-6eff-4ad7-9cc7-5d96bdb994ad/console-operator/1.log" Apr 23 17:55:16.765041 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:16.764881 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-9d4b6777b-5kpdc" event={"ID":"9334253b-6eff-4ad7-9cc7-5d96bdb994ad","Type":"ContainerStarted","Data":"4967e54c3ae653de6d0b6c87ac1f5d2d7fa3383f136049a37cf4f02c2d75b2ab"} Apr 23 17:55:16.765151 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:16.765131 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-9d4b6777b-5kpdc" Apr 23 17:55:16.769788 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:16.769764 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-9d4b6777b-5kpdc" Apr 23 17:55:16.787751 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:16.787677 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-9d4b6777b-5kpdc" podStartSLOduration=60.672087407 podStartE2EDuration="1m5.787647447s" podCreationTimestamp="2026-04-23 17:54:11 +0000 UTC" firstStartedPulling="2026-04-23 17:54:54.183691179 +0000 UTC m=+153.478969757" lastFinishedPulling="2026-04-23 17:54:59.299251222 +0000 UTC m=+158.594529797" observedRunningTime="2026-04-23 17:55:16.785731752 +0000 UTC m=+176.081010354" watchObservedRunningTime="2026-04-23 17:55:16.787647447 +0000 UTC m=+176.082926044" Apr 23 17:55:20.118699 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:20.118662 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ec0108e4-36f5-4959-99b0-8fe6326c7aaa-metrics-certs\") pod \"network-metrics-daemon-96rvc\" (UID: \"ec0108e4-36f5-4959-99b0-8fe6326c7aaa\") " pod="openshift-multus/network-metrics-daemon-96rvc" Apr 23 17:55:20.119066 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:20.118723 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/bf59011d-e01e-49f9-b468-33af8f5a6489-original-pull-secret\") pod \"global-pull-secret-syncer-jhvgn\" (UID: \"bf59011d-e01e-49f9-b468-33af8f5a6489\") " pod="kube-system/global-pull-secret-syncer-jhvgn" Apr 23 17:55:20.121171 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:20.121142 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ec0108e4-36f5-4959-99b0-8fe6326c7aaa-metrics-certs\") pod \"network-metrics-daemon-96rvc\" (UID: \"ec0108e4-36f5-4959-99b0-8fe6326c7aaa\") " pod="openshift-multus/network-metrics-daemon-96rvc" Apr 23 17:55:20.121171 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:20.121156 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/bf59011d-e01e-49f9-b468-33af8f5a6489-original-pull-secret\") pod \"global-pull-secret-syncer-jhvgn\" (UID: \"bf59011d-e01e-49f9-b468-33af8f5a6489\") " pod="kube-system/global-pull-secret-syncer-jhvgn" Apr 23 17:55:20.160129 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:20.160097 2566 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96rvc" Apr 23 17:55:20.165826 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:20.165809 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-jhvgn" Apr 23 17:55:20.220242 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:20.220210 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qlspw\" (UniqueName: \"kubernetes.io/projected/2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc-kube-api-access-qlspw\") pod \"network-check-target-jd2kh\" (UID: \"2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc\") " pod="openshift-network-diagnostics/network-check-target-jd2kh" Apr 23 17:55:20.230000 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:20.229933 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlspw\" (UniqueName: \"kubernetes.io/projected/2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc-kube-api-access-qlspw\") pod \"network-check-target-jd2kh\" (UID: \"2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc\") " pod="openshift-network-diagnostics/network-check-target-jd2kh" Apr 23 17:55:20.305049 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:20.305024 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-jhvgn"] Apr 23 17:55:20.307576 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:55:20.307553 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf59011d_e01e_49f9_b468_33af8f5a6489.slice/crio-d0d988e814c4af64176870dbd2e92873460546fba5a0a9da76eb04d7377263f9 WatchSource:0}: Error finding container d0d988e814c4af64176870dbd2e92873460546fba5a0a9da76eb04d7377263f9: Status 404 returned error can't find the container with id d0d988e814c4af64176870dbd2e92873460546fba5a0a9da76eb04d7377263f9 Apr 23 17:55:20.320659 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:20.320633 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-96rvc"] Apr 23 17:55:20.323490 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:55:20.323463 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec0108e4_36f5_4959_99b0_8fe6326c7aaa.slice/crio-c71ecade9b0c39329f505c0bab1062a56ef84b86c760176cb6dbbd6e85b90126 WatchSource:0}: Error finding container c71ecade9b0c39329f505c0bab1062a56ef84b86c760176cb6dbbd6e85b90126: Status 404 returned error can't find the container with id c71ecade9b0c39329f505c0bab1062a56ef84b86c760176cb6dbbd6e85b90126 Apr 23 17:55:20.470009 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:20.469933 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jd2kh" Apr 23 17:55:20.609864 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:20.609833 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-jd2kh"] Apr 23 17:55:20.612564 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:55:20.612542 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b7df6cc_2be6_40b1_b7dd_9d8f310a72dc.slice/crio-d1448c529298469c5c73eb8046557dd23daa5ec2fc45a1c78801e895706a86ef WatchSource:0}: Error finding container d1448c529298469c5c73eb8046557dd23daa5ec2fc45a1c78801e895706a86ef: Status 404 returned error can't find the container with id d1448c529298469c5c73eb8046557dd23daa5ec2fc45a1c78801e895706a86ef Apr 23 17:55:20.777498 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:20.777416 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-96rvc" event={"ID":"ec0108e4-36f5-4959-99b0-8fe6326c7aaa","Type":"ContainerStarted","Data":"c71ecade9b0c39329f505c0bab1062a56ef84b86c760176cb6dbbd6e85b90126"} Apr 23 17:55:20.778496 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:20.778464 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-jhvgn" event={"ID":"bf59011d-e01e-49f9-b468-33af8f5a6489","Type":"ContainerStarted","Data":"d0d988e814c4af64176870dbd2e92873460546fba5a0a9da76eb04d7377263f9"} Apr 23 17:55:20.779914 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:20.779889 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-jd2kh" event={"ID":"2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc","Type":"ContainerStarted","Data":"3f08cf01aefb8f45748e9356f2dac502ac97165993d70274256cee0d0f20fcc7"} Apr 23 17:55:20.780021 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:20.779920 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-jd2kh" event={"ID":"2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc","Type":"ContainerStarted","Data":"d1448c529298469c5c73eb8046557dd23daa5ec2fc45a1c78801e895706a86ef"} Apr 23 17:55:20.780021 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:20.780018 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-jd2kh" Apr 23 17:55:20.814582 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:20.814512 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-target-jd2kh" podStartSLOduration=68.814491525 podStartE2EDuration="1m8.814491525s" podCreationTimestamp="2026-04-23 17:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:55:20.814046643 +0000 UTC m=+180.109325241" watchObservedRunningTime="2026-04-23 17:55:20.814491525 +0000 UTC m=+180.109770126" Apr 23 17:55:22.788682 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:22.788644 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-96rvc" event={"ID":"ec0108e4-36f5-4959-99b0-8fe6326c7aaa","Type":"ContainerStarted","Data":"343b020937f62356f9303fa0111114a468bfdd90f33663d44ac49be4d701ca04"} Apr 23 17:55:22.788682 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:22.788686 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-96rvc" event={"ID":"ec0108e4-36f5-4959-99b0-8fe6326c7aaa","Type":"ContainerStarted","Data":"5c4197a6609db680550a515eafe1165321aad03a7820403942678de695d37681"} Apr 23 17:55:23.250681 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:23.250599 2566 scope.go:117] "RemoveContainer" containerID="6a71ba03dee221f9bd2a17cfb6108fb69eb3264565066869351d024e28c61fc0" Apr 23 17:55:24.799513 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:24.799482 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_7fc0473024b4c48d914a6628102ac7a2/kube-rbac-proxy-crio/4.log" Apr 23 17:55:24.799938 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:24.799832 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal" event={"ID":"7fc0473024b4c48d914a6628102ac7a2","Type":"ContainerStarted","Data":"674a70c773ed0b01b4ed6f254d74b50824d6fc51b8bd7eb4336ff25f58695cc3"} Apr 23 17:55:24.801096 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:24.801074 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-jhvgn" event={"ID":"bf59011d-e01e-49f9-b468-33af8f5a6489","Type":"ContainerStarted","Data":"a740c56504c5825ff371b5407afd306594477fe2046ad1bf07b825ef756cd0e5"} Apr 23 17:55:24.818148 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:24.818103 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal" podStartSLOduration=69.818091476 podStartE2EDuration="1m9.818091476s" podCreationTimestamp="2026-04-23 17:54:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:55:24.817512147 +0000 UTC m=+184.112790743" watchObservedRunningTime="2026-04-23 17:55:24.818091476 +0000 UTC m=+184.113370073" Apr 23 17:55:24.818477 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:24.818437 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-96rvc" podStartSLOduration=71.303934517 podStartE2EDuration="1m12.81842715s" podCreationTimestamp="2026-04-23 17:54:12 +0000 UTC" firstStartedPulling="2026-04-23 17:55:20.325116461 +0000 UTC m=+179.620395035" lastFinishedPulling="2026-04-23 17:55:21.839609087 +0000 UTC m=+181.134887668" observedRunningTime="2026-04-23 17:55:22.812509323 +0000 UTC m=+182.107787919" watchObservedRunningTime="2026-04-23 17:55:24.81842715 +0000 UTC m=+184.113705747" Apr 23 17:55:24.838992 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:24.838952 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/global-pull-secret-syncer-jhvgn" podStartSLOduration=66.143445919 podStartE2EDuration="1m9.838938388s" podCreationTimestamp="2026-04-23 17:54:15 +0000 UTC" firstStartedPulling="2026-04-23 17:55:20.30916618 +0000 UTC m=+179.604444755" lastFinishedPulling="2026-04-23 17:55:24.004658646 +0000 UTC m=+183.299937224" observedRunningTime="2026-04-23 17:55:24.838441953 +0000 UTC m=+184.133720551" watchObservedRunningTime="2026-04-23 17:55:24.838938388 +0000 UTC m=+184.134217023" Apr 23 17:55:25.469083 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:25.469036 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/1992d43a-7589-4ec9-b815-8a2c284b237c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-75587bd455-bl4fl\" (UID: \"1992d43a-7589-4ec9-b815-8a2c284b237c\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-bl4fl" Apr 23 17:55:25.469256 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:25.469102 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/72010597-3b11-4326-ad5d-3af1af12b593-cert\") pod \"ingress-canary-gm4kb\" (UID: \"72010597-3b11-4326-ad5d-3af1af12b593\") " pod="openshift-ingress-canary/ingress-canary-gm4kb" Apr 23 17:55:25.469256 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:25.469136 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/274b9ba8-597e-49dd-9ba0-e1243dc7b259-metrics-certs\") pod \"router-default-54ff9bfc64-gddsn\" (UID: \"274b9ba8-597e-49dd-9ba0-e1243dc7b259\") " pod="openshift-ingress/router-default-54ff9bfc64-gddsn" Apr 23 17:55:25.469256 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:25.469194 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/158cc267-e1dc-48e1-90d2-dba2495a9735-metrics-tls\") pod \"dns-default-ptrbw\" (UID: \"158cc267-e1dc-48e1-90d2-dba2495a9735\") " pod="openshift-dns/dns-default-ptrbw" Apr 23 17:55:25.469256 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:25.469234 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/274b9ba8-597e-49dd-9ba0-e1243dc7b259-service-ca-bundle\") pod \"router-default-54ff9bfc64-gddsn\" (UID: \"274b9ba8-597e-49dd-9ba0-e1243dc7b259\") " pod="openshift-ingress/router-default-54ff9bfc64-gddsn" Apr 23 17:55:25.469517 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:25.469269 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/fc849c85-296b-4ebd-9bd4-27f9edfd3785-networking-console-plugin-cert\") pod \"networking-console-plugin-cb95c66f6-jwhmv\" (UID: \"fc849c85-296b-4ebd-9bd4-27f9edfd3785\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-jwhmv" Apr 23 17:55:25.470216 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:25.470069 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/274b9ba8-597e-49dd-9ba0-e1243dc7b259-service-ca-bundle\") pod \"router-default-54ff9bfc64-gddsn\" (UID: \"274b9ba8-597e-49dd-9ba0-e1243dc7b259\") " pod="openshift-ingress/router-default-54ff9bfc64-gddsn" Apr 23 17:55:25.471927 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:25.471901 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/72010597-3b11-4326-ad5d-3af1af12b593-cert\") pod \"ingress-canary-gm4kb\" (UID: \"72010597-3b11-4326-ad5d-3af1af12b593\") " pod="openshift-ingress-canary/ingress-canary-gm4kb" Apr 23 17:55:25.472064 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:25.471991 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/274b9ba8-597e-49dd-9ba0-e1243dc7b259-metrics-certs\") pod \"router-default-54ff9bfc64-gddsn\" (UID: \"274b9ba8-597e-49dd-9ba0-e1243dc7b259\") " 
pod="openshift-ingress/router-default-54ff9bfc64-gddsn" Apr 23 17:55:25.472064 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:25.471999 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/fc849c85-296b-4ebd-9bd4-27f9edfd3785-networking-console-plugin-cert\") pod \"networking-console-plugin-cb95c66f6-jwhmv\" (UID: \"fc849c85-296b-4ebd-9bd4-27f9edfd3785\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-jwhmv" Apr 23 17:55:25.472229 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:25.472208 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/1992d43a-7589-4ec9-b815-8a2c284b237c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-75587bd455-bl4fl\" (UID: \"1992d43a-7589-4ec9-b815-8a2c284b237c\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-bl4fl" Apr 23 17:55:25.472467 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:25.472447 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/158cc267-e1dc-48e1-90d2-dba2495a9735-metrics-tls\") pod \"dns-default-ptrbw\" (UID: \"158cc267-e1dc-48e1-90d2-dba2495a9735\") " pod="openshift-dns/dns-default-ptrbw" Apr 23 17:55:25.689495 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:25.689463 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-jll6l\"" Apr 23 17:55:25.695036 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:25.695017 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"cluster-monitoring-operator-dockercfg-nhbht\"" Apr 23 17:55:25.698070 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:25.698042 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-ptrbw" Apr 23 17:55:25.703363 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:25.703345 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-bl4fl" Apr 23 17:55:25.704538 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:25.704521 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-fd7p8\"" Apr 23 17:55:25.712492 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:25.712468 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-54ff9bfc64-gddsn" Apr 23 17:55:25.739319 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:25.739220 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"default-dockercfg-r4kqt\"" Apr 23 17:55:25.746676 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:25.746648 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-cb95c66f6-jwhmv" Apr 23 17:55:25.759014 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:25.758913 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-xzdf6\"" Apr 23 17:55:25.767508 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:25.767023 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-gm4kb" Apr 23 17:55:25.865277 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:25.865220 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-ptrbw"] Apr 23 17:55:25.866290 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:55:25.866243 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod158cc267_e1dc_48e1_90d2_dba2495a9735.slice/crio-bf18c77245d15c6b485685c57160a8295a8c93aa9a338895855dc6e32f13b3e4 WatchSource:0}: Error finding container bf18c77245d15c6b485685c57160a8295a8c93aa9a338895855dc6e32f13b3e4: Status 404 returned error can't find the container with id bf18c77245d15c6b485685c57160a8295a8c93aa9a338895855dc6e32f13b3e4 Apr 23 17:55:25.886890 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:25.886850 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-75587bd455-bl4fl"] Apr 23 17:55:25.891924 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:55:25.891897 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1992d43a_7589_4ec9_b815_8a2c284b237c.slice/crio-a0f1c05ff8e879b01d67769de9488fde313b9065ea2b9e5720de1f3a3f14f2f4 WatchSource:0}: Error finding container a0f1c05ff8e879b01d67769de9488fde313b9065ea2b9e5720de1f3a3f14f2f4: Status 404 returned error can't find the container with id a0f1c05ff8e879b01d67769de9488fde313b9065ea2b9e5720de1f3a3f14f2f4 Apr 23 17:55:25.921608 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:25.918712 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress/router-default-54ff9bfc64-gddsn"] Apr 23 17:55:25.922044 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:55:25.922001 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod274b9ba8_597e_49dd_9ba0_e1243dc7b259.slice/crio-d780135a105b6616cb3d54f7fc94e59fbffc7f85db1e04846db9cf8c91c084d1 WatchSource:0}: Error finding container d780135a105b6616cb3d54f7fc94e59fbffc7f85db1e04846db9cf8c91c084d1: Status 404 returned error can't find the container with id d780135a105b6616cb3d54f7fc94e59fbffc7f85db1e04846db9cf8c91c084d1 Apr 23 17:55:25.941725 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:25.941687 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-cb95c66f6-jwhmv"] Apr 23 17:55:25.951319 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:55:25.951280 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc849c85_296b_4ebd_9bd4_27f9edfd3785.slice/crio-a888141edd7613169e70d6474b44c4ea5d7370773b87a2773864c2ab82b79551 WatchSource:0}: Error finding container a888141edd7613169e70d6474b44c4ea5d7370773b87a2773864c2ab82b79551: Status 404 returned error can't find the container with id a888141edd7613169e70d6474b44c4ea5d7370773b87a2773864c2ab82b79551 Apr 23 17:55:25.970639 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:25.970614 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-gm4kb"] Apr 23 17:55:25.973976 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:55:25.973923 2566 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72010597_3b11_4326_ad5d_3af1af12b593.slice/crio-5f12e85d5fc3564b4128cc3063fce0c75d28b650b258836341e144f68966d6d1 WatchSource:0}: Error finding container 5f12e85d5fc3564b4128cc3063fce0c75d28b650b258836341e144f68966d6d1: Status 404 returned error can't find the container with id 5f12e85d5fc3564b4128cc3063fce0c75d28b650b258836341e144f68966d6d1 Apr 23 17:55:26.813222 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:26.813174 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ptrbw" event={"ID":"158cc267-e1dc-48e1-90d2-dba2495a9735","Type":"ContainerStarted","Data":"bf18c77245d15c6b485685c57160a8295a8c93aa9a338895855dc6e32f13b3e4"} Apr 23 17:55:26.816652 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:26.816613 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-cb95c66f6-jwhmv" event={"ID":"fc849c85-296b-4ebd-9bd4-27f9edfd3785","Type":"ContainerStarted","Data":"a888141edd7613169e70d6474b44c4ea5d7370773b87a2773864c2ab82b79551"} Apr 23 17:55:26.819138 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:26.819103 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-gm4kb" event={"ID":"72010597-3b11-4326-ad5d-3af1af12b593","Type":"ContainerStarted","Data":"5f12e85d5fc3564b4128cc3063fce0c75d28b650b258836341e144f68966d6d1"} Apr 23 17:55:26.820995 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:26.820962 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-bl4fl" event={"ID":"1992d43a-7589-4ec9-b815-8a2c284b237c","Type":"ContainerStarted","Data":"a0f1c05ff8e879b01d67769de9488fde313b9065ea2b9e5720de1f3a3f14f2f4"} Apr 23 17:55:26.824732 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:26.824371 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-54ff9bfc64-gddsn" event={"ID":"274b9ba8-597e-49dd-9ba0-e1243dc7b259","Type":"ContainerStarted","Data":"dc97d9530c1cf9357aa89c6100def306e9a0ae4d9a9ce2558a333dcbf7a706d3"} Apr 23 17:55:26.824732 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:26.824409 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-54ff9bfc64-gddsn" event={"ID":"274b9ba8-597e-49dd-9ba0-e1243dc7b259","Type":"ContainerStarted","Data":"d780135a105b6616cb3d54f7fc94e59fbffc7f85db1e04846db9cf8c91c084d1"} Apr 23 17:55:26.850948 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:26.850688 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-54ff9bfc64-gddsn" podStartSLOduration=75.850665279 podStartE2EDuration="1m15.850665279s" podCreationTimestamp="2026-04-23 17:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:55:26.848611674 +0000 UTC m=+186.143890276" watchObservedRunningTime="2026-04-23 17:55:26.850665279 +0000 UTC m=+186.145943854" Apr 23 17:55:27.712803 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:27.712768 2566 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-54ff9bfc64-gddsn" Apr 23 17:55:27.715627 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:27.715599 2566 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-54ff9bfc64-gddsn" Apr 23 17:55:27.828393 
ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:27.828344 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-54ff9bfc64-gddsn" Apr 23 17:55:27.829771 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:27.829748 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-54ff9bfc64-gddsn" Apr 23 17:55:28.833222 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:28.833184 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-cb95c66f6-jwhmv" event={"ID":"fc849c85-296b-4ebd-9bd4-27f9edfd3785","Type":"ContainerStarted","Data":"1437468de1c2eb68b546ef327e65bd35c44b9db66818b034a59b83837dd679d4"} Apr 23 17:55:28.834635 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:28.834600 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-gm4kb" event={"ID":"72010597-3b11-4326-ad5d-3af1af12b593","Type":"ContainerStarted","Data":"1debaacda0658820761e7610dd1879d081174b7c636461e71f82e0655472ff0f"} Apr 23 17:55:28.838483 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:28.838458 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-bl4fl" event={"ID":"1992d43a-7589-4ec9-b815-8a2c284b237c","Type":"ContainerStarted","Data":"be6f6e155e138b112cb883afe077d2f6e2b6eef3494d1e18ad1ddf7e2fd616fe"} Apr 23 17:55:28.840363 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:28.840341 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ptrbw" event={"ID":"158cc267-e1dc-48e1-90d2-dba2495a9735","Type":"ContainerStarted","Data":"6df801e5d36c65a2907d2ac91b0837b898c45d296cf155d4e17f72080d6260d3"} Apr 23 17:55:28.840456 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:28.840370 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ptrbw" event={"ID":"158cc267-e1dc-48e1-90d2-dba2495a9735","Type":"ContainerStarted","Data":"b3b87e54eb253c4e5725dbe35e350b1d33948cf4e921468e5d8f6242c3763555"} Apr 23 17:55:28.930453 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:28.930405 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-bl4fl" podStartSLOduration=75.346775008 podStartE2EDuration="1m17.930389034s" podCreationTimestamp="2026-04-23 17:54:11 +0000 UTC" firstStartedPulling="2026-04-23 17:55:25.894220957 +0000 UTC m=+185.189499538" lastFinishedPulling="2026-04-23 17:55:28.477834982 +0000 UTC m=+187.773113564" observedRunningTime="2026-04-23 17:55:28.929859807 +0000 UTC m=+188.225138404" watchObservedRunningTime="2026-04-23 17:55:28.930389034 +0000 UTC m=+188.225667632" Apr 23 17:55:28.930594 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:28.930506 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-cb95c66f6-jwhmv" podStartSLOduration=75.410660291 podStartE2EDuration="1m17.930498949s" podCreationTimestamp="2026-04-23 17:54:11 +0000 UTC" firstStartedPulling="2026-04-23 17:55:25.953072648 +0000 UTC m=+185.248351224" lastFinishedPulling="2026-04-23 17:55:28.472911306 +0000 UTC m=+187.768189882" observedRunningTime="2026-04-23 17:55:28.884631911 +0000 UTC m=+188.179910509" watchObservedRunningTime="2026-04-23 17:55:28.930498949 +0000 UTC m=+188.225777548" Apr 23 17:55:28.957315 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:28.957269 
2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-ptrbw" podStartSLOduration=33.353391487 podStartE2EDuration="35.957254021s" podCreationTimestamp="2026-04-23 17:54:53 +0000 UTC" firstStartedPulling="2026-04-23 17:55:25.869037442 +0000 UTC m=+185.164316017" lastFinishedPulling="2026-04-23 17:55:28.47289996 +0000 UTC m=+187.768178551" observedRunningTime="2026-04-23 17:55:28.95675692 +0000 UTC m=+188.252035518" watchObservedRunningTime="2026-04-23 17:55:28.957254021 +0000 UTC m=+188.252532620" Apr 23 17:55:28.981072 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:28.980987 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-gm4kb" podStartSLOduration=33.482051395 podStartE2EDuration="35.980977904s" podCreationTimestamp="2026-04-23 17:54:53 +0000 UTC" firstStartedPulling="2026-04-23 17:55:25.97579368 +0000 UTC m=+185.271072256" lastFinishedPulling="2026-04-23 17:55:28.474720189 +0000 UTC m=+187.769998765" observedRunningTime="2026-04-23 17:55:28.980164815 +0000 UTC m=+188.275443412" watchObservedRunningTime="2026-04-23 17:55:28.980977904 +0000 UTC m=+188.276256501" Apr 23 17:55:29.456152 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:29.456098 2566 patch_prober.go:28] interesting pod/image-registry-7fb885f848-mqdhm container/registry namespace/openshift-image-registry: Liveness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 23 17:55:29.456335 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:29.456178 2566 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" podUID="a7cbc07c-c629-4c31-a456-4f9bf5b328f7" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 17:55:29.844351 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:29.844292 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-ptrbw" Apr 23 17:55:30.748969 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:30.748936 2566 patch_prober.go:28] interesting pod/image-registry-7fb885f848-mqdhm container/registry namespace/openshift-image-registry: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 23 17:55:30.749163 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:30.749002 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" podUID="a7cbc07c-c629-4c31-a456-4f9bf5b328f7" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 17:55:35.396732 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.396690 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-6bcc868b7-8dvlx"] Apr 23 17:55:35.400213 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.400183 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-6bcc868b7-8dvlx" Apr 23 17:55:35.400902 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.400874 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-77f479db9b-7zsd9"] Apr 23 17:55:35.402770 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.402749 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-77f479db9b-7zsd9" Apr 23 17:55:35.403420 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.403396 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Apr 23 17:55:35.403541 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.403419 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-2p54d\"" Apr 23 17:55:35.404175 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.404157 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Apr 23 17:55:35.405420 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.405399 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Apr 23 17:55:35.405545 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.405530 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Apr 23 17:55:35.407022 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.406997 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Apr 23 17:55:35.410160 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.410141 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-5lt52\"" Apr 23 17:55:35.410257 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.410241 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Apr 23 17:55:35.411569 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.411547 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-6bcc868b7-8dvlx"] Apr 23 17:55:35.414533 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.414512 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Apr 23 17:55:35.426206 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.426178 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-77f479db9b-7zsd9"] Apr 23 17:55:35.450591 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.450553 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa056e98-492b-4b91-86a6-f5ab60987ce5-console-serving-cert\") pod \"console-77f479db9b-7zsd9\" (UID: \"aa056e98-492b-4b91-86a6-f5ab60987ce5\") " pod="openshift-console/console-77f479db9b-7zsd9" Apr 23 17:55:35.450692 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.450607 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfj8k\" (UniqueName: \"kubernetes.io/projected/f54a175e-d59b-46e9-b245-82f3b11123d9-kube-api-access-hfj8k\") pod \"downloads-6bcc868b7-8dvlx\" (UID: \"f54a175e-d59b-46e9-b245-82f3b11123d9\") " 
pod="openshift-console/downloads-6bcc868b7-8dvlx" Apr 23 17:55:35.450742 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.450684 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/aa056e98-492b-4b91-86a6-f5ab60987ce5-console-config\") pod \"console-77f479db9b-7zsd9\" (UID: \"aa056e98-492b-4b91-86a6-f5ab60987ce5\") " pod="openshift-console/console-77f479db9b-7zsd9" Apr 23 17:55:35.450742 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.450711 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aa056e98-492b-4b91-86a6-f5ab60987ce5-service-ca\") pod \"console-77f479db9b-7zsd9\" (UID: \"aa056e98-492b-4b91-86a6-f5ab60987ce5\") " pod="openshift-console/console-77f479db9b-7zsd9" Apr 23 17:55:35.450808 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.450745 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/aa056e98-492b-4b91-86a6-f5ab60987ce5-oauth-serving-cert\") pod \"console-77f479db9b-7zsd9\" (UID: \"aa056e98-492b-4b91-86a6-f5ab60987ce5\") " pod="openshift-console/console-77f479db9b-7zsd9" Apr 23 17:55:35.450808 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.450773 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/aa056e98-492b-4b91-86a6-f5ab60987ce5-console-oauth-config\") pod \"console-77f479db9b-7zsd9\" (UID: \"aa056e98-492b-4b91-86a6-f5ab60987ce5\") " pod="openshift-console/console-77f479db9b-7zsd9" Apr 23 17:55:35.450808 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.450801 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7gkf\" (UniqueName: \"kubernetes.io/projected/aa056e98-492b-4b91-86a6-f5ab60987ce5-kube-api-access-w7gkf\") pod \"console-77f479db9b-7zsd9\" (UID: \"aa056e98-492b-4b91-86a6-f5ab60987ce5\") " pod="openshift-console/console-77f479db9b-7zsd9" Apr 23 17:55:35.535957 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.535922 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-qqj6h"] Apr 23 17:55:35.537957 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.537935 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-runtime-extractor-64p9d"] Apr 23 17:55:35.538089 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.538072 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-qqj6h" Apr 23 17:55:35.539921 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.539899 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-runtime-extractor-64p9d" Apr 23 17:55:35.542682 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.542661 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-rbac-proxy\"" Apr 23 17:55:35.544884 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.544861 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-tls\"" Apr 23 17:55:35.545354 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.545331 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-sa-dockercfg-9q5jf\"" Apr 23 17:55:35.549288 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.549270 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-admission-webhook-dockercfg-7n8zp\"" Apr 23 17:55:35.549434 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.549417 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-admission-webhook-tls\"" Apr 23 17:55:35.551322 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.551282 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa056e98-492b-4b91-86a6-f5ab60987ce5-console-serving-cert\") pod \"console-77f479db9b-7zsd9\" (UID: \"aa056e98-492b-4b91-86a6-f5ab60987ce5\") " pod="openshift-console/console-77f479db9b-7zsd9" Apr 23 17:55:35.551416 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.551343 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hfj8k\" (UniqueName: \"kubernetes.io/projected/f54a175e-d59b-46e9-b245-82f3b11123d9-kube-api-access-hfj8k\") pod \"downloads-6bcc868b7-8dvlx\" (UID: \"f54a175e-d59b-46e9-b245-82f3b11123d9\") " pod="openshift-console/downloads-6bcc868b7-8dvlx" Apr 23 17:55:35.551416 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.551390 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/aa056e98-492b-4b91-86a6-f5ab60987ce5-console-config\") pod \"console-77f479db9b-7zsd9\" (UID: \"aa056e98-492b-4b91-86a6-f5ab60987ce5\") " pod="openshift-console/console-77f479db9b-7zsd9" Apr 23 17:55:35.551416 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.551413 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aa056e98-492b-4b91-86a6-f5ab60987ce5-service-ca\") pod \"console-77f479db9b-7zsd9\" (UID: \"aa056e98-492b-4b91-86a6-f5ab60987ce5\") " pod="openshift-console/console-77f479db9b-7zsd9" Apr 23 17:55:35.551564 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.551463 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/aa056e98-492b-4b91-86a6-f5ab60987ce5-oauth-serving-cert\") pod \"console-77f479db9b-7zsd9\" (UID: \"aa056e98-492b-4b91-86a6-f5ab60987ce5\") " pod="openshift-console/console-77f479db9b-7zsd9" Apr 23 17:55:35.551564 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.551497 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/aa056e98-492b-4b91-86a6-f5ab60987ce5-console-oauth-config\") pod \"console-77f479db9b-7zsd9\" (UID: \"aa056e98-492b-4b91-86a6-f5ab60987ce5\") " pod="openshift-console/console-77f479db9b-7zsd9" Apr 23 17:55:35.551564 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.551535 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w7gkf\" (UniqueName: \"kubernetes.io/projected/aa056e98-492b-4b91-86a6-f5ab60987ce5-kube-api-access-w7gkf\") pod \"console-77f479db9b-7zsd9\" (UID: \"aa056e98-492b-4b91-86a6-f5ab60987ce5\") " pod="openshift-console/console-77f479db9b-7zsd9" Apr 23 17:55:35.552201 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.552181 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/aa056e98-492b-4b91-86a6-f5ab60987ce5-console-config\") pod \"console-77f479db9b-7zsd9\" (UID: \"aa056e98-492b-4b91-86a6-f5ab60987ce5\") " pod="openshift-console/console-77f479db9b-7zsd9" Apr 23 17:55:35.552282 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.552180 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aa056e98-492b-4b91-86a6-f5ab60987ce5-service-ca\") pod \"console-77f479db9b-7zsd9\" (UID: \"aa056e98-492b-4b91-86a6-f5ab60987ce5\") " pod="openshift-console/console-77f479db9b-7zsd9" Apr 23 17:55:35.552282 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.552180 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/aa056e98-492b-4b91-86a6-f5ab60987ce5-oauth-serving-cert\") pod \"console-77f479db9b-7zsd9\" (UID: \"aa056e98-492b-4b91-86a6-f5ab60987ce5\") " pod="openshift-console/console-77f479db9b-7zsd9" Apr 23 17:55:35.553950 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.553931 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa056e98-492b-4b91-86a6-f5ab60987ce5-console-serving-cert\") pod \"console-77f479db9b-7zsd9\" (UID: \"aa056e98-492b-4b91-86a6-f5ab60987ce5\") " pod="openshift-console/console-77f479db9b-7zsd9" Apr 23 17:55:35.554226 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.554208 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/aa056e98-492b-4b91-86a6-f5ab60987ce5-console-oauth-config\") pod \"console-77f479db9b-7zsd9\" (UID: \"aa056e98-492b-4b91-86a6-f5ab60987ce5\") " pod="openshift-console/console-77f479db9b-7zsd9" Apr 23 17:55:35.559232 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.559211 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-qqj6h"] Apr 23 17:55:35.584928 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.584897 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfj8k\" (UniqueName: \"kubernetes.io/projected/f54a175e-d59b-46e9-b245-82f3b11123d9-kube-api-access-hfj8k\") pod \"downloads-6bcc868b7-8dvlx\" (UID: \"f54a175e-d59b-46e9-b245-82f3b11123d9\") " pod="openshift-console/downloads-6bcc868b7-8dvlx" Apr 23 17:55:35.592945 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.592917 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7gkf\" (UniqueName: 
\"kubernetes.io/projected/aa056e98-492b-4b91-86a6-f5ab60987ce5-kube-api-access-w7gkf\") pod \"console-77f479db9b-7zsd9\" (UID: \"aa056e98-492b-4b91-86a6-f5ab60987ce5\") " pod="openshift-console/console-77f479db9b-7zsd9" Apr 23 17:55:35.593683 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.593661 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-64p9d"] Apr 23 17:55:35.651944 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.651862 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/e8184bdb-fe3d-45b0-9c77-72fa68eb4767-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-64p9d\" (UID: \"e8184bdb-fe3d-45b0-9c77-72fa68eb4767\") " pod="openshift-insights/insights-runtime-extractor-64p9d" Apr 23 17:55:35.651944 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.651916 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/8d0b2147-611c-458a-9d92-eae8e9e49ad0-tls-certificates\") pod \"prometheus-operator-admission-webhook-57cf98b594-qqj6h\" (UID: \"8d0b2147-611c-458a-9d92-eae8e9e49ad0\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-qqj6h" Apr 23 17:55:35.652107 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.651948 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/e8184bdb-fe3d-45b0-9c77-72fa68eb4767-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-64p9d\" (UID: \"e8184bdb-fe3d-45b0-9c77-72fa68eb4767\") " pod="openshift-insights/insights-runtime-extractor-64p9d" Apr 23 17:55:35.652107 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.651973 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzp8c\" (UniqueName: \"kubernetes.io/projected/e8184bdb-fe3d-45b0-9c77-72fa68eb4767-kube-api-access-pzp8c\") pod \"insights-runtime-extractor-64p9d\" (UID: \"e8184bdb-fe3d-45b0-9c77-72fa68eb4767\") " pod="openshift-insights/insights-runtime-extractor-64p9d" Apr 23 17:55:35.652107 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.651999 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/e8184bdb-fe3d-45b0-9c77-72fa68eb4767-crio-socket\") pod \"insights-runtime-extractor-64p9d\" (UID: \"e8184bdb-fe3d-45b0-9c77-72fa68eb4767\") " pod="openshift-insights/insights-runtime-extractor-64p9d" Apr 23 17:55:35.652199 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.652110 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/e8184bdb-fe3d-45b0-9c77-72fa68eb4767-data-volume\") pod \"insights-runtime-extractor-64p9d\" (UID: \"e8184bdb-fe3d-45b0-9c77-72fa68eb4767\") " pod="openshift-insights/insights-runtime-extractor-64p9d" Apr 23 17:55:35.712769 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.712688 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6bcc868b7-8dvlx" Apr 23 17:55:35.720293 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.720257 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-77f479db9b-7zsd9" Apr 23 17:55:35.753365 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.753276 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/e8184bdb-fe3d-45b0-9c77-72fa68eb4767-data-volume\") pod \"insights-runtime-extractor-64p9d\" (UID: \"e8184bdb-fe3d-45b0-9c77-72fa68eb4767\") " pod="openshift-insights/insights-runtime-extractor-64p9d" Apr 23 17:55:35.753516 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.753379 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/e8184bdb-fe3d-45b0-9c77-72fa68eb4767-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-64p9d\" (UID: \"e8184bdb-fe3d-45b0-9c77-72fa68eb4767\") " pod="openshift-insights/insights-runtime-extractor-64p9d" Apr 23 17:55:35.753516 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.753415 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/8d0b2147-611c-458a-9d92-eae8e9e49ad0-tls-certificates\") pod \"prometheus-operator-admission-webhook-57cf98b594-qqj6h\" (UID: \"8d0b2147-611c-458a-9d92-eae8e9e49ad0\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-qqj6h" Apr 23 17:55:35.753516 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.753446 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/e8184bdb-fe3d-45b0-9c77-72fa68eb4767-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-64p9d\" (UID: \"e8184bdb-fe3d-45b0-9c77-72fa68eb4767\") " pod="openshift-insights/insights-runtime-extractor-64p9d" Apr 23 17:55:35.753516 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.753476 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pzp8c\" (UniqueName: \"kubernetes.io/projected/e8184bdb-fe3d-45b0-9c77-72fa68eb4767-kube-api-access-pzp8c\") pod \"insights-runtime-extractor-64p9d\" (UID: \"e8184bdb-fe3d-45b0-9c77-72fa68eb4767\") " pod="openshift-insights/insights-runtime-extractor-64p9d" Apr 23 17:55:35.753516 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.753513 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/e8184bdb-fe3d-45b0-9c77-72fa68eb4767-crio-socket\") pod \"insights-runtime-extractor-64p9d\" (UID: \"e8184bdb-fe3d-45b0-9c77-72fa68eb4767\") " pod="openshift-insights/insights-runtime-extractor-64p9d" Apr 23 17:55:35.753672 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.753645 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/e8184bdb-fe3d-45b0-9c77-72fa68eb4767-crio-socket\") pod \"insights-runtime-extractor-64p9d\" (UID: \"e8184bdb-fe3d-45b0-9c77-72fa68eb4767\") " pod="openshift-insights/insights-runtime-extractor-64p9d" Apr 23 17:55:35.753716 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.753693 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/e8184bdb-fe3d-45b0-9c77-72fa68eb4767-data-volume\") pod \"insights-runtime-extractor-64p9d\" (UID: \"e8184bdb-fe3d-45b0-9c77-72fa68eb4767\") " pod="openshift-insights/insights-runtime-extractor-64p9d" Apr 23 17:55:35.754352 
ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.754275 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/e8184bdb-fe3d-45b0-9c77-72fa68eb4767-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-64p9d\" (UID: \"e8184bdb-fe3d-45b0-9c77-72fa68eb4767\") " pod="openshift-insights/insights-runtime-extractor-64p9d" Apr 23 17:55:35.756242 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.756203 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/e8184bdb-fe3d-45b0-9c77-72fa68eb4767-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-64p9d\" (UID: \"e8184bdb-fe3d-45b0-9c77-72fa68eb4767\") " pod="openshift-insights/insights-runtime-extractor-64p9d" Apr 23 17:55:35.756851 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.756806 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/8d0b2147-611c-458a-9d92-eae8e9e49ad0-tls-certificates\") pod \"prometheus-operator-admission-webhook-57cf98b594-qqj6h\" (UID: \"8d0b2147-611c-458a-9d92-eae8e9e49ad0\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-qqj6h" Apr 23 17:55:35.765893 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.765850 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzp8c\" (UniqueName: \"kubernetes.io/projected/e8184bdb-fe3d-45b0-9c77-72fa68eb4767-kube-api-access-pzp8c\") pod \"insights-runtime-extractor-64p9d\" (UID: \"e8184bdb-fe3d-45b0-9c77-72fa68eb4767\") " pod="openshift-insights/insights-runtime-extractor-64p9d" Apr 23 17:55:35.848667 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.848637 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-qqj6h" Apr 23 17:55:35.854796 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.854766 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-runtime-extractor-64p9d" Apr 23 17:55:35.859051 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.859022 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-6bcc868b7-8dvlx"] Apr 23 17:55:35.879331 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:35.879275 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-77f479db9b-7zsd9"] Apr 23 17:55:35.882382 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:55:35.882332 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa056e98_492b_4b91_86a6_f5ab60987ce5.slice/crio-9d3666f7dc5f61d09bb10bbbd14005f1b32a09423cbb98e1e76a06e1d825ae68 WatchSource:0}: Error finding container 9d3666f7dc5f61d09bb10bbbd14005f1b32a09423cbb98e1e76a06e1d825ae68: Status 404 returned error can't find the container with id 9d3666f7dc5f61d09bb10bbbd14005f1b32a09423cbb98e1e76a06e1d825ae68 Apr 23 17:55:36.002429 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.002407 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-qqj6h"] Apr 23 17:55:36.004221 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:55:36.004195 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d0b2147_611c_458a_9d92_eae8e9e49ad0.slice/crio-460ac6d4c214009883461a5ebaf7dea8f429a43c8af9985728a5b561786cb033 WatchSource:0}: Error finding container 460ac6d4c214009883461a5ebaf7dea8f429a43c8af9985728a5b561786cb033: Status 404 returned error can't find the container with id 460ac6d4c214009883461a5ebaf7dea8f429a43c8af9985728a5b561786cb033 Apr 23 17:55:36.010426 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.010403 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-6567d97b5d-pgfhh"] Apr 23 17:55:36.013893 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.013871 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6567d97b5d-pgfhh" Apr 23 17:55:36.023542 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.023522 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Apr 23 17:55:36.029513 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.029491 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-64p9d"] Apr 23 17:55:36.030917 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.030900 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6567d97b5d-pgfhh"] Apr 23 17:55:36.033159 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:55:36.033136 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8184bdb_fe3d_45b0_9c77_72fa68eb4767.slice/crio-c74641f2976ba29e1cd32ff04a425a01e1d59cf1f553f19a6ede6f6b63c0d6da WatchSource:0}: Error finding container c74641f2976ba29e1cd32ff04a425a01e1d59cf1f553f19a6ede6f6b63c0d6da: Status 404 returned error can't find the container with id c74641f2976ba29e1cd32ff04a425a01e1d59cf1f553f19a6ede6f6b63c0d6da Apr 23 17:55:36.056830 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.056805 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1a661fb3-1486-4d61-8791-258fdf538a89-console-serving-cert\") pod \"console-6567d97b5d-pgfhh\" (UID: \"1a661fb3-1486-4d61-8791-258fdf538a89\") " pod="openshift-console/console-6567d97b5d-pgfhh" Apr 23 17:55:36.056944 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.056842 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhmn5\" (UniqueName: \"kubernetes.io/projected/1a661fb3-1486-4d61-8791-258fdf538a89-kube-api-access-vhmn5\") pod \"console-6567d97b5d-pgfhh\" (UID: \"1a661fb3-1486-4d61-8791-258fdf538a89\") " pod="openshift-console/console-6567d97b5d-pgfhh" Apr 23 17:55:36.056944 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.056907 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1a661fb3-1486-4d61-8791-258fdf538a89-service-ca\") pod \"console-6567d97b5d-pgfhh\" (UID: \"1a661fb3-1486-4d61-8791-258fdf538a89\") " pod="openshift-console/console-6567d97b5d-pgfhh" Apr 23 17:55:36.056944 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.056939 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1a661fb3-1486-4d61-8791-258fdf538a89-console-oauth-config\") pod \"console-6567d97b5d-pgfhh\" (UID: \"1a661fb3-1486-4d61-8791-258fdf538a89\") " pod="openshift-console/console-6567d97b5d-pgfhh" Apr 23 17:55:36.057101 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.056963 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a661fb3-1486-4d61-8791-258fdf538a89-trusted-ca-bundle\") pod \"console-6567d97b5d-pgfhh\" (UID: \"1a661fb3-1486-4d61-8791-258fdf538a89\") " pod="openshift-console/console-6567d97b5d-pgfhh" Apr 23 17:55:36.057101 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.057005 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1a661fb3-1486-4d61-8791-258fdf538a89-console-config\") pod \"console-6567d97b5d-pgfhh\" (UID: \"1a661fb3-1486-4d61-8791-258fdf538a89\") " pod="openshift-console/console-6567d97b5d-pgfhh" Apr 23 17:55:36.057101 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.057063 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1a661fb3-1486-4d61-8791-258fdf538a89-oauth-serving-cert\") pod \"console-6567d97b5d-pgfhh\" (UID: \"1a661fb3-1486-4d61-8791-258fdf538a89\") " pod="openshift-console/console-6567d97b5d-pgfhh" Apr 23 17:55:36.157866 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.157799 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a661fb3-1486-4d61-8791-258fdf538a89-trusted-ca-bundle\") pod \"console-6567d97b5d-pgfhh\" (UID: \"1a661fb3-1486-4d61-8791-258fdf538a89\") " pod="openshift-console/console-6567d97b5d-pgfhh" Apr 23 17:55:36.157866 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.157841 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1a661fb3-1486-4d61-8791-258fdf538a89-console-config\") pod \"console-6567d97b5d-pgfhh\" (UID: \"1a661fb3-1486-4d61-8791-258fdf538a89\") " pod="openshift-console/console-6567d97b5d-pgfhh" Apr 23 17:55:36.158037 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.157875 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1a661fb3-1486-4d61-8791-258fdf538a89-oauth-serving-cert\") pod \"console-6567d97b5d-pgfhh\" (UID: \"1a661fb3-1486-4d61-8791-258fdf538a89\") " pod="openshift-console/console-6567d97b5d-pgfhh" Apr 23 17:55:36.158037 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.157895 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1a661fb3-1486-4d61-8791-258fdf538a89-console-serving-cert\") pod \"console-6567d97b5d-pgfhh\" (UID: \"1a661fb3-1486-4d61-8791-258fdf538a89\") " pod="openshift-console/console-6567d97b5d-pgfhh" Apr 23 17:55:36.158037 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.157913 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vhmn5\" (UniqueName: \"kubernetes.io/projected/1a661fb3-1486-4d61-8791-258fdf538a89-kube-api-access-vhmn5\") pod \"console-6567d97b5d-pgfhh\" (UID: \"1a661fb3-1486-4d61-8791-258fdf538a89\") " pod="openshift-console/console-6567d97b5d-pgfhh" Apr 23 17:55:36.158174 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.158062 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1a661fb3-1486-4d61-8791-258fdf538a89-service-ca\") pod \"console-6567d97b5d-pgfhh\" (UID: \"1a661fb3-1486-4d61-8791-258fdf538a89\") " pod="openshift-console/console-6567d97b5d-pgfhh" Apr 23 17:55:36.158174 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.158113 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1a661fb3-1486-4d61-8791-258fdf538a89-console-oauth-config\") pod \"console-6567d97b5d-pgfhh\" (UID: \"1a661fb3-1486-4d61-8791-258fdf538a89\") " 
pod="openshift-console/console-6567d97b5d-pgfhh" Apr 23 17:55:36.158785 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.158683 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1a661fb3-1486-4d61-8791-258fdf538a89-oauth-serving-cert\") pod \"console-6567d97b5d-pgfhh\" (UID: \"1a661fb3-1486-4d61-8791-258fdf538a89\") " pod="openshift-console/console-6567d97b5d-pgfhh" Apr 23 17:55:36.158785 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.158715 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1a661fb3-1486-4d61-8791-258fdf538a89-console-config\") pod \"console-6567d97b5d-pgfhh\" (UID: \"1a661fb3-1486-4d61-8791-258fdf538a89\") " pod="openshift-console/console-6567d97b5d-pgfhh" Apr 23 17:55:36.159149 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.159124 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1a661fb3-1486-4d61-8791-258fdf538a89-service-ca\") pod \"console-6567d97b5d-pgfhh\" (UID: \"1a661fb3-1486-4d61-8791-258fdf538a89\") " pod="openshift-console/console-6567d97b5d-pgfhh" Apr 23 17:55:36.159219 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.159165 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a661fb3-1486-4d61-8791-258fdf538a89-trusted-ca-bundle\") pod \"console-6567d97b5d-pgfhh\" (UID: \"1a661fb3-1486-4d61-8791-258fdf538a89\") " pod="openshift-console/console-6567d97b5d-pgfhh" Apr 23 17:55:36.160569 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.160551 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1a661fb3-1486-4d61-8791-258fdf538a89-console-serving-cert\") pod \"console-6567d97b5d-pgfhh\" (UID: \"1a661fb3-1486-4d61-8791-258fdf538a89\") " pod="openshift-console/console-6567d97b5d-pgfhh" Apr 23 17:55:36.160659 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.160588 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1a661fb3-1486-4d61-8791-258fdf538a89-console-oauth-config\") pod \"console-6567d97b5d-pgfhh\" (UID: \"1a661fb3-1486-4d61-8791-258fdf538a89\") " pod="openshift-console/console-6567d97b5d-pgfhh" Apr 23 17:55:36.169958 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.169940 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhmn5\" (UniqueName: \"kubernetes.io/projected/1a661fb3-1486-4d61-8791-258fdf538a89-kube-api-access-vhmn5\") pod \"console-6567d97b5d-pgfhh\" (UID: \"1a661fb3-1486-4d61-8791-258fdf538a89\") " pod="openshift-console/console-6567d97b5d-pgfhh" Apr 23 17:55:36.324566 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.324523 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6567d97b5d-pgfhh" Apr 23 17:55:36.508962 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.508925 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6567d97b5d-pgfhh"] Apr 23 17:55:36.514449 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:55:36.514398 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a661fb3_1486_4d61_8791_258fdf538a89.slice/crio-bc33747dac759d651623bd426d0fcd8c4b65d14b008835848d7c544590ee645a WatchSource:0}: Error finding container bc33747dac759d651623bd426d0fcd8c4b65d14b008835848d7c544590ee645a: Status 404 returned error can't find the container with id bc33747dac759d651623bd426d0fcd8c4b65d14b008835848d7c544590ee645a Apr 23 17:55:36.868548 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.868508 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6567d97b5d-pgfhh" event={"ID":"1a661fb3-1486-4d61-8791-258fdf538a89","Type":"ContainerStarted","Data":"bc33747dac759d651623bd426d0fcd8c4b65d14b008835848d7c544590ee645a"} Apr 23 17:55:36.870360 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.870263 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-6bcc868b7-8dvlx" event={"ID":"f54a175e-d59b-46e9-b245-82f3b11123d9","Type":"ContainerStarted","Data":"4b20ad2a4be9bfd37969f1dddd95726a3819b6c08f121300c733021268c0e93d"} Apr 23 17:55:36.872672 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.872600 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-64p9d" event={"ID":"e8184bdb-fe3d-45b0-9c77-72fa68eb4767","Type":"ContainerStarted","Data":"e7e8566efbad402842e7b1bde4ec04ab39cdc42c02e74d366a658bcb057b656c"} Apr 23 17:55:36.872672 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.872634 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-64p9d" event={"ID":"e8184bdb-fe3d-45b0-9c77-72fa68eb4767","Type":"ContainerStarted","Data":"c74641f2976ba29e1cd32ff04a425a01e1d59cf1f553f19a6ede6f6b63c0d6da"} Apr 23 17:55:36.874677 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.874620 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-qqj6h" event={"ID":"8d0b2147-611c-458a-9d92-eae8e9e49ad0","Type":"ContainerStarted","Data":"460ac6d4c214009883461a5ebaf7dea8f429a43c8af9985728a5b561786cb033"} Apr 23 17:55:36.876133 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:36.876102 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-77f479db9b-7zsd9" event={"ID":"aa056e98-492b-4b91-86a6-f5ab60987ce5","Type":"ContainerStarted","Data":"9d3666f7dc5f61d09bb10bbbd14005f1b32a09423cbb98e1e76a06e1d825ae68"} Apr 23 17:55:37.882476 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:37.882439 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-64p9d" event={"ID":"e8184bdb-fe3d-45b0-9c77-72fa68eb4767","Type":"ContainerStarted","Data":"300a02a9c9411ec8de0c4f43f0c9712c0650ee414992b7e194177d01cb91a06e"} Apr 23 17:55:37.884386 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:37.884348 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-qqj6h" 
event={"ID":"8d0b2147-611c-458a-9d92-eae8e9e49ad0","Type":"ContainerStarted","Data":"5222a927e0b3bdff1d31c4de0adcf41ead4e3d31735ddae2a2c3ba78384dec83"} Apr 23 17:55:37.884859 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:37.884831 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-qqj6h" Apr 23 17:55:37.892385 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:37.892361 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-qqj6h" Apr 23 17:55:37.922437 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:37.922139 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-qqj6h" podStartSLOduration=1.786408719 podStartE2EDuration="2.922122757s" podCreationTimestamp="2026-04-23 17:55:35 +0000 UTC" firstStartedPulling="2026-04-23 17:55:36.006045656 +0000 UTC m=+195.301324232" lastFinishedPulling="2026-04-23 17:55:37.141759681 +0000 UTC m=+196.437038270" observedRunningTime="2026-04-23 17:55:37.920181223 +0000 UTC m=+197.215459821" watchObservedRunningTime="2026-04-23 17:55:37.922122757 +0000 UTC m=+197.217401357" Apr 23 17:55:38.179467 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:38.177847 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-5676c8c784-z9wc5"] Apr 23 17:55:38.181323 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:38.180684 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-5676c8c784-z9wc5" Apr 23 17:55:38.183678 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:38.182858 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-tls\"" Apr 23 17:55:38.183678 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:38.183093 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-kube-rbac-proxy-config\"" Apr 23 17:55:38.183678 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:38.183433 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-dockercfg-tjvsd\"" Apr 23 17:55:38.183678 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:38.183638 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"metrics-client-ca\"" Apr 23 17:55:38.193064 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:38.192731 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-5676c8c784-z9wc5"] Apr 23 17:55:38.277228 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:38.276973 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d01bf3e-4061-4f32-a69a-11d933d7b9bc-prometheus-operator-tls\") pod \"prometheus-operator-5676c8c784-z9wc5\" (UID: \"9d01bf3e-4061-4f32-a69a-11d933d7b9bc\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-z9wc5" Apr 23 17:55:38.277228 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:38.277032 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/9d01bf3e-4061-4f32-a69a-11d933d7b9bc-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5676c8c784-z9wc5\" (UID: \"9d01bf3e-4061-4f32-a69a-11d933d7b9bc\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-z9wc5" Apr 23 17:55:38.277228 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:38.277076 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f246f\" (UniqueName: \"kubernetes.io/projected/9d01bf3e-4061-4f32-a69a-11d933d7b9bc-kube-api-access-f246f\") pod \"prometheus-operator-5676c8c784-z9wc5\" (UID: \"9d01bf3e-4061-4f32-a69a-11d933d7b9bc\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-z9wc5" Apr 23 17:55:38.277228 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:38.277192 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9d01bf3e-4061-4f32-a69a-11d933d7b9bc-metrics-client-ca\") pod \"prometheus-operator-5676c8c784-z9wc5\" (UID: \"9d01bf3e-4061-4f32-a69a-11d933d7b9bc\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-z9wc5" Apr 23 17:55:38.380496 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:38.378536 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9d01bf3e-4061-4f32-a69a-11d933d7b9bc-metrics-client-ca\") pod \"prometheus-operator-5676c8c784-z9wc5\" (UID: \"9d01bf3e-4061-4f32-a69a-11d933d7b9bc\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-z9wc5" Apr 23 17:55:38.380496 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:38.378600 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d01bf3e-4061-4f32-a69a-11d933d7b9bc-prometheus-operator-tls\") pod \"prometheus-operator-5676c8c784-z9wc5\" (UID: \"9d01bf3e-4061-4f32-a69a-11d933d7b9bc\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-z9wc5" Apr 23 17:55:38.380496 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:38.378636 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9d01bf3e-4061-4f32-a69a-11d933d7b9bc-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5676c8c784-z9wc5\" (UID: \"9d01bf3e-4061-4f32-a69a-11d933d7b9bc\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-z9wc5" Apr 23 17:55:38.380496 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:38.378670 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f246f\" (UniqueName: \"kubernetes.io/projected/9d01bf3e-4061-4f32-a69a-11d933d7b9bc-kube-api-access-f246f\") pod \"prometheus-operator-5676c8c784-z9wc5\" (UID: \"9d01bf3e-4061-4f32-a69a-11d933d7b9bc\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-z9wc5" Apr 23 17:55:38.380496 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:38.379730 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9d01bf3e-4061-4f32-a69a-11d933d7b9bc-metrics-client-ca\") pod \"prometheus-operator-5676c8c784-z9wc5\" (UID: \"9d01bf3e-4061-4f32-a69a-11d933d7b9bc\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-z9wc5" Apr 23 17:55:38.380496 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:38.379838 2566 
secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Apr 23 17:55:38.380496 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:38.379900 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d01bf3e-4061-4f32-a69a-11d933d7b9bc-prometheus-operator-tls podName:9d01bf3e-4061-4f32-a69a-11d933d7b9bc nodeName:}" failed. No retries permitted until 2026-04-23 17:55:38.879874134 +0000 UTC m=+198.175152712 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/9d01bf3e-4061-4f32-a69a-11d933d7b9bc-prometheus-operator-tls") pod "prometheus-operator-5676c8c784-z9wc5" (UID: "9d01bf3e-4061-4f32-a69a-11d933d7b9bc") : secret "prometheus-operator-tls" not found Apr 23 17:55:38.382995 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:38.382946 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9d01bf3e-4061-4f32-a69a-11d933d7b9bc-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5676c8c784-z9wc5\" (UID: \"9d01bf3e-4061-4f32-a69a-11d933d7b9bc\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-z9wc5" Apr 23 17:55:38.399267 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:38.399200 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f246f\" (UniqueName: \"kubernetes.io/projected/9d01bf3e-4061-4f32-a69a-11d933d7b9bc-kube-api-access-f246f\") pod \"prometheus-operator-5676c8c784-z9wc5\" (UID: \"9d01bf3e-4061-4f32-a69a-11d933d7b9bc\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-z9wc5" Apr 23 17:55:38.884801 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:38.884755 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d01bf3e-4061-4f32-a69a-11d933d7b9bc-prometheus-operator-tls\") pod \"prometheus-operator-5676c8c784-z9wc5\" (UID: \"9d01bf3e-4061-4f32-a69a-11d933d7b9bc\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-z9wc5" Apr 23 17:55:38.890319 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:38.889945 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d01bf3e-4061-4f32-a69a-11d933d7b9bc-prometheus-operator-tls\") pod \"prometheus-operator-5676c8c784-z9wc5\" (UID: \"9d01bf3e-4061-4f32-a69a-11d933d7b9bc\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-z9wc5" Apr 23 17:55:39.096187 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:39.096147 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-5676c8c784-z9wc5" Apr 23 17:55:39.456531 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:39.456493 2566 patch_prober.go:28] interesting pod/image-registry-7fb885f848-mqdhm container/registry namespace/openshift-image-registry: Liveness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 23 17:55:39.456712 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:39.456565 2566 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" podUID="a7cbc07c-c629-4c31-a456-4f9bf5b328f7" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 17:55:39.848625 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:39.848595 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-ptrbw" Apr 23 17:55:40.106675 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:40.106560 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-5676c8c784-z9wc5"] Apr 23 17:55:40.110440 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:55:40.110406 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d01bf3e_4061_4f32_a69a_11d933d7b9bc.slice/crio-18ab202e468a35a665ebe36b0a5ff2a4293720bb12cbb93ab223a144ad33999f WatchSource:0}: Error finding container 18ab202e468a35a665ebe36b0a5ff2a4293720bb12cbb93ab223a144ad33999f: Status 404 returned error can't find the container with id 18ab202e468a35a665ebe36b0a5ff2a4293720bb12cbb93ab223a144ad33999f Apr 23 17:55:40.750131 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:40.750049 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" Apr 23 17:55:40.896543 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:40.896488 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5676c8c784-z9wc5" event={"ID":"9d01bf3e-4061-4f32-a69a-11d933d7b9bc","Type":"ContainerStarted","Data":"18ab202e468a35a665ebe36b0a5ff2a4293720bb12cbb93ab223a144ad33999f"} Apr 23 17:55:40.899061 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:40.899029 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-64p9d" event={"ID":"e8184bdb-fe3d-45b0-9c77-72fa68eb4767","Type":"ContainerStarted","Data":"88b6db259455996c648952d65ab1e2bcff5ccd8bb042765a3b453f23f2d730c4"} Apr 23 17:55:40.903777 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:40.903745 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-77f479db9b-7zsd9" event={"ID":"aa056e98-492b-4b91-86a6-f5ab60987ce5","Type":"ContainerStarted","Data":"e4faf03d458daa408d624063fbb3c7a29f01351ac68547569c116f15d76a82c2"} Apr 23 17:55:40.905514 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:40.905492 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6567d97b5d-pgfhh" event={"ID":"1a661fb3-1486-4d61-8791-258fdf538a89","Type":"ContainerStarted","Data":"ea2b2d93690546a960b04038432daa982378226b9393dd3d52642b883f56b176"} Apr 23 17:55:40.920970 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:40.920911 2566 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-insights/insights-runtime-extractor-64p9d" podStartSLOduration=2.07422743 podStartE2EDuration="5.920895038s" podCreationTimestamp="2026-04-23 17:55:35 +0000 UTC" firstStartedPulling="2026-04-23 17:55:36.097054945 +0000 UTC m=+195.392333520" lastFinishedPulling="2026-04-23 17:55:39.94372254 +0000 UTC m=+199.239001128" observedRunningTime="2026-04-23 17:55:40.918995964 +0000 UTC m=+200.214274562" watchObservedRunningTime="2026-04-23 17:55:40.920895038 +0000 UTC m=+200.216173636" Apr 23 17:55:40.945855 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:40.945809 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-77f479db9b-7zsd9" podStartSLOduration=1.874669526 podStartE2EDuration="5.94579107s" podCreationTimestamp="2026-04-23 17:55:35 +0000 UTC" firstStartedPulling="2026-04-23 17:55:35.884491218 +0000 UTC m=+195.179769794" lastFinishedPulling="2026-04-23 17:55:39.955612749 +0000 UTC m=+199.250891338" observedRunningTime="2026-04-23 17:55:40.94364964 +0000 UTC m=+200.238928237" watchObservedRunningTime="2026-04-23 17:55:40.94579107 +0000 UTC m=+200.241069667" Apr 23 17:55:40.972192 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:40.972131 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6567d97b5d-pgfhh" podStartSLOduration=2.54513705 podStartE2EDuration="5.972117244s" podCreationTimestamp="2026-04-23 17:55:35 +0000 UTC" firstStartedPulling="2026-04-23 17:55:36.516839975 +0000 UTC m=+195.812118555" lastFinishedPulling="2026-04-23 17:55:39.943820158 +0000 UTC m=+199.239098749" observedRunningTime="2026-04-23 17:55:40.971228795 +0000 UTC m=+200.266507419" watchObservedRunningTime="2026-04-23 17:55:40.972117244 +0000 UTC m=+200.267395841" Apr 23 17:55:41.911384 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:41.911340 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5676c8c784-z9wc5" event={"ID":"9d01bf3e-4061-4f32-a69a-11d933d7b9bc","Type":"ContainerStarted","Data":"5a554db1981187f76b57c687e4ee9d6e3e2b9e17cbd11ba89a21f317908f788f"} Apr 23 17:55:41.911384 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:41.911386 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5676c8c784-z9wc5" event={"ID":"9d01bf3e-4061-4f32-a69a-11d933d7b9bc","Type":"ContainerStarted","Data":"5b3993a48064491f93da87459f3b09cf6ca4f9fe61601e605a626557f0b63f5a"} Apr 23 17:55:41.935765 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:41.935706 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-5676c8c784-z9wc5" podStartSLOduration=2.8242478 podStartE2EDuration="3.935686514s" podCreationTimestamp="2026-04-23 17:55:38 +0000 UTC" firstStartedPulling="2026-04-23 17:55:40.113504344 +0000 UTC m=+199.408782925" lastFinishedPulling="2026-04-23 17:55:41.224943051 +0000 UTC m=+200.520221639" observedRunningTime="2026-04-23 17:55:41.933081607 +0000 UTC m=+201.228360203" watchObservedRunningTime="2026-04-23 17:55:41.935686514 +0000 UTC m=+201.230965112" Apr 23 17:55:43.633282 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.632045 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-9d44df66c-8jw2j"] Apr 23 17:55:43.636747 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.636713 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-9d44df66c-8jw2j" Apr 23 17:55:43.641984 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.641569 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"openshift-state-metrics-tls\"" Apr 23 17:55:43.641984 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.641578 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"openshift-state-metrics-kube-rbac-proxy-config\"" Apr 23 17:55:43.641984 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.641849 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"openshift-state-metrics-dockercfg-92qq9\"" Apr 23 17:55:43.656638 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.655091 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-69db897b98-5x2l6"] Apr 23 17:55:43.658862 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.658845 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-69db897b98-5x2l6" Apr 23 17:55:43.660428 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.660366 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-9d44df66c-8jw2j"] Apr 23 17:55:43.664285 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.664107 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"kube-state-metrics-tls\"" Apr 23 17:55:43.665196 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.664728 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"kube-state-metrics-dockercfg-mqh8z\"" Apr 23 17:55:43.665196 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.665011 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kube-state-metrics-custom-resource-state-configmap\"" Apr 23 17:55:43.665660 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.665435 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"kube-state-metrics-kube-rbac-proxy-config\"" Apr 23 17:55:43.668264 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.668244 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-q6dlk"] Apr 23 17:55:43.678883 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.678861 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-69db897b98-5x2l6"] Apr 23 17:55:43.679272 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.679255 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-q6dlk" Apr 23 17:55:43.683204 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.683178 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-dockercfg-ctqbm\"" Apr 23 17:55:43.683420 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.683404 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-tls\"" Apr 23 17:55:43.685920 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.685738 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-kube-rbac-proxy-config\"" Apr 23 17:55:43.686337 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.686134 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"node-exporter-accelerators-collector-config\"" Apr 23 17:55:43.727716 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.727017 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85vxx\" (UniqueName: \"kubernetes.io/projected/d716c310-cac3-4f4a-9142-7e64ec9b5023-kube-api-access-85vxx\") pod \"kube-state-metrics-69db897b98-5x2l6\" (UID: \"d716c310-cac3-4f4a-9142-7e64ec9b5023\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-5x2l6" Apr 23 17:55:43.727716 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.727082 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/d716c310-cac3-4f4a-9142-7e64ec9b5023-kube-state-metrics-tls\") pod \"kube-state-metrics-69db897b98-5x2l6\" (UID: \"d716c310-cac3-4f4a-9142-7e64ec9b5023\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-5x2l6" Apr 23 17:55:43.727716 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.727126 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a2590fc7-19e5-4364-9e78-dd69392e0609-openshift-state-metrics-tls\") pod \"openshift-state-metrics-9d44df66c-8jw2j\" (UID: \"a2590fc7-19e5-4364-9e78-dd69392e0609\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-8jw2j" Apr 23 17:55:43.727716 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.727167 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d716c310-cac3-4f4a-9142-7e64ec9b5023-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-69db897b98-5x2l6\" (UID: \"d716c310-cac3-4f4a-9142-7e64ec9b5023\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-5x2l6" Apr 23 17:55:43.727716 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.727230 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr95n\" (UniqueName: \"kubernetes.io/projected/a2590fc7-19e5-4364-9e78-dd69392e0609-kube-api-access-nr95n\") pod \"openshift-state-metrics-9d44df66c-8jw2j\" (UID: \"a2590fc7-19e5-4364-9e78-dd69392e0609\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-8jw2j" Apr 23 17:55:43.727716 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.727326 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a2590fc7-19e5-4364-9e78-dd69392e0609-metrics-client-ca\") pod \"openshift-state-metrics-9d44df66c-8jw2j\" (UID: \"a2590fc7-19e5-4364-9e78-dd69392e0609\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-8jw2j" Apr 23 17:55:43.727716 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.727364 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/d716c310-cac3-4f4a-9142-7e64ec9b5023-volume-directive-shadow\") pod \"kube-state-metrics-69db897b98-5x2l6\" (UID: \"d716c310-cac3-4f4a-9142-7e64ec9b5023\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-5x2l6" Apr 23 17:55:43.727716 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.727395 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/d716c310-cac3-4f4a-9142-7e64ec9b5023-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-69db897b98-5x2l6\" (UID: \"d716c310-cac3-4f4a-9142-7e64ec9b5023\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-5x2l6" Apr 23 17:55:43.727716 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.727450 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a2590fc7-19e5-4364-9e78-dd69392e0609-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-9d44df66c-8jw2j\" (UID: \"a2590fc7-19e5-4364-9e78-dd69392e0609\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-8jw2j" Apr 23 17:55:43.727716 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.727487 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d716c310-cac3-4f4a-9142-7e64ec9b5023-metrics-client-ca\") pod \"kube-state-metrics-69db897b98-5x2l6\" (UID: \"d716c310-cac3-4f4a-9142-7e64ec9b5023\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-5x2l6" Apr 23 17:55:43.828324 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.828268 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0b66c287-b88d-4f3f-8d42-f4162338bc96-root\") pod \"node-exporter-q6dlk\" (UID: \"0b66c287-b88d-4f3f-8d42-f4162338bc96\") " pod="openshift-monitoring/node-exporter-q6dlk" Apr 23 17:55:43.828493 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.828374 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/0b66c287-b88d-4f3f-8d42-f4162338bc96-node-exporter-accelerators-collector-config\") pod \"node-exporter-q6dlk\" (UID: \"0b66c287-b88d-4f3f-8d42-f4162338bc96\") " pod="openshift-monitoring/node-exporter-q6dlk" Apr 23 17:55:43.828493 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.828406 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a2590fc7-19e5-4364-9e78-dd69392e0609-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-9d44df66c-8jw2j\" (UID: 
\"a2590fc7-19e5-4364-9e78-dd69392e0609\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-8jw2j" Apr 23 17:55:43.828493 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.828435 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/0b66c287-b88d-4f3f-8d42-f4162338bc96-node-exporter-textfile\") pod \"node-exporter-q6dlk\" (UID: \"0b66c287-b88d-4f3f-8d42-f4162338bc96\") " pod="openshift-monitoring/node-exporter-q6dlk" Apr 23 17:55:43.828493 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.828473 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d716c310-cac3-4f4a-9142-7e64ec9b5023-metrics-client-ca\") pod \"kube-state-metrics-69db897b98-5x2l6\" (UID: \"d716c310-cac3-4f4a-9142-7e64ec9b5023\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-5x2l6" Apr 23 17:55:43.828648 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.828495 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0b66c287-b88d-4f3f-8d42-f4162338bc96-node-exporter-wtmp\") pod \"node-exporter-q6dlk\" (UID: \"0b66c287-b88d-4f3f-8d42-f4162338bc96\") " pod="openshift-monitoring/node-exporter-q6dlk" Apr 23 17:55:43.828648 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.828555 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0b66c287-b88d-4f3f-8d42-f4162338bc96-node-exporter-tls\") pod \"node-exporter-q6dlk\" (UID: \"0b66c287-b88d-4f3f-8d42-f4162338bc96\") " pod="openshift-monitoring/node-exporter-q6dlk" Apr 23 17:55:43.828648 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.828600 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0b66c287-b88d-4f3f-8d42-f4162338bc96-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-q6dlk\" (UID: \"0b66c287-b88d-4f3f-8d42-f4162338bc96\") " pod="openshift-monitoring/node-exporter-q6dlk" Apr 23 17:55:43.828648 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.828625 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-85vxx\" (UniqueName: \"kubernetes.io/projected/d716c310-cac3-4f4a-9142-7e64ec9b5023-kube-api-access-85vxx\") pod \"kube-state-metrics-69db897b98-5x2l6\" (UID: \"d716c310-cac3-4f4a-9142-7e64ec9b5023\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-5x2l6" Apr 23 17:55:43.828802 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.828679 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0b66c287-b88d-4f3f-8d42-f4162338bc96-sys\") pod \"node-exporter-q6dlk\" (UID: \"0b66c287-b88d-4f3f-8d42-f4162338bc96\") " pod="openshift-monitoring/node-exporter-q6dlk" Apr 23 17:55:43.828802 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.828711 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/d716c310-cac3-4f4a-9142-7e64ec9b5023-kube-state-metrics-tls\") pod \"kube-state-metrics-69db897b98-5x2l6\" (UID: \"d716c310-cac3-4f4a-9142-7e64ec9b5023\") " 
pod="openshift-monitoring/kube-state-metrics-69db897b98-5x2l6" Apr 23 17:55:43.828802 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.828743 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a2590fc7-19e5-4364-9e78-dd69392e0609-openshift-state-metrics-tls\") pod \"openshift-state-metrics-9d44df66c-8jw2j\" (UID: \"a2590fc7-19e5-4364-9e78-dd69392e0609\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-8jw2j" Apr 23 17:55:43.828955 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.828780 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0b66c287-b88d-4f3f-8d42-f4162338bc96-metrics-client-ca\") pod \"node-exporter-q6dlk\" (UID: \"0b66c287-b88d-4f3f-8d42-f4162338bc96\") " pod="openshift-monitoring/node-exporter-q6dlk" Apr 23 17:55:43.828955 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.828835 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tc75\" (UniqueName: \"kubernetes.io/projected/0b66c287-b88d-4f3f-8d42-f4162338bc96-kube-api-access-2tc75\") pod \"node-exporter-q6dlk\" (UID: \"0b66c287-b88d-4f3f-8d42-f4162338bc96\") " pod="openshift-monitoring/node-exporter-q6dlk" Apr 23 17:55:43.828955 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.828866 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d716c310-cac3-4f4a-9142-7e64ec9b5023-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-69db897b98-5x2l6\" (UID: \"d716c310-cac3-4f4a-9142-7e64ec9b5023\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-5x2l6" Apr 23 17:55:43.828955 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.828908 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nr95n\" (UniqueName: \"kubernetes.io/projected/a2590fc7-19e5-4364-9e78-dd69392e0609-kube-api-access-nr95n\") pod \"openshift-state-metrics-9d44df66c-8jw2j\" (UID: \"a2590fc7-19e5-4364-9e78-dd69392e0609\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-8jw2j" Apr 23 17:55:43.828955 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.828944 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a2590fc7-19e5-4364-9e78-dd69392e0609-metrics-client-ca\") pod \"openshift-state-metrics-9d44df66c-8jw2j\" (UID: \"a2590fc7-19e5-4364-9e78-dd69392e0609\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-8jw2j" Apr 23 17:55:43.829195 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.828967 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/d716c310-cac3-4f4a-9142-7e64ec9b5023-volume-directive-shadow\") pod \"kube-state-metrics-69db897b98-5x2l6\" (UID: \"d716c310-cac3-4f4a-9142-7e64ec9b5023\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-5x2l6" Apr 23 17:55:43.829195 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.828998 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: 
\"kubernetes.io/configmap/d716c310-cac3-4f4a-9142-7e64ec9b5023-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-69db897b98-5x2l6\" (UID: \"d716c310-cac3-4f4a-9142-7e64ec9b5023\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-5x2l6" Apr 23 17:55:43.832225 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.831663 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d716c310-cac3-4f4a-9142-7e64ec9b5023-metrics-client-ca\") pod \"kube-state-metrics-69db897b98-5x2l6\" (UID: \"d716c310-cac3-4f4a-9142-7e64ec9b5023\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-5x2l6" Apr 23 17:55:43.832225 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.832036 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/d716c310-cac3-4f4a-9142-7e64ec9b5023-volume-directive-shadow\") pod \"kube-state-metrics-69db897b98-5x2l6\" (UID: \"d716c310-cac3-4f4a-9142-7e64ec9b5023\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-5x2l6" Apr 23 17:55:43.832225 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.832153 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a2590fc7-19e5-4364-9e78-dd69392e0609-metrics-client-ca\") pod \"openshift-state-metrics-9d44df66c-8jw2j\" (UID: \"a2590fc7-19e5-4364-9e78-dd69392e0609\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-8jw2j" Apr 23 17:55:43.838199 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.832647 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/d716c310-cac3-4f4a-9142-7e64ec9b5023-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-69db897b98-5x2l6\" (UID: \"d716c310-cac3-4f4a-9142-7e64ec9b5023\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-5x2l6" Apr 23 17:55:43.841509 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.840211 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-85vxx\" (UniqueName: \"kubernetes.io/projected/d716c310-cac3-4f4a-9142-7e64ec9b5023-kube-api-access-85vxx\") pod \"kube-state-metrics-69db897b98-5x2l6\" (UID: \"d716c310-cac3-4f4a-9142-7e64ec9b5023\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-5x2l6" Apr 23 17:55:43.845139 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.845116 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a2590fc7-19e5-4364-9e78-dd69392e0609-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-9d44df66c-8jw2j\" (UID: \"a2590fc7-19e5-4364-9e78-dd69392e0609\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-8jw2j" Apr 23 17:55:43.845786 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.845747 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/d716c310-cac3-4f4a-9142-7e64ec9b5023-kube-state-metrics-tls\") pod \"kube-state-metrics-69db897b98-5x2l6\" (UID: \"d716c310-cac3-4f4a-9142-7e64ec9b5023\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-5x2l6" Apr 23 17:55:43.847617 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.846822 2566 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a2590fc7-19e5-4364-9e78-dd69392e0609-openshift-state-metrics-tls\") pod \"openshift-state-metrics-9d44df66c-8jw2j\" (UID: \"a2590fc7-19e5-4364-9e78-dd69392e0609\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-8jw2j" Apr 23 17:55:43.847617 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.847194 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d716c310-cac3-4f4a-9142-7e64ec9b5023-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-69db897b98-5x2l6\" (UID: \"d716c310-cac3-4f4a-9142-7e64ec9b5023\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-5x2l6" Apr 23 17:55:43.847617 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.847581 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr95n\" (UniqueName: \"kubernetes.io/projected/a2590fc7-19e5-4364-9e78-dd69392e0609-kube-api-access-nr95n\") pod \"openshift-state-metrics-9d44df66c-8jw2j\" (UID: \"a2590fc7-19e5-4364-9e78-dd69392e0609\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-8jw2j" Apr 23 17:55:43.930216 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.930128 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0b66c287-b88d-4f3f-8d42-f4162338bc96-node-exporter-wtmp\") pod \"node-exporter-q6dlk\" (UID: \"0b66c287-b88d-4f3f-8d42-f4162338bc96\") " pod="openshift-monitoring/node-exporter-q6dlk" Apr 23 17:55:43.930216 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.930184 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0b66c287-b88d-4f3f-8d42-f4162338bc96-node-exporter-tls\") pod \"node-exporter-q6dlk\" (UID: \"0b66c287-b88d-4f3f-8d42-f4162338bc96\") " pod="openshift-monitoring/node-exporter-q6dlk" Apr 23 17:55:43.930460 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.930234 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0b66c287-b88d-4f3f-8d42-f4162338bc96-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-q6dlk\" (UID: \"0b66c287-b88d-4f3f-8d42-f4162338bc96\") " pod="openshift-monitoring/node-exporter-q6dlk" Apr 23 17:55:43.930460 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.930283 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0b66c287-b88d-4f3f-8d42-f4162338bc96-sys\") pod \"node-exporter-q6dlk\" (UID: \"0b66c287-b88d-4f3f-8d42-f4162338bc96\") " pod="openshift-monitoring/node-exporter-q6dlk" Apr 23 17:55:43.930460 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.930354 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0b66c287-b88d-4f3f-8d42-f4162338bc96-metrics-client-ca\") pod \"node-exporter-q6dlk\" (UID: \"0b66c287-b88d-4f3f-8d42-f4162338bc96\") " pod="openshift-monitoring/node-exporter-q6dlk" Apr 23 17:55:43.930460 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.930380 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2tc75\" (UniqueName: 
\"kubernetes.io/projected/0b66c287-b88d-4f3f-8d42-f4162338bc96-kube-api-access-2tc75\") pod \"node-exporter-q6dlk\" (UID: \"0b66c287-b88d-4f3f-8d42-f4162338bc96\") " pod="openshift-monitoring/node-exporter-q6dlk" Apr 23 17:55:43.930460 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.930377 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0b66c287-b88d-4f3f-8d42-f4162338bc96-node-exporter-wtmp\") pod \"node-exporter-q6dlk\" (UID: \"0b66c287-b88d-4f3f-8d42-f4162338bc96\") " pod="openshift-monitoring/node-exporter-q6dlk" Apr 23 17:55:43.930460 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.930449 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0b66c287-b88d-4f3f-8d42-f4162338bc96-root\") pod \"node-exporter-q6dlk\" (UID: \"0b66c287-b88d-4f3f-8d42-f4162338bc96\") " pod="openshift-monitoring/node-exporter-q6dlk" Apr 23 17:55:43.930734 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.930478 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/0b66c287-b88d-4f3f-8d42-f4162338bc96-node-exporter-accelerators-collector-config\") pod \"node-exporter-q6dlk\" (UID: \"0b66c287-b88d-4f3f-8d42-f4162338bc96\") " pod="openshift-monitoring/node-exporter-q6dlk" Apr 23 17:55:43.930734 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.930510 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/0b66c287-b88d-4f3f-8d42-f4162338bc96-node-exporter-textfile\") pod \"node-exporter-q6dlk\" (UID: \"0b66c287-b88d-4f3f-8d42-f4162338bc96\") " pod="openshift-monitoring/node-exporter-q6dlk" Apr 23 17:55:43.930734 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:43.930519 2566 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found Apr 23 17:55:43.930734 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:43.930584 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b66c287-b88d-4f3f-8d42-f4162338bc96-node-exporter-tls podName:0b66c287-b88d-4f3f-8d42-f4162338bc96 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:44.430561705 +0000 UTC m=+203.725840284 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/0b66c287-b88d-4f3f-8d42-f4162338bc96-node-exporter-tls") pod "node-exporter-q6dlk" (UID: "0b66c287-b88d-4f3f-8d42-f4162338bc96") : secret "node-exporter-tls" not found Apr 23 17:55:43.930952 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.930835 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/0b66c287-b88d-4f3f-8d42-f4162338bc96-node-exporter-textfile\") pod \"node-exporter-q6dlk\" (UID: \"0b66c287-b88d-4f3f-8d42-f4162338bc96\") " pod="openshift-monitoring/node-exporter-q6dlk" Apr 23 17:55:43.931472 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.931422 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0b66c287-b88d-4f3f-8d42-f4162338bc96-sys\") pod \"node-exporter-q6dlk\" (UID: \"0b66c287-b88d-4f3f-8d42-f4162338bc96\") " pod="openshift-monitoring/node-exporter-q6dlk" Apr 23 17:55:43.931472 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.931430 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0b66c287-b88d-4f3f-8d42-f4162338bc96-metrics-client-ca\") pod \"node-exporter-q6dlk\" (UID: \"0b66c287-b88d-4f3f-8d42-f4162338bc96\") " pod="openshift-monitoring/node-exporter-q6dlk" Apr 23 17:55:43.931663 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.931546 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0b66c287-b88d-4f3f-8d42-f4162338bc96-root\") pod \"node-exporter-q6dlk\" (UID: \"0b66c287-b88d-4f3f-8d42-f4162338bc96\") " pod="openshift-monitoring/node-exporter-q6dlk" Apr 23 17:55:43.931663 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.931548 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/0b66c287-b88d-4f3f-8d42-f4162338bc96-node-exporter-accelerators-collector-config\") pod \"node-exporter-q6dlk\" (UID: \"0b66c287-b88d-4f3f-8d42-f4162338bc96\") " pod="openshift-monitoring/node-exporter-q6dlk" Apr 23 17:55:43.935862 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.935814 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0b66c287-b88d-4f3f-8d42-f4162338bc96-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-q6dlk\" (UID: \"0b66c287-b88d-4f3f-8d42-f4162338bc96\") " pod="openshift-monitoring/node-exporter-q6dlk" Apr 23 17:55:43.949816 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.949730 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-9d44df66c-8jw2j" Apr 23 17:55:43.954644 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.954618 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tc75\" (UniqueName: \"kubernetes.io/projected/0b66c287-b88d-4f3f-8d42-f4162338bc96-kube-api-access-2tc75\") pod \"node-exporter-q6dlk\" (UID: \"0b66c287-b88d-4f3f-8d42-f4162338bc96\") " pod="openshift-monitoring/node-exporter-q6dlk" Apr 23 17:55:43.989753 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:43.987821 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-69db897b98-5x2l6" Apr 23 17:55:44.133521 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.133485 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-9d44df66c-8jw2j"] Apr 23 17:55:44.137486 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:55:44.137431 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2590fc7_19e5_4364_9e78_dd69392e0609.slice/crio-099b35768b420d0d7477b915c29591c145d7265757793cfe5910e9033a6badb3 WatchSource:0}: Error finding container 099b35768b420d0d7477b915c29591c145d7265757793cfe5910e9033a6badb3: Status 404 returned error can't find the container with id 099b35768b420d0d7477b915c29591c145d7265757793cfe5910e9033a6badb3 Apr 23 17:55:44.166924 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.166896 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-69db897b98-5x2l6"] Apr 23 17:55:44.171437 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:55:44.171405 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd716c310_cac3_4f4a_9142_7e64ec9b5023.slice/crio-816c8d4e8c5bea27d93ece01089f16108cef3d6db22536ee3cbdcb2c5b814ca4 WatchSource:0}: Error finding container 816c8d4e8c5bea27d93ece01089f16108cef3d6db22536ee3cbdcb2c5b814ca4: Status 404 returned error can't find the container with id 816c8d4e8c5bea27d93ece01089f16108cef3d6db22536ee3cbdcb2c5b814ca4 Apr 23 17:55:44.436294 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.436193 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0b66c287-b88d-4f3f-8d42-f4162338bc96-node-exporter-tls\") pod \"node-exporter-q6dlk\" (UID: \"0b66c287-b88d-4f3f-8d42-f4162338bc96\") " pod="openshift-monitoring/node-exporter-q6dlk" Apr 23 17:55:44.439706 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.439676 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0b66c287-b88d-4f3f-8d42-f4162338bc96-node-exporter-tls\") pod \"node-exporter-q6dlk\" (UID: \"0b66c287-b88d-4f3f-8d42-f4162338bc96\") " pod="openshift-monitoring/node-exporter-q6dlk" Apr 23 17:55:44.599293 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.599251 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-q6dlk" Apr 23 17:55:44.611969 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:55:44.611793 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b66c287_b88d_4f3f_8d42_f4162338bc96.slice/crio-ee75163be6ca343f05316c8bb7242a9ae605285b96ce67f22642c1b31b30a84d WatchSource:0}: Error finding container ee75163be6ca343f05316c8bb7242a9ae605285b96ce67f22642c1b31b30a84d: Status 404 returned error can't find the container with id ee75163be6ca343f05316c8bb7242a9ae605285b96ce67f22642c1b31b30a84d Apr 23 17:55:44.837333 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.836708 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 23 17:55:44.852265 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.848298 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:44.857191 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.857160 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-web-config\"" Apr 23 17:55:44.860637 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.859646 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy-web\"" Apr 23 17:55:44.860637 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.859859 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy\"" Apr 23 17:55:44.860637 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.859879 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"alertmanager-trusted-ca-bundle\"" Apr 23 17:55:44.860637 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.860117 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-generated\"" Apr 23 17:55:44.860637 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.860200 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-tls-assets-0\"" Apr 23 17:55:44.860637 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.860323 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy-metric\"" Apr 23 17:55:44.860637 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.860404 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-tls\"" Apr 23 17:55:44.860637 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.860496 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-cluster-tls-config\"" Apr 23 17:55:44.863072 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.863031 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 23 17:55:44.863466 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.863274 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-dockercfg-4lbfq\"" Apr 23 17:55:44.933470 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.933361 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-9d44df66c-8jw2j" event={"ID":"a2590fc7-19e5-4364-9e78-dd69392e0609","Type":"ContainerStarted","Data":"a06b0122176a2b6a66d781a21d26a782a3214c8bde7b466158486c8425342839"} Apr 23 17:55:44.933470 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.933403 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-9d44df66c-8jw2j" event={"ID":"a2590fc7-19e5-4364-9e78-dd69392e0609","Type":"ContainerStarted","Data":"2de014e16c3846b57565c2aac7df9eebed8288230df94709538c00d66b3af4ea"} Apr 23 17:55:44.933470 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.933418 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-9d44df66c-8jw2j" event={"ID":"a2590fc7-19e5-4364-9e78-dd69392e0609","Type":"ContainerStarted","Data":"099b35768b420d0d7477b915c29591c145d7265757793cfe5910e9033a6badb3"} Apr 23 
17:55:44.935172 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.935107 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-69db897b98-5x2l6" event={"ID":"d716c310-cac3-4f4a-9142-7e64ec9b5023","Type":"ContainerStarted","Data":"816c8d4e8c5bea27d93ece01089f16108cef3d6db22536ee3cbdcb2c5b814ca4"} Apr 23 17:55:44.938669 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.938607 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-q6dlk" event={"ID":"0b66c287-b88d-4f3f-8d42-f4162338bc96","Type":"ContainerStarted","Data":"ee75163be6ca343f05316c8bb7242a9ae605285b96ce67f22642c1b31b30a84d"} Apr 23 17:55:44.941863 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.941232 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:44.941863 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.941286 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:44.941863 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.941339 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-config-volume\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:44.941863 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.941370 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:44.941863 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.941401 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:44.941863 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.941430 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:44.941863 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.941465 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tls-assets\" (UniqueName: \"kubernetes.io/projected/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-tls-assets\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:44.941863 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.941495 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:44.941863 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.941535 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:44.941863 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.941579 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-config-out\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:44.941863 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.941605 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:44.941863 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.941649 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xctfb\" (UniqueName: \"kubernetes.io/projected/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-kube-api-access-xctfb\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:44.941863 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:44.941685 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-web-config\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:45.043793 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.043024 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:45.043793 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.043083 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-tls-assets\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " 
pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:45.043793 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.043114 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:45.043793 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.043156 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:45.043793 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.043195 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-config-out\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:45.043793 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.043220 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:45.043793 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.043259 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xctfb\" (UniqueName: \"kubernetes.io/projected/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-kube-api-access-xctfb\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:45.043793 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.043293 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-web-config\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:45.043793 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.043336 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:45.043793 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.043380 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:45.043793 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.043420 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-config-volume\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:45.043793 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.043454 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:45.043793 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.043485 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:45.043793 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:45.043675 2566 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Apr 23 17:55:45.043793 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:45.043744 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-secret-alertmanager-main-tls podName:6744ec59-7a70-40e6-a9a1-f8baa8d972a2 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:45.543721292 +0000 UTC m=+204.838999869 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "6744ec59-7a70-40e6-a9a1-f8baa8d972a2") : secret "alertmanager-main-tls" not found Apr 23 17:55:45.044874 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.044434 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:45.044935 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.044878 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:45.046769 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.046696 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:45.053156 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.053070 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-web-config\") pod \"alertmanager-main-0\" (UID: 
\"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:45.055806 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.055781 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:45.056222 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.056178 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-config-volume\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:45.056783 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.056743 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:45.057103 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.057054 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-config-out\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:45.057630 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.057588 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:45.058029 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.057992 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-tls-assets\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:45.058531 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.058492 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:45.061439 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.061383 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xctfb\" (UniqueName: \"kubernetes.io/projected/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-kube-api-access-xctfb\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:45.549373 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.548777 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:45.549373 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:45.548944 2566 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Apr 23 17:55:45.549373 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:55:45.549020 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-secret-alertmanager-main-tls podName:6744ec59-7a70-40e6-a9a1-f8baa8d972a2 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:46.549001184 +0000 UTC m=+205.844279765 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "6744ec59-7a70-40e6-a9a1-f8baa8d972a2") : secret "alertmanager-main-tls" not found Apr 23 17:55:45.721162 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.721125 2566 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-77f479db9b-7zsd9" Apr 23 17:55:45.721894 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.721865 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-77f479db9b-7zsd9" Apr 23 17:55:45.727329 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.727280 2566 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-77f479db9b-7zsd9" Apr 23 17:55:45.746118 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.746085 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-59ffcb8856-jbbq9"] Apr 23 17:55:45.754496 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.754452 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9" Apr 23 17:55:45.762834 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.762809 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-kube-rbac-proxy-rules\"" Apr 23 17:55:45.764001 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.763383 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-kube-rbac-proxy-web\"" Apr 23 17:55:45.764001 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.763594 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-kube-rbac-proxy-metrics\"" Apr 23 17:55:45.764172 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.764154 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-kube-rbac-proxy\"" Apr 23 17:55:45.764386 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.764371 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-tls\"" Apr 23 17:55:45.764659 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.764408 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-dockercfg-2j6qf\"" Apr 23 17:55:45.765355 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.765289 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-grpc-tls-f8evpbsqpireu\"" Apr 23 17:55:45.769294 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.769274 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-59ffcb8856-jbbq9"] Apr 23 17:55:45.855111 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.855031 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c90f8891-3148-4c39-8562-85ceb05c9358-metrics-client-ca\") pod \"thanos-querier-59ffcb8856-jbbq9\" (UID: \"c90f8891-3148-4c39-8562-85ceb05c9358\") " pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9" Apr 23 17:55:45.855578 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.855118 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/c90f8891-3148-4c39-8562-85ceb05c9358-secret-thanos-querier-tls\") pod \"thanos-querier-59ffcb8856-jbbq9\" (UID: \"c90f8891-3148-4c39-8562-85ceb05c9358\") " pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9" Apr 23 17:55:45.855578 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.855328 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c90f8891-3148-4c39-8562-85ceb05c9358-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-59ffcb8856-jbbq9\" (UID: \"c90f8891-3148-4c39-8562-85ceb05c9358\") " pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9" Apr 23 17:55:45.855578 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.855367 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/c90f8891-3148-4c39-8562-85ceb05c9358-secret-grpc-tls\") pod 
\"thanos-querier-59ffcb8856-jbbq9\" (UID: \"c90f8891-3148-4c39-8562-85ceb05c9358\") " pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9" Apr 23 17:55:45.855578 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.855417 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c90f8891-3148-4c39-8562-85ceb05c9358-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-59ffcb8856-jbbq9\" (UID: \"c90f8891-3148-4c39-8562-85ceb05c9358\") " pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9" Apr 23 17:55:45.855578 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.855464 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/c90f8891-3148-4c39-8562-85ceb05c9358-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-59ffcb8856-jbbq9\" (UID: \"c90f8891-3148-4c39-8562-85ceb05c9358\") " pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9" Apr 23 17:55:45.855578 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.855529 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/c90f8891-3148-4c39-8562-85ceb05c9358-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-59ffcb8856-jbbq9\" (UID: \"c90f8891-3148-4c39-8562-85ceb05c9358\") " pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9" Apr 23 17:55:45.855578 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.855569 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6qtk\" (UniqueName: \"kubernetes.io/projected/c90f8891-3148-4c39-8562-85ceb05c9358-kube-api-access-t6qtk\") pod \"thanos-querier-59ffcb8856-jbbq9\" (UID: \"c90f8891-3148-4c39-8562-85ceb05c9358\") " pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9" Apr 23 17:55:45.947530 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.947496 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-77f479db9b-7zsd9" Apr 23 17:55:45.957726 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.956917 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c90f8891-3148-4c39-8562-85ceb05c9358-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-59ffcb8856-jbbq9\" (UID: \"c90f8891-3148-4c39-8562-85ceb05c9358\") " pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9" Apr 23 17:55:45.957726 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.956962 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/c90f8891-3148-4c39-8562-85ceb05c9358-secret-grpc-tls\") pod \"thanos-querier-59ffcb8856-jbbq9\" (UID: \"c90f8891-3148-4c39-8562-85ceb05c9358\") " pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9" Apr 23 17:55:45.957726 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.956999 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c90f8891-3148-4c39-8562-85ceb05c9358-secret-thanos-querier-kube-rbac-proxy\") pod 
\"thanos-querier-59ffcb8856-jbbq9\" (UID: \"c90f8891-3148-4c39-8562-85ceb05c9358\") " pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9" Apr 23 17:55:45.957726 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.957047 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/c90f8891-3148-4c39-8562-85ceb05c9358-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-59ffcb8856-jbbq9\" (UID: \"c90f8891-3148-4c39-8562-85ceb05c9358\") " pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9" Apr 23 17:55:45.957726 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.957080 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/c90f8891-3148-4c39-8562-85ceb05c9358-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-59ffcb8856-jbbq9\" (UID: \"c90f8891-3148-4c39-8562-85ceb05c9358\") " pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9" Apr 23 17:55:45.957726 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.957106 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t6qtk\" (UniqueName: \"kubernetes.io/projected/c90f8891-3148-4c39-8562-85ceb05c9358-kube-api-access-t6qtk\") pod \"thanos-querier-59ffcb8856-jbbq9\" (UID: \"c90f8891-3148-4c39-8562-85ceb05c9358\") " pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9" Apr 23 17:55:45.957726 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.957176 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c90f8891-3148-4c39-8562-85ceb05c9358-metrics-client-ca\") pod \"thanos-querier-59ffcb8856-jbbq9\" (UID: \"c90f8891-3148-4c39-8562-85ceb05c9358\") " pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9" Apr 23 17:55:45.957726 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.957205 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/c90f8891-3148-4c39-8562-85ceb05c9358-secret-thanos-querier-tls\") pod \"thanos-querier-59ffcb8856-jbbq9\" (UID: \"c90f8891-3148-4c39-8562-85ceb05c9358\") " pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9" Apr 23 17:55:45.960096 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.960037 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c90f8891-3148-4c39-8562-85ceb05c9358-metrics-client-ca\") pod \"thanos-querier-59ffcb8856-jbbq9\" (UID: \"c90f8891-3148-4c39-8562-85ceb05c9358\") " pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9" Apr 23 17:55:45.962634 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.962603 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/c90f8891-3148-4c39-8562-85ceb05c9358-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-59ffcb8856-jbbq9\" (UID: \"c90f8891-3148-4c39-8562-85ceb05c9358\") " pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9" Apr 23 17:55:45.962860 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.962811 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: 
\"kubernetes.io/secret/c90f8891-3148-4c39-8562-85ceb05c9358-secret-thanos-querier-tls\") pod \"thanos-querier-59ffcb8856-jbbq9\" (UID: \"c90f8891-3148-4c39-8562-85ceb05c9358\") " pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9" Apr 23 17:55:45.963514 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.963448 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/c90f8891-3148-4c39-8562-85ceb05c9358-secret-grpc-tls\") pod \"thanos-querier-59ffcb8856-jbbq9\" (UID: \"c90f8891-3148-4c39-8562-85ceb05c9358\") " pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9" Apr 23 17:55:45.963718 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.963691 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/c90f8891-3148-4c39-8562-85ceb05c9358-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-59ffcb8856-jbbq9\" (UID: \"c90f8891-3148-4c39-8562-85ceb05c9358\") " pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9" Apr 23 17:55:45.964994 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.964972 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c90f8891-3148-4c39-8562-85ceb05c9358-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-59ffcb8856-jbbq9\" (UID: \"c90f8891-3148-4c39-8562-85ceb05c9358\") " pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9" Apr 23 17:55:45.966122 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.966094 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c90f8891-3148-4c39-8562-85ceb05c9358-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-59ffcb8856-jbbq9\" (UID: \"c90f8891-3148-4c39-8562-85ceb05c9358\") " pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9" Apr 23 17:55:45.975785 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:45.975756 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6qtk\" (UniqueName: \"kubernetes.io/projected/c90f8891-3148-4c39-8562-85ceb05c9358-kube-api-access-t6qtk\") pod \"thanos-querier-59ffcb8856-jbbq9\" (UID: \"c90f8891-3148-4c39-8562-85ceb05c9358\") " pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9" Apr 23 17:55:46.068246 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:46.068211 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9" Apr 23 17:55:46.285509 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:46.285467 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-59ffcb8856-jbbq9"] Apr 23 17:55:46.292912 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:55:46.292772 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc90f8891_3148_4c39_8562_85ceb05c9358.slice/crio-7dd237cc411012425fb850959e737d88da2d8ce8a01ce47b33c09f22c94e63ff WatchSource:0}: Error finding container 7dd237cc411012425fb850959e737d88da2d8ce8a01ce47b33c09f22c94e63ff: Status 404 returned error can't find the container with id 7dd237cc411012425fb850959e737d88da2d8ce8a01ce47b33c09f22c94e63ff Apr 23 17:55:46.326082 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:46.325182 2566 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6567d97b5d-pgfhh" Apr 23 17:55:46.326445 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:46.326425 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-6567d97b5d-pgfhh" Apr 23 17:55:46.335054 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:46.334825 2566 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6567d97b5d-pgfhh" Apr 23 17:55:46.564360 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:46.564295 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:46.567359 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:46.567327 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:46.687205 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:46.687119 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:55:46.849811 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:46.849788 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 23 17:55:46.954082 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:46.953988 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-9d44df66c-8jw2j" event={"ID":"a2590fc7-19e5-4364-9e78-dd69392e0609","Type":"ContainerStarted","Data":"60becaccab75fe56837adb0d423f9350af2239bd338789eb8616482a43e692d3"} Apr 23 17:55:46.956394 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:46.956365 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9" event={"ID":"c90f8891-3148-4c39-8562-85ceb05c9358","Type":"ContainerStarted","Data":"7dd237cc411012425fb850959e737d88da2d8ce8a01ce47b33c09f22c94e63ff"} Apr 23 17:55:46.959172 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:46.959142 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-69db897b98-5x2l6" event={"ID":"d716c310-cac3-4f4a-9142-7e64ec9b5023","Type":"ContainerStarted","Data":"80e38767cbf190a065754bf2d32294c79bc9f61d0f34d35238ae9672df1f8c79"} Apr 23 17:55:46.959274 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:46.959178 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-69db897b98-5x2l6" event={"ID":"d716c310-cac3-4f4a-9142-7e64ec9b5023","Type":"ContainerStarted","Data":"5dfd2744f1547a752418f9c56871bc01ea5c777175124f7f0de60d2c557f1ef4"} Apr 23 17:55:46.959274 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:46.959193 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-69db897b98-5x2l6" event={"ID":"d716c310-cac3-4f4a-9142-7e64ec9b5023","Type":"ContainerStarted","Data":"7a390afc1f63c069cc26843495c77ea3b6373821cc23d2a60ed739553adae749"} Apr 23 17:55:46.962549 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:46.962521 2566 generic.go:358] "Generic (PLEG): container finished" podID="0b66c287-b88d-4f3f-8d42-f4162338bc96" containerID="15a3a7eaff46602bf9f81bda817c020914d7c5c2014109d7acc5950cda035cd9" exitCode=0 Apr 23 17:55:46.962753 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:46.962703 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-q6dlk" event={"ID":"0b66c287-b88d-4f3f-8d42-f4162338bc96","Type":"ContainerDied","Data":"15a3a7eaff46602bf9f81bda817c020914d7c5c2014109d7acc5950cda035cd9"} Apr 23 17:55:46.969270 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:46.969247 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6567d97b5d-pgfhh" Apr 23 17:55:46.978969 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:46.978428 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-9d44df66c-8jw2j" podStartSLOduration=2.156795186 podStartE2EDuration="3.978412611s" podCreationTimestamp="2026-04-23 17:55:43 +0000 UTC" firstStartedPulling="2026-04-23 17:55:44.291220422 +0000 UTC m=+203.586498996" lastFinishedPulling="2026-04-23 17:55:46.112837839 +0000 UTC m=+205.408116421" observedRunningTime="2026-04-23 17:55:46.975696668 +0000 UTC m=+206.270975270" watchObservedRunningTime="2026-04-23 17:55:46.978412611 +0000 UTC m=+206.273691209" Apr 23 17:55:47.021185 ip-10-0-136-172 kubenswrapper[2566]: 
Apr 23 17:55:47.046036 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:47.045983 2566 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-77f479db9b-7zsd9"]
Apr 23 17:55:51.786469 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:51.786433 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-jd2kh"
Apr 23 17:55:54.223288 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:55:54.223251 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6744ec59_7a70_40e6_a9a1_f8baa8d972a2.slice/crio-72d91ddf66aae56cef722baa491e4fb7a0d3307674eb202369b1a79a80a207bb WatchSource:0}: Error finding container 72d91ddf66aae56cef722baa491e4fb7a0d3307674eb202369b1a79a80a207bb: Status 404 returned error can't find the container with id 72d91ddf66aae56cef722baa491e4fb7a0d3307674eb202369b1a79a80a207bb
Apr 23 17:55:54.996071 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:54.996026 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6744ec59-7a70-40e6-a9a1-f8baa8d972a2","Type":"ContainerStarted","Data":"72d91ddf66aae56cef722baa491e4fb7a0d3307674eb202369b1a79a80a207bb"}
Apr 23 17:55:56.005636 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:56.005547 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9" event={"ID":"c90f8891-3148-4c39-8562-85ceb05c9358","Type":"ContainerStarted","Data":"41d04a20db068dfee372e4f4ebc05a5bfca3ddaa479599647fb1db45145f7a90"}
Apr 23 17:55:56.005636 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:56.005599 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9" event={"ID":"c90f8891-3148-4c39-8562-85ceb05c9358","Type":"ContainerStarted","Data":"e151a5491873b2d095f31dda5d2c89c70d26fb4df6e269906fa1e56cc9ccd718"}
Apr 23 17:55:56.005636 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:56.005614 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9" event={"ID":"c90f8891-3148-4c39-8562-85ceb05c9358","Type":"ContainerStarted","Data":"da8195e9a812245e0bd18a4352828c0ffe62365236efd6c225aba99d2959ee1c"}
Apr 23 17:55:56.007957 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:56.007922 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-q6dlk" event={"ID":"0b66c287-b88d-4f3f-8d42-f4162338bc96","Type":"ContainerStarted","Data":"cd55040963484adf66e20f26811d1bed3ebec943f7d0984d78050b970d48403a"}
Apr 23 17:55:56.008078 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:56.007964 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-q6dlk" event={"ID":"0b66c287-b88d-4f3f-8d42-f4162338bc96","Type":"ContainerStarted","Data":"4cb52e60b41995c55a9dcfd055c25d32020eccc748f68513441e42dfd10f7f6d"}
Apr 23 17:55:56.010408 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:56.009909 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-6bcc868b7-8dvlx" event={"ID":"f54a175e-d59b-46e9-b245-82f3b11123d9","Type":"ContainerStarted","Data":"f790088be6b63a00a531b4979a7e3167b5825e64ca16e58c1449dbfc4724da6d"}
Apr 23 17:55:56.010408 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:56.010204 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-6bcc868b7-8dvlx"
Apr 23 17:55:56.011610 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:56.011542 2566 generic.go:358] "Generic (PLEG): container finished" podID="6744ec59-7a70-40e6-a9a1-f8baa8d972a2" containerID="f5bafaa50293fecc8100171d231c9e24431cff88620976b55cfcb1fa0ece64b5" exitCode=0
Apr 23 17:55:56.011610 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:56.011584 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6744ec59-7a70-40e6-a9a1-f8baa8d972a2","Type":"ContainerDied","Data":"f5bafaa50293fecc8100171d231c9e24431cff88620976b55cfcb1fa0ece64b5"}
Apr 23 17:55:56.032497 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:56.032454 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-6bcc868b7-8dvlx"
Apr 23 17:55:56.044240 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:56.044171 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-q6dlk" podStartSLOduration=11.543700478 podStartE2EDuration="13.044150447s" podCreationTimestamp="2026-04-23 17:55:43 +0000 UTC" firstStartedPulling="2026-04-23 17:55:44.614389317 +0000 UTC m=+203.909667893" lastFinishedPulling="2026-04-23 17:55:46.114839283 +0000 UTC m=+205.410117862" observedRunningTime="2026-04-23 17:55:56.04129581 +0000 UTC m=+215.336574412" watchObservedRunningTime="2026-04-23 17:55:56.044150447 +0000 UTC m=+215.339429046"
Apr 23 17:55:56.070580 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:56.070518 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-6bcc868b7-8dvlx" podStartSLOduration=1.676643119 podStartE2EDuration="21.07050016s" podCreationTimestamp="2026-04-23 17:55:35 +0000 UTC" firstStartedPulling="2026-04-23 17:55:35.870872678 +0000 UTC m=+195.166151253" lastFinishedPulling="2026-04-23 17:55:55.264729716 +0000 UTC m=+214.560008294" observedRunningTime="2026-04-23 17:55:56.066879563 +0000 UTC m=+215.362158161" watchObservedRunningTime="2026-04-23 17:55:56.07050016 +0000 UTC m=+215.365778758"
Apr 23 17:55:59.029159 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:59.029120 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6744ec59-7a70-40e6-a9a1-f8baa8d972a2","Type":"ContainerStarted","Data":"95b4b742ed744ffc51d99d5190e200bdbffa18e8882ada15e20a6ccb29f31c23"}
Apr 23 17:55:59.029159 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:59.029160 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6744ec59-7a70-40e6-a9a1-f8baa8d972a2","Type":"ContainerStarted","Data":"ef94261cd9b1f4659c5bbaf957f9e3c50786be7ae62bac2c48e792c649a76f5a"}
Apr 23 17:55:59.029648 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:59.029177 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6744ec59-7a70-40e6-a9a1-f8baa8d972a2","Type":"ContainerStarted","Data":"1a621f25e71d89b4830d2ecab85465470298ed75662b600d2a3a9c3b301dddcc"}
Apr 23 17:55:59.029648 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:59.029192 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6744ec59-7a70-40e6-a9a1-f8baa8d972a2","Type":"ContainerStarted","Data":"ab3eeac3788aa033c3e452dfb87241066ea25c61db2254cc73a30c43ef057247"}
Apr 23 17:55:59.029648 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:59.029206 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6744ec59-7a70-40e6-a9a1-f8baa8d972a2","Type":"ContainerStarted","Data":"a77e0f0614523666140e0d95aa02dfe5b46404122ce480014a1b66ad3fe582b6"}
Apr 23 17:55:59.029648 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:59.029217 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6744ec59-7a70-40e6-a9a1-f8baa8d972a2","Type":"ContainerStarted","Data":"797f023a512a77f898f2b66624f1931b8a5e08bb0e49827ab5515305987fc76c"}
Apr 23 17:55:59.032508 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:59.032476 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9" event={"ID":"c90f8891-3148-4c39-8562-85ceb05c9358","Type":"ContainerStarted","Data":"6deecd6052ab64b0a3239b1f7f04c6d21a67f4ca2be4a34b0943af60fc3b7afd"}
Apr 23 17:55:59.032638 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:59.032516 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9" event={"ID":"c90f8891-3148-4c39-8562-85ceb05c9358","Type":"ContainerStarted","Data":"04e0dcee1d0dc90f55289147a1ea76e90cbe26a0ee1f6665d8fa4222ff89a0a3"}
Apr 23 17:55:59.032638 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:59.032529 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9" event={"ID":"c90f8891-3148-4c39-8562-85ceb05c9358","Type":"ContainerStarted","Data":"a2ff7255faff4a0b8132799c23688b5d22b37d0c574898bb9567a6d10ec14beb"}
Apr 23 17:55:59.032746 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:59.032698 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9"
Apr 23 17:55:59.060395 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:59.060335 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=11.330740539 podStartE2EDuration="15.060298391s" podCreationTimestamp="2026-04-23 17:55:44 +0000 UTC" firstStartedPulling="2026-04-23 17:55:54.225166421 +0000 UTC m=+213.520444996" lastFinishedPulling="2026-04-23 17:55:57.954724272 +0000 UTC m=+217.250002848" observedRunningTime="2026-04-23 17:55:59.056254853 +0000 UTC m=+218.351533451" watchObservedRunningTime="2026-04-23 17:55:59.060298391 +0000 UTC m=+218.355576992"
Apr 23 17:55:59.100190 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:55:59.100126 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9" podStartSLOduration=2.444195056 podStartE2EDuration="14.100096974s" podCreationTimestamp="2026-04-23 17:55:45 +0000 UTC" firstStartedPulling="2026-04-23 17:55:46.294853472 +0000 UTC m=+205.590132047" lastFinishedPulling="2026-04-23 17:55:57.950755375 +0000 UTC m=+217.246033965" observedRunningTime="2026-04-23 17:55:59.095908743 +0000 UTC m=+218.391187340" watchObservedRunningTime="2026-04-23 17:55:59.100096974 +0000 UTC m=+218.395375572"
Apr 23 17:56:00.042702 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:00.042676 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-59ffcb8856-jbbq9"
Apr 23 17:56:07.050664 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:07.050628 2566 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6567d97b5d-pgfhh"]
Apr 23 17:56:09.410420 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:09.410382 2566 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-7fb885f848-mqdhm"]
Apr 23 17:56:13.993242 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:13.993171 2566 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-console/console-77f479db9b-7zsd9" podUID="aa056e98-492b-4b91-86a6-f5ab60987ce5" containerName="console" containerID="cri-o://e4faf03d458daa408d624063fbb3c7a29f01351ac68547569c116f15d76a82c2" gracePeriod=15
Apr 23 17:56:14.269229 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:14.269202 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-77f479db9b-7zsd9_aa056e98-492b-4b91-86a6-f5ab60987ce5/console/0.log"
Apr 23 17:56:14.269367 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:14.269275 2566 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-77f479db9b-7zsd9"
Apr 23 17:56:14.331038 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:14.331005 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/aa056e98-492b-4b91-86a6-f5ab60987ce5-console-oauth-config\") pod \"aa056e98-492b-4b91-86a6-f5ab60987ce5\" (UID: \"aa056e98-492b-4b91-86a6-f5ab60987ce5\") "
Apr 23 17:56:14.331196 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:14.331055 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/aa056e98-492b-4b91-86a6-f5ab60987ce5-console-config\") pod \"aa056e98-492b-4b91-86a6-f5ab60987ce5\" (UID: \"aa056e98-492b-4b91-86a6-f5ab60987ce5\") "
Apr 23 17:56:14.331196 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:14.331110 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aa056e98-492b-4b91-86a6-f5ab60987ce5-service-ca\") pod \"aa056e98-492b-4b91-86a6-f5ab60987ce5\" (UID: \"aa056e98-492b-4b91-86a6-f5ab60987ce5\") "
Apr 23 17:56:14.331196 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:14.331131 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa056e98-492b-4b91-86a6-f5ab60987ce5-console-serving-cert\") pod \"aa056e98-492b-4b91-86a6-f5ab60987ce5\" (UID: \"aa056e98-492b-4b91-86a6-f5ab60987ce5\") "
Apr 23 17:56:14.331196 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:14.331160 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7gkf\" (UniqueName: \"kubernetes.io/projected/aa056e98-492b-4b91-86a6-f5ab60987ce5-kube-api-access-w7gkf\") pod \"aa056e98-492b-4b91-86a6-f5ab60987ce5\" (UID: \"aa056e98-492b-4b91-86a6-f5ab60987ce5\") "
Apr 23 17:56:14.331444 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:14.331221 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/aa056e98-492b-4b91-86a6-f5ab60987ce5-oauth-serving-cert\") pod \"aa056e98-492b-4b91-86a6-f5ab60987ce5\" (UID: \"aa056e98-492b-4b91-86a6-f5ab60987ce5\") "
Apr 23 17:56:14.331643 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:14.331613 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa056e98-492b-4b91-86a6-f5ab60987ce5-service-ca" (OuterVolumeSpecName: "service-ca") pod "aa056e98-492b-4b91-86a6-f5ab60987ce5" (UID: "aa056e98-492b-4b91-86a6-f5ab60987ce5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 17:56:14.331643 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:14.331633 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa056e98-492b-4b91-86a6-f5ab60987ce5-console-config" (OuterVolumeSpecName: "console-config") pod "aa056e98-492b-4b91-86a6-f5ab60987ce5" (UID: "aa056e98-492b-4b91-86a6-f5ab60987ce5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 17:56:14.331801 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:14.331742 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa056e98-492b-4b91-86a6-f5ab60987ce5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "aa056e98-492b-4b91-86a6-f5ab60987ce5" (UID: "aa056e98-492b-4b91-86a6-f5ab60987ce5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 17:56:14.333601 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:14.333582 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa056e98-492b-4b91-86a6-f5ab60987ce5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "aa056e98-492b-4b91-86a6-f5ab60987ce5" (UID: "aa056e98-492b-4b91-86a6-f5ab60987ce5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 17:56:14.333874 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:14.333844 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa056e98-492b-4b91-86a6-f5ab60987ce5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "aa056e98-492b-4b91-86a6-f5ab60987ce5" (UID: "aa056e98-492b-4b91-86a6-f5ab60987ce5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 17:56:14.333874 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:14.333849 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa056e98-492b-4b91-86a6-f5ab60987ce5-kube-api-access-w7gkf" (OuterVolumeSpecName: "kube-api-access-w7gkf") pod "aa056e98-492b-4b91-86a6-f5ab60987ce5" (UID: "aa056e98-492b-4b91-86a6-f5ab60987ce5"). InnerVolumeSpecName "kube-api-access-w7gkf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 23 17:56:14.432164 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:14.432128 2566 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/aa056e98-492b-4b91-86a6-f5ab60987ce5-oauth-serving-cert\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\""
Apr 23 17:56:14.432164 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:14.432158 2566 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/aa056e98-492b-4b91-86a6-f5ab60987ce5-console-oauth-config\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\""
Apr 23 17:56:14.432164 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:14.432169 2566 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/aa056e98-492b-4b91-86a6-f5ab60987ce5-console-config\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\""
Apr 23 17:56:14.432164 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:14.432177 2566 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aa056e98-492b-4b91-86a6-f5ab60987ce5-service-ca\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\""
Apr 23 17:56:14.432475 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:14.432186 2566 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa056e98-492b-4b91-86a6-f5ab60987ce5-console-serving-cert\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\""
Apr 23 17:56:14.432475 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:14.432196 2566 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w7gkf\" (UniqueName: \"kubernetes.io/projected/aa056e98-492b-4b91-86a6-f5ab60987ce5-kube-api-access-w7gkf\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\""
Apr 23 17:56:15.091108 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:15.091071 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-77f479db9b-7zsd9_aa056e98-492b-4b91-86a6-f5ab60987ce5/console/0.log"
Apr 23 17:56:15.091517 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:15.091116 2566 generic.go:358] "Generic (PLEG): container finished" podID="aa056e98-492b-4b91-86a6-f5ab60987ce5" containerID="e4faf03d458daa408d624063fbb3c7a29f01351ac68547569c116f15d76a82c2" exitCode=2
Apr 23 17:56:15.091517 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:15.091212 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-77f479db9b-7zsd9" event={"ID":"aa056e98-492b-4b91-86a6-f5ab60987ce5","Type":"ContainerDied","Data":"e4faf03d458daa408d624063fbb3c7a29f01351ac68547569c116f15d76a82c2"}
Apr 23 17:56:15.091517 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:15.091240 2566 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-77f479db9b-7zsd9"
Need to start a new one" pod="openshift-console/console-77f479db9b-7zsd9" Apr 23 17:56:15.091517 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:15.091262 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-77f479db9b-7zsd9" event={"ID":"aa056e98-492b-4b91-86a6-f5ab60987ce5","Type":"ContainerDied","Data":"9d3666f7dc5f61d09bb10bbbd14005f1b32a09423cbb98e1e76a06e1d825ae68"} Apr 23 17:56:15.091517 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:15.091282 2566 scope.go:117] "RemoveContainer" containerID="e4faf03d458daa408d624063fbb3c7a29f01351ac68547569c116f15d76a82c2" Apr 23 17:56:15.105780 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:15.105759 2566 scope.go:117] "RemoveContainer" containerID="e4faf03d458daa408d624063fbb3c7a29f01351ac68547569c116f15d76a82c2" Apr 23 17:56:15.106068 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:56:15.106036 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4faf03d458daa408d624063fbb3c7a29f01351ac68547569c116f15d76a82c2\": container with ID starting with e4faf03d458daa408d624063fbb3c7a29f01351ac68547569c116f15d76a82c2 not found: ID does not exist" containerID="e4faf03d458daa408d624063fbb3c7a29f01351ac68547569c116f15d76a82c2" Apr 23 17:56:15.106125 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:15.106078 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4faf03d458daa408d624063fbb3c7a29f01351ac68547569c116f15d76a82c2"} err="failed to get container status \"e4faf03d458daa408d624063fbb3c7a29f01351ac68547569c116f15d76a82c2\": rpc error: code = NotFound desc = could not find container \"e4faf03d458daa408d624063fbb3c7a29f01351ac68547569c116f15d76a82c2\": container with ID starting with e4faf03d458daa408d624063fbb3c7a29f01351ac68547569c116f15d76a82c2 not found: ID does not exist" Apr 23 17:56:15.112161 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:15.112107 2566 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-77f479db9b-7zsd9"] Apr 23 17:56:15.116993 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:15.116969 2566 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-77f479db9b-7zsd9"] Apr 23 17:56:15.254831 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:15.254799 2566 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa056e98-492b-4b91-86a6-f5ab60987ce5" path="/var/lib/kubelet/pods/aa056e98-492b-4b91-86a6-f5ab60987ce5/volumes" Apr 23 17:56:20.111563 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:20.111522 2566 generic.go:358] "Generic (PLEG): container finished" podID="df076eb4-c3f3-4cbf-8cee-a735d1572b5b" containerID="f2d88999a45ceecaac5ec77426c1944753bd6a82c04d79ac9eb53f1bd08d389c" exitCode=0 Apr 23 17:56:20.111976 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:20.111571 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-bfm5m" event={"ID":"df076eb4-c3f3-4cbf-8cee-a735d1572b5b","Type":"ContainerDied","Data":"f2d88999a45ceecaac5ec77426c1944753bd6a82c04d79ac9eb53f1bd08d389c"} Apr 23 17:56:20.111976 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:20.111881 2566 scope.go:117] "RemoveContainer" containerID="f2d88999a45ceecaac5ec77426c1944753bd6a82c04d79ac9eb53f1bd08d389c" Apr 23 17:56:21.116574 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:21.116541 2566 generic.go:358] "Generic (PLEG): container finished" 
podID="dd76c0f6-b46d-43a0-a71f-55a695fd6d99" containerID="1b8a9ffb490eed7546c6767f6d70b1770ac1ef0f33c3ba11dd66c5a066a22c23" exitCode=0 Apr 23 17:56:21.116989 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:21.116615 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-585dfdc468-kfcjl" event={"ID":"dd76c0f6-b46d-43a0-a71f-55a695fd6d99","Type":"ContainerDied","Data":"1b8a9ffb490eed7546c6767f6d70b1770ac1ef0f33c3ba11dd66c5a066a22c23"} Apr 23 17:56:21.116989 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:21.116964 2566 scope.go:117] "RemoveContainer" containerID="1b8a9ffb490eed7546c6767f6d70b1770ac1ef0f33c3ba11dd66c5a066a22c23" Apr 23 17:56:21.118432 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:21.118403 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-bfm5m" event={"ID":"df076eb4-c3f3-4cbf-8cee-a735d1572b5b","Type":"ContainerStarted","Data":"3ac4d83a737ea955f80470631f737bd1f083dcdb9113b1bb7e2edca6ad1b7219"} Apr 23 17:56:22.123845 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:22.123809 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-585dfdc468-kfcjl" event={"ID":"dd76c0f6-b46d-43a0-a71f-55a695fd6d99","Type":"ContainerStarted","Data":"021e0e690048649e73e09972767674e4e474cb2ef34f9865cd01c1a927b256cc"} Apr 23 17:56:31.154786 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:31.154753 2566 generic.go:358] "Generic (PLEG): container finished" podID="36169332-5c35-4e99-b318-65e24dfcc370" containerID="a8fff8d9a157aff008b05e4a2f19229c46bc41bb13f65e45e8db067667bb2bac" exitCode=0 Apr 23 17:56:31.155192 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:31.154828 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-fqsnm" event={"ID":"36169332-5c35-4e99-b318-65e24dfcc370","Type":"ContainerDied","Data":"a8fff8d9a157aff008b05e4a2f19229c46bc41bb13f65e45e8db067667bb2bac"} Apr 23 17:56:31.155192 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:31.155114 2566 scope.go:117] "RemoveContainer" containerID="a8fff8d9a157aff008b05e4a2f19229c46bc41bb13f65e45e8db067667bb2bac" Apr 23 17:56:32.076450 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:32.076347 2566 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-console/console-6567d97b5d-pgfhh" podUID="1a661fb3-1486-4d61-8791-258fdf538a89" containerName="console" containerID="cri-o://ea2b2d93690546a960b04038432daa982378226b9393dd3d52642b883f56b176" gracePeriod=15 Apr 23 17:56:32.160182 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:32.160150 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-fqsnm" event={"ID":"36169332-5c35-4e99-b318-65e24dfcc370","Type":"ContainerStarted","Data":"41f5ff2d0099086bd51d2b674497697f1b48aa4302fb17ee0c02b9d9ee36a785"} Apr 23 17:56:32.352709 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:32.352682 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6567d97b5d-pgfhh_1a661fb3-1486-4d61-8791-258fdf538a89/console/0.log" Apr 23 17:56:32.352838 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:32.352743 2566 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6567d97b5d-pgfhh" Apr 23 17:56:32.496349 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:32.496321 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1a661fb3-1486-4d61-8791-258fdf538a89-oauth-serving-cert\") pod \"1a661fb3-1486-4d61-8791-258fdf538a89\" (UID: \"1a661fb3-1486-4d61-8791-258fdf538a89\") " Apr 23 17:56:32.496523 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:32.496386 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1a661fb3-1486-4d61-8791-258fdf538a89-console-oauth-config\") pod \"1a661fb3-1486-4d61-8791-258fdf538a89\" (UID: \"1a661fb3-1486-4d61-8791-258fdf538a89\") " Apr 23 17:56:32.496581 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:32.496556 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a661fb3-1486-4d61-8791-258fdf538a89-trusted-ca-bundle\") pod \"1a661fb3-1486-4d61-8791-258fdf538a89\" (UID: \"1a661fb3-1486-4d61-8791-258fdf538a89\") " Apr 23 17:56:32.496627 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:32.496616 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhmn5\" (UniqueName: \"kubernetes.io/projected/1a661fb3-1486-4d61-8791-258fdf538a89-kube-api-access-vhmn5\") pod \"1a661fb3-1486-4d61-8791-258fdf538a89\" (UID: \"1a661fb3-1486-4d61-8791-258fdf538a89\") " Apr 23 17:56:32.496673 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:32.496621 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a661fb3-1486-4d61-8791-258fdf538a89-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "1a661fb3-1486-4d61-8791-258fdf538a89" (UID: "1a661fb3-1486-4d61-8791-258fdf538a89"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 17:56:32.496673 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:32.496652 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1a661fb3-1486-4d61-8791-258fdf538a89-console-config\") pod \"1a661fb3-1486-4d61-8791-258fdf538a89\" (UID: \"1a661fb3-1486-4d61-8791-258fdf538a89\") " Apr 23 17:56:32.496752 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:32.496714 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1a661fb3-1486-4d61-8791-258fdf538a89-service-ca\") pod \"1a661fb3-1486-4d61-8791-258fdf538a89\" (UID: \"1a661fb3-1486-4d61-8791-258fdf538a89\") " Apr 23 17:56:32.496804 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:32.496758 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1a661fb3-1486-4d61-8791-258fdf538a89-console-serving-cert\") pod \"1a661fb3-1486-4d61-8791-258fdf538a89\" (UID: \"1a661fb3-1486-4d61-8791-258fdf538a89\") " Apr 23 17:56:32.497396 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:32.496967 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a661fb3-1486-4d61-8791-258fdf538a89-console-config" (OuterVolumeSpecName: "console-config") pod "1a661fb3-1486-4d61-8791-258fdf538a89" (UID: "1a661fb3-1486-4d61-8791-258fdf538a89"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 17:56:32.497396 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:32.496998 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a661fb3-1486-4d61-8791-258fdf538a89-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1a661fb3-1486-4d61-8791-258fdf538a89" (UID: "1a661fb3-1486-4d61-8791-258fdf538a89"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 17:56:32.497396 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:32.497050 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a661fb3-1486-4d61-8791-258fdf538a89-service-ca" (OuterVolumeSpecName: "service-ca") pod "1a661fb3-1486-4d61-8791-258fdf538a89" (UID: "1a661fb3-1486-4d61-8791-258fdf538a89"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 17:56:32.497396 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:32.497103 2566 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1a661fb3-1486-4d61-8791-258fdf538a89-oauth-serving-cert\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 17:56:32.497396 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:32.497117 2566 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a661fb3-1486-4d61-8791-258fdf538a89-trusted-ca-bundle\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 17:56:32.497396 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:32.497132 2566 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1a661fb3-1486-4d61-8791-258fdf538a89-console-config\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 17:56:32.499123 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:32.499094 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a661fb3-1486-4d61-8791-258fdf538a89-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "1a661fb3-1486-4d61-8791-258fdf538a89" (UID: "1a661fb3-1486-4d61-8791-258fdf538a89"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 17:56:32.499250 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:32.499221 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a661fb3-1486-4d61-8791-258fdf538a89-kube-api-access-vhmn5" (OuterVolumeSpecName: "kube-api-access-vhmn5") pod "1a661fb3-1486-4d61-8791-258fdf538a89" (UID: "1a661fb3-1486-4d61-8791-258fdf538a89"). InnerVolumeSpecName "kube-api-access-vhmn5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 17:56:32.499456 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:32.499432 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a661fb3-1486-4d61-8791-258fdf538a89-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "1a661fb3-1486-4d61-8791-258fdf538a89" (UID: "1a661fb3-1486-4d61-8791-258fdf538a89"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 17:56:32.598230 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:32.598141 2566 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1a661fb3-1486-4d61-8791-258fdf538a89-console-oauth-config\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 17:56:32.598230 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:32.598180 2566 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vhmn5\" (UniqueName: \"kubernetes.io/projected/1a661fb3-1486-4d61-8791-258fdf538a89-kube-api-access-vhmn5\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 17:56:32.598230 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:32.598196 2566 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1a661fb3-1486-4d61-8791-258fdf538a89-service-ca\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 17:56:32.598230 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:32.598211 2566 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1a661fb3-1486-4d61-8791-258fdf538a89-console-serving-cert\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 17:56:33.165003 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:33.164976 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6567d97b5d-pgfhh_1a661fb3-1486-4d61-8791-258fdf538a89/console/0.log" Apr 23 17:56:33.165445 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:33.165021 2566 generic.go:358] "Generic (PLEG): container finished" podID="1a661fb3-1486-4d61-8791-258fdf538a89" containerID="ea2b2d93690546a960b04038432daa982378226b9393dd3d52642b883f56b176" exitCode=2 Apr 23 17:56:33.165445 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:33.165091 2566 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6567d97b5d-pgfhh" Apr 23 17:56:33.165445 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:33.165104 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6567d97b5d-pgfhh" event={"ID":"1a661fb3-1486-4d61-8791-258fdf538a89","Type":"ContainerDied","Data":"ea2b2d93690546a960b04038432daa982378226b9393dd3d52642b883f56b176"} Apr 23 17:56:33.165445 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:33.165142 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6567d97b5d-pgfhh" event={"ID":"1a661fb3-1486-4d61-8791-258fdf538a89","Type":"ContainerDied","Data":"bc33747dac759d651623bd426d0fcd8c4b65d14b008835848d7c544590ee645a"} Apr 23 17:56:33.165445 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:33.165157 2566 scope.go:117] "RemoveContainer" containerID="ea2b2d93690546a960b04038432daa982378226b9393dd3d52642b883f56b176" Apr 23 17:56:33.177250 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:33.177219 2566 scope.go:117] "RemoveContainer" containerID="ea2b2d93690546a960b04038432daa982378226b9393dd3d52642b883f56b176" Apr 23 17:56:33.177574 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:56:33.177545 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea2b2d93690546a960b04038432daa982378226b9393dd3d52642b883f56b176\": container with ID starting with ea2b2d93690546a960b04038432daa982378226b9393dd3d52642b883f56b176 not found: ID does not exist" containerID="ea2b2d93690546a960b04038432daa982378226b9393dd3d52642b883f56b176" Apr 23 17:56:33.177696 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:33.177585 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea2b2d93690546a960b04038432daa982378226b9393dd3d52642b883f56b176"} err="failed to get container status \"ea2b2d93690546a960b04038432daa982378226b9393dd3d52642b883f56b176\": rpc error: code = NotFound desc = could not find container \"ea2b2d93690546a960b04038432daa982378226b9393dd3d52642b883f56b176\": container with ID starting with ea2b2d93690546a960b04038432daa982378226b9393dd3d52642b883f56b176 not found: ID does not exist" Apr 23 17:56:33.192355 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:33.192324 2566 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6567d97b5d-pgfhh"] Apr 23 17:56:33.195210 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:33.195187 2566 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6567d97b5d-pgfhh"] Apr 23 17:56:33.254493 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:33.254463 2566 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a661fb3-1486-4d61-8791-258fdf538a89" path="/var/lib/kubelet/pods/1a661fb3-1486-4d61-8791-258fdf538a89/volumes" Apr 23 17:56:34.431150 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:34.431106 2566 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" podUID="a7cbc07c-c629-4c31-a456-4f9bf5b328f7" containerName="registry" containerID="cri-o://38fd9541a1bc44662e84140db5725901b3a81a9d6e590e772071f86ee652f41d" gracePeriod=30 Apr 23 17:56:35.704055 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:35.704023 2566 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" Apr 23 17:56:35.826126 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:35.826089 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-installation-pull-secrets\") pod \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " Apr 23 17:56:35.826126 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:35.826131 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-registry-certificates\") pod \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " Apr 23 17:56:35.826431 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:35.826152 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxxtw\" (UniqueName: \"kubernetes.io/projected/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-kube-api-access-vxxtw\") pod \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " Apr 23 17:56:35.826431 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:35.826170 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-registry-tls\") pod \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " Apr 23 17:56:35.826431 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:35.826208 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-bound-sa-token\") pod \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " Apr 23 17:56:35.826431 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:35.826247 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-trusted-ca\") pod \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " Apr 23 17:56:35.826431 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:35.826281 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-image-registry-private-configuration\") pod \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " Apr 23 17:56:35.826431 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:35.826349 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-ca-trust-extracted\") pod \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\" (UID: \"a7cbc07c-c629-4c31-a456-4f9bf5b328f7\") " Apr 23 17:56:35.826883 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:35.826840 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a7cbc07c-c629-4c31-a456-4f9bf5b328f7" (UID: "a7cbc07c-c629-4c31-a456-4f9bf5b328f7"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 17:56:35.827095 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:35.827065 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "a7cbc07c-c629-4c31-a456-4f9bf5b328f7" (UID: "a7cbc07c-c629-4c31-a456-4f9bf5b328f7"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 17:56:35.829495 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:35.829467 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "a7cbc07c-c629-4c31-a456-4f9bf5b328f7" (UID: "a7cbc07c-c629-4c31-a456-4f9bf5b328f7"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 17:56:35.829616 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:35.829537 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "a7cbc07c-c629-4c31-a456-4f9bf5b328f7" (UID: "a7cbc07c-c629-4c31-a456-4f9bf5b328f7"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 17:56:35.829616 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:35.829558 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-image-registry-private-configuration" (OuterVolumeSpecName: "image-registry-private-configuration") pod "a7cbc07c-c629-4c31-a456-4f9bf5b328f7" (UID: "a7cbc07c-c629-4c31-a456-4f9bf5b328f7"). InnerVolumeSpecName "image-registry-private-configuration". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 17:56:35.829706 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:35.829615 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-kube-api-access-vxxtw" (OuterVolumeSpecName: "kube-api-access-vxxtw") pod "a7cbc07c-c629-4c31-a456-4f9bf5b328f7" (UID: "a7cbc07c-c629-4c31-a456-4f9bf5b328f7"). InnerVolumeSpecName "kube-api-access-vxxtw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 17:56:35.829706 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:35.829647 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a7cbc07c-c629-4c31-a456-4f9bf5b328f7" (UID: "a7cbc07c-c629-4c31-a456-4f9bf5b328f7"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 17:56:35.835467 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:35.835446 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "a7cbc07c-c629-4c31-a456-4f9bf5b328f7" (UID: "a7cbc07c-c629-4c31-a456-4f9bf5b328f7"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 23 17:56:35.927315 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:35.927272 2566 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-installation-pull-secrets\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 17:56:35.927489 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:35.927341 2566 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-registry-certificates\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 17:56:35.927489 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:35.927357 2566 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vxxtw\" (UniqueName: \"kubernetes.io/projected/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-kube-api-access-vxxtw\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 17:56:35.927489 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:35.927367 2566 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-registry-tls\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 17:56:35.927489 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:35.927385 2566 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-bound-sa-token\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 17:56:35.927489 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:35.927393 2566 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-trusted-ca\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 17:56:35.927489 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:35.927401 2566 reconciler_common.go:299] "Volume detached for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-image-registry-private-configuration\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 17:56:35.927489 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:35.927410 2566 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a7cbc07c-c629-4c31-a456-4f9bf5b328f7-ca-trust-extracted\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 17:56:36.181064 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:36.180974 2566 generic.go:358] "Generic (PLEG): container finished" podID="a7cbc07c-c629-4c31-a456-4f9bf5b328f7" containerID="38fd9541a1bc44662e84140db5725901b3a81a9d6e590e772071f86ee652f41d" exitCode=0 Apr 23 17:56:36.181064 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:36.181041 2566 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" Apr 23 17:56:36.181285 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:36.181054 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" event={"ID":"a7cbc07c-c629-4c31-a456-4f9bf5b328f7","Type":"ContainerDied","Data":"38fd9541a1bc44662e84140db5725901b3a81a9d6e590e772071f86ee652f41d"} Apr 23 17:56:36.181285 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:36.181093 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-7fb885f848-mqdhm" event={"ID":"a7cbc07c-c629-4c31-a456-4f9bf5b328f7","Type":"ContainerDied","Data":"68955ba76fbeaa1d2f4166d2fdceba463cd4713f9422964ef4b3393d641e09ba"} Apr 23 17:56:36.181285 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:36.181109 2566 scope.go:117] "RemoveContainer" containerID="38fd9541a1bc44662e84140db5725901b3a81a9d6e590e772071f86ee652f41d" Apr 23 17:56:36.190265 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:36.190251 2566 scope.go:117] "RemoveContainer" containerID="38fd9541a1bc44662e84140db5725901b3a81a9d6e590e772071f86ee652f41d" Apr 23 17:56:36.190522 ip-10-0-136-172 kubenswrapper[2566]: E0423 17:56:36.190506 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38fd9541a1bc44662e84140db5725901b3a81a9d6e590e772071f86ee652f41d\": container with ID starting with 38fd9541a1bc44662e84140db5725901b3a81a9d6e590e772071f86ee652f41d not found: ID does not exist" containerID="38fd9541a1bc44662e84140db5725901b3a81a9d6e590e772071f86ee652f41d" Apr 23 17:56:36.190594 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:36.190530 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38fd9541a1bc44662e84140db5725901b3a81a9d6e590e772071f86ee652f41d"} err="failed to get container status \"38fd9541a1bc44662e84140db5725901b3a81a9d6e590e772071f86ee652f41d\": rpc error: code = NotFound desc = could not find container \"38fd9541a1bc44662e84140db5725901b3a81a9d6e590e772071f86ee652f41d\": container with ID starting with 38fd9541a1bc44662e84140db5725901b3a81a9d6e590e772071f86ee652f41d not found: ID does not exist" Apr 23 17:56:36.204218 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:36.204191 2566 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-7fb885f848-mqdhm"] Apr 23 17:56:36.211171 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:36.211147 2566 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-7fb885f848-mqdhm"] Apr 23 17:56:37.253508 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:56:37.253476 2566 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7cbc07c-c629-4c31-a456-4f9bf5b328f7" path="/var/lib/kubelet/pods/a7cbc07c-c629-4c31-a456-4f9bf5b328f7/volumes" Apr 23 17:57:04.169149 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:04.169113 2566 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 23 17:57:04.169667 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:04.169542 2566 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="6744ec59-7a70-40e6-a9a1-f8baa8d972a2" containerName="alertmanager" containerID="cri-o://797f023a512a77f898f2b66624f1931b8a5e08bb0e49827ab5515305987fc76c" gracePeriod=120 Apr 23 17:57:04.169667 ip-10-0-136-172 
kubenswrapper[2566]: I0423 17:57:04.169556 2566 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="6744ec59-7a70-40e6-a9a1-f8baa8d972a2" containerName="kube-rbac-proxy-metric" containerID="cri-o://ef94261cd9b1f4659c5bbaf957f9e3c50786be7ae62bac2c48e792c649a76f5a" gracePeriod=120 Apr 23 17:57:04.169667 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:04.169609 2566 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="6744ec59-7a70-40e6-a9a1-f8baa8d972a2" containerName="kube-rbac-proxy" containerID="cri-o://1a621f25e71d89b4830d2ecab85465470298ed75662b600d2a3a9c3b301dddcc" gracePeriod=120 Apr 23 17:57:04.169667 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:04.169582 2566 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="6744ec59-7a70-40e6-a9a1-f8baa8d972a2" containerName="config-reloader" containerID="cri-o://a77e0f0614523666140e0d95aa02dfe5b46404122ce480014a1b66ad3fe582b6" gracePeriod=120 Apr 23 17:57:04.169864 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:04.169564 2566 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="6744ec59-7a70-40e6-a9a1-f8baa8d972a2" containerName="kube-rbac-proxy-web" containerID="cri-o://ab3eeac3788aa033c3e452dfb87241066ea25c61db2254cc73a30c43ef057247" gracePeriod=120 Apr 23 17:57:04.169864 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:04.169647 2566 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="6744ec59-7a70-40e6-a9a1-f8baa8d972a2" containerName="prom-label-proxy" containerID="cri-o://95b4b742ed744ffc51d99d5190e200bdbffa18e8882ada15e20a6ccb29f31c23" gracePeriod=120 Apr 23 17:57:05.288116 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.288088 2566 generic.go:358] "Generic (PLEG): container finished" podID="6744ec59-7a70-40e6-a9a1-f8baa8d972a2" containerID="95b4b742ed744ffc51d99d5190e200bdbffa18e8882ada15e20a6ccb29f31c23" exitCode=0 Apr 23 17:57:05.288116 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.288113 2566 generic.go:358] "Generic (PLEG): container finished" podID="6744ec59-7a70-40e6-a9a1-f8baa8d972a2" containerID="ef94261cd9b1f4659c5bbaf957f9e3c50786be7ae62bac2c48e792c649a76f5a" exitCode=0 Apr 23 17:57:05.288436 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.288120 2566 generic.go:358] "Generic (PLEG): container finished" podID="6744ec59-7a70-40e6-a9a1-f8baa8d972a2" containerID="1a621f25e71d89b4830d2ecab85465470298ed75662b600d2a3a9c3b301dddcc" exitCode=0 Apr 23 17:57:05.288436 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.288125 2566 generic.go:358] "Generic (PLEG): container finished" podID="6744ec59-7a70-40e6-a9a1-f8baa8d972a2" containerID="ab3eeac3788aa033c3e452dfb87241066ea25c61db2254cc73a30c43ef057247" exitCode=0 Apr 23 17:57:05.288436 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.288130 2566 generic.go:358] "Generic (PLEG): container finished" podID="6744ec59-7a70-40e6-a9a1-f8baa8d972a2" containerID="a77e0f0614523666140e0d95aa02dfe5b46404122ce480014a1b66ad3fe582b6" exitCode=0 Apr 23 17:57:05.288436 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.288135 2566 generic.go:358] "Generic (PLEG): container finished" podID="6744ec59-7a70-40e6-a9a1-f8baa8d972a2" containerID="797f023a512a77f898f2b66624f1931b8a5e08bb0e49827ab5515305987fc76c" exitCode=0 Apr 23 
17:57:05.288436 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.288151 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6744ec59-7a70-40e6-a9a1-f8baa8d972a2","Type":"ContainerDied","Data":"95b4b742ed744ffc51d99d5190e200bdbffa18e8882ada15e20a6ccb29f31c23"}
Apr 23 17:57:05.288436 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.288182 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6744ec59-7a70-40e6-a9a1-f8baa8d972a2","Type":"ContainerDied","Data":"ef94261cd9b1f4659c5bbaf957f9e3c50786be7ae62bac2c48e792c649a76f5a"}
Apr 23 17:57:05.288436 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.288193 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6744ec59-7a70-40e6-a9a1-f8baa8d972a2","Type":"ContainerDied","Data":"1a621f25e71d89b4830d2ecab85465470298ed75662b600d2a3a9c3b301dddcc"}
Apr 23 17:57:05.288436 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.288201 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6744ec59-7a70-40e6-a9a1-f8baa8d972a2","Type":"ContainerDied","Data":"ab3eeac3788aa033c3e452dfb87241066ea25c61db2254cc73a30c43ef057247"}
Apr 23 17:57:05.288436 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.288209 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6744ec59-7a70-40e6-a9a1-f8baa8d972a2","Type":"ContainerDied","Data":"a77e0f0614523666140e0d95aa02dfe5b46404122ce480014a1b66ad3fe582b6"}
Apr 23 17:57:05.288436 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.288226 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6744ec59-7a70-40e6-a9a1-f8baa8d972a2","Type":"ContainerDied","Data":"797f023a512a77f898f2b66624f1931b8a5e08bb0e49827ab5515305987fc76c"}
Apr 23 17:57:05.412758 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.412736 2566 util.go:48] "No ready sandbox for pod can be found.
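All six ContainerDied events land in a single PLEG relist, one per container the kubelet asked CRI-O to stop at 17:57:04 (alertmanager, config-reloader, prom-label-proxy, and the three kube-rbac-proxy variants), and the matching "container finished" lines just above show exitCode=0 for each, a clean stop well inside the 120-second grace period. A small cross-check under the same assumed log shape; the helper name is made up:

# Sketch: verify that every container the kubelet asked the runtime to stop
# later produced a ContainerDied event in the same excerpt.
import re

KILL_RE = re.compile(
    r'"Killing container with a grace period".*?containerID="cri-o://([0-9a-f]{64})"'
)
DIED_RE = re.compile(r'"Type":"ContainerDied","Data":"([0-9a-f]{64})"')

def killed_but_not_dead(journal_text):
    """Container IDs stopped by the kubelet with no matching ContainerDied."""
    return set(KILL_RE.findall(journal_text)) - set(DIED_RE.findall(journal_text))

For the registry and alertmanager containers killed in this excerpt the returned set is empty.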
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:05.481359 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.481285 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-secret-alertmanager-kube-rbac-proxy-metric\") pod \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " Apr 23 17:57:05.481509 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.481365 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-secret-alertmanager-kube-rbac-proxy-web\") pod \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " Apr 23 17:57:05.481509 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.481393 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-alertmanager-main-db\") pod \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " Apr 23 17:57:05.481509 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.481423 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-secret-alertmanager-main-tls\") pod \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " Apr 23 17:57:05.481509 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.481471 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-cluster-tls-config\") pod \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " Apr 23 17:57:05.481509 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.481499 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xctfb\" (UniqueName: \"kubernetes.io/projected/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-kube-api-access-xctfb\") pod \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " Apr 23 17:57:05.481769 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.481524 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-alertmanager-trusted-ca-bundle\") pod \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " Apr 23 17:57:05.481769 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.481547 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-config-volume\") pod \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " Apr 23 17:57:05.481769 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.481580 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-metrics-client-ca\") pod 
\"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " Apr 23 17:57:05.481769 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.481619 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-config-out\") pod \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " Apr 23 17:57:05.481769 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.481664 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-tls-assets\") pod \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " Apr 23 17:57:05.481769 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.481696 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-web-config\") pod \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " Apr 23 17:57:05.481769 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.481739 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-secret-alertmanager-kube-rbac-proxy\") pod \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\" (UID: \"6744ec59-7a70-40e6-a9a1-f8baa8d972a2\") " Apr 23 17:57:05.481769 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.481743 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-alertmanager-main-db" (OuterVolumeSpecName: "alertmanager-main-db") pod "6744ec59-7a70-40e6-a9a1-f8baa8d972a2" (UID: "6744ec59-7a70-40e6-a9a1-f8baa8d972a2"). InnerVolumeSpecName "alertmanager-main-db". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 23 17:57:05.482222 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.482170 2566 reconciler_common.go:299] "Volume detached for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-alertmanager-main-db\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 17:57:05.482287 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.482123 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "6744ec59-7a70-40e6-a9a1-f8baa8d972a2" (UID: "6744ec59-7a70-40e6-a9a1-f8baa8d972a2"). InnerVolumeSpecName "metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 17:57:05.484565 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.484496 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-secret-alertmanager-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-web") pod "6744ec59-7a70-40e6-a9a1-f8baa8d972a2" (UID: "6744ec59-7a70-40e6-a9a1-f8baa8d972a2"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-web". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 17:57:05.484722 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.484630 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "6744ec59-7a70-40e6-a9a1-f8baa8d972a2" (UID: "6744ec59-7a70-40e6-a9a1-f8baa8d972a2"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 17:57:05.485200 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.484920 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-alertmanager-trusted-ca-bundle" (OuterVolumeSpecName: "alertmanager-trusted-ca-bundle") pod "6744ec59-7a70-40e6-a9a1-f8baa8d972a2" (UID: "6744ec59-7a70-40e6-a9a1-f8baa8d972a2"). InnerVolumeSpecName "alertmanager-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 17:57:05.485434 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.485371 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-secret-alertmanager-kube-rbac-proxy" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy") pod "6744ec59-7a70-40e6-a9a1-f8baa8d972a2" (UID: "6744ec59-7a70-40e6-a9a1-f8baa8d972a2"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 17:57:05.485898 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.485439 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-secret-alertmanager-main-tls" (OuterVolumeSpecName: "secret-alertmanager-main-tls") pod "6744ec59-7a70-40e6-a9a1-f8baa8d972a2" (UID: "6744ec59-7a70-40e6-a9a1-f8baa8d972a2"). InnerVolumeSpecName "secret-alertmanager-main-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 17:57:05.486058 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.486033 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-kube-api-access-xctfb" (OuterVolumeSpecName: "kube-api-access-xctfb") pod "6744ec59-7a70-40e6-a9a1-f8baa8d972a2" (UID: "6744ec59-7a70-40e6-a9a1-f8baa8d972a2"). InnerVolumeSpecName "kube-api-access-xctfb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 17:57:05.486121 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.486032 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-secret-alertmanager-kube-rbac-proxy-metric" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-metric") pod "6744ec59-7a70-40e6-a9a1-f8baa8d972a2" (UID: "6744ec59-7a70-40e6-a9a1-f8baa8d972a2"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-metric". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 17:57:05.486870 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.486846 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-config-out" (OuterVolumeSpecName: "config-out") pod "6744ec59-7a70-40e6-a9a1-f8baa8d972a2" (UID: "6744ec59-7a70-40e6-a9a1-f8baa8d972a2"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 23 17:57:05.487572 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.487550 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-config-volume" (OuterVolumeSpecName: "config-volume") pod "6744ec59-7a70-40e6-a9a1-f8baa8d972a2" (UID: "6744ec59-7a70-40e6-a9a1-f8baa8d972a2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 17:57:05.490728 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.490622 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-cluster-tls-config" (OuterVolumeSpecName: "cluster-tls-config") pod "6744ec59-7a70-40e6-a9a1-f8baa8d972a2" (UID: "6744ec59-7a70-40e6-a9a1-f8baa8d972a2"). InnerVolumeSpecName "cluster-tls-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 17:57:05.497434 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.497414 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-web-config" (OuterVolumeSpecName: "web-config") pod "6744ec59-7a70-40e6-a9a1-f8baa8d972a2" (UID: "6744ec59-7a70-40e6-a9a1-f8baa8d972a2"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 17:57:05.583485 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.583450 2566 reconciler_common.go:299] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-secret-alertmanager-kube-rbac-proxy\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 17:57:05.583485 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.583480 2566 reconciler_common.go:299] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-secret-alertmanager-kube-rbac-proxy-metric\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 17:57:05.583485 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.583490 2566 reconciler_common.go:299] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-secret-alertmanager-kube-rbac-proxy-web\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 17:57:05.583722 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.583500 2566 reconciler_common.go:299] "Volume detached for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-secret-alertmanager-main-tls\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 17:57:05.583722 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.583511 2566 reconciler_common.go:299] "Volume detached for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-cluster-tls-config\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 17:57:05.583722 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.583520 2566 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xctfb\" (UniqueName: \"kubernetes.io/projected/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-kube-api-access-xctfb\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 17:57:05.583722 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.583529 
2566 reconciler_common.go:299] "Volume detached for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-alertmanager-trusted-ca-bundle\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 17:57:05.583722 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.583538 2566 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-config-volume\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 17:57:05.583722 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.583546 2566 reconciler_common.go:299] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-metrics-client-ca\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 17:57:05.583722 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.583555 2566 reconciler_common.go:299] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-config-out\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 17:57:05.583722 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.583563 2566 reconciler_common.go:299] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-tls-assets\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 17:57:05.583722 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:05.583571 2566 reconciler_common.go:299] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6744ec59-7a70-40e6-a9a1-f8baa8d972a2-web-config\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 17:57:06.293878 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.293844 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6744ec59-7a70-40e6-a9a1-f8baa8d972a2","Type":"ContainerDied","Data":"72d91ddf66aae56cef722baa491e4fb7a0d3307674eb202369b1a79a80a207bb"} Apr 23 17:57:06.294255 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.293892 2566 scope.go:117] "RemoveContainer" containerID="95b4b742ed744ffc51d99d5190e200bdbffa18e8882ada15e20a6ccb29f31c23" Apr 23 17:57:06.294255 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.293899 2566 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.303072 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.303057 2566 scope.go:117] "RemoveContainer" containerID="ef94261cd9b1f4659c5bbaf957f9e3c50786be7ae62bac2c48e792c649a76f5a" Apr 23 17:57:06.310413 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.310394 2566 scope.go:117] "RemoveContainer" containerID="1a621f25e71d89b4830d2ecab85465470298ed75662b600d2a3a9c3b301dddcc" Apr 23 17:57:06.318115 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.318070 2566 scope.go:117] "RemoveContainer" containerID="ab3eeac3788aa033c3e452dfb87241066ea25c61db2254cc73a30c43ef057247" Apr 23 17:57:06.318973 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.318916 2566 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 23 17:57:06.324090 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.324052 2566 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 23 17:57:06.326287 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.326268 2566 scope.go:117] "RemoveContainer" containerID="a77e0f0614523666140e0d95aa02dfe5b46404122ce480014a1b66ad3fe582b6" Apr 23 17:57:06.332965 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.332950 2566 scope.go:117] "RemoveContainer" containerID="797f023a512a77f898f2b66624f1931b8a5e08bb0e49827ab5515305987fc76c" Apr 23 17:57:06.339736 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.339719 2566 scope.go:117] "RemoveContainer" containerID="f5bafaa50293fecc8100171d231c9e24431cff88620976b55cfcb1fa0ece64b5" Apr 23 17:57:06.348765 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.348743 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 23 17:57:06.349069 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.349057 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6744ec59-7a70-40e6-a9a1-f8baa8d972a2" containerName="kube-rbac-proxy" Apr 23 17:57:06.349069 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.349070 2566 state_mem.go:107] "Deleted CPUSet assignment" podUID="6744ec59-7a70-40e6-a9a1-f8baa8d972a2" containerName="kube-rbac-proxy" Apr 23 17:57:06.349193 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.349081 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aa056e98-492b-4b91-86a6-f5ab60987ce5" containerName="console" Apr 23 17:57:06.349193 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.349086 2566 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa056e98-492b-4b91-86a6-f5ab60987ce5" containerName="console" Apr 23 17:57:06.349193 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.349094 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1a661fb3-1486-4d61-8791-258fdf538a89" containerName="console" Apr 23 17:57:06.349193 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.349099 2566 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a661fb3-1486-4d61-8791-258fdf538a89" containerName="console" Apr 23 17:57:06.349193 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.349107 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6744ec59-7a70-40e6-a9a1-f8baa8d972a2" containerName="kube-rbac-proxy-web" Apr 23 17:57:06.349193 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.349113 2566 state_mem.go:107] "Deleted CPUSet assignment" podUID="6744ec59-7a70-40e6-a9a1-f8baa8d972a2" 
containerName="kube-rbac-proxy-web" Apr 23 17:57:06.349193 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.349122 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6744ec59-7a70-40e6-a9a1-f8baa8d972a2" containerName="alertmanager" Apr 23 17:57:06.349193 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.349126 2566 state_mem.go:107] "Deleted CPUSet assignment" podUID="6744ec59-7a70-40e6-a9a1-f8baa8d972a2" containerName="alertmanager" Apr 23 17:57:06.349193 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.349132 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6744ec59-7a70-40e6-a9a1-f8baa8d972a2" containerName="prom-label-proxy" Apr 23 17:57:06.349193 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.349137 2566 state_mem.go:107] "Deleted CPUSet assignment" podUID="6744ec59-7a70-40e6-a9a1-f8baa8d972a2" containerName="prom-label-proxy" Apr 23 17:57:06.349193 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.349144 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6744ec59-7a70-40e6-a9a1-f8baa8d972a2" containerName="config-reloader" Apr 23 17:57:06.349193 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.349148 2566 state_mem.go:107] "Deleted CPUSet assignment" podUID="6744ec59-7a70-40e6-a9a1-f8baa8d972a2" containerName="config-reloader" Apr 23 17:57:06.349193 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.349159 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6744ec59-7a70-40e6-a9a1-f8baa8d972a2" containerName="kube-rbac-proxy-metric" Apr 23 17:57:06.349193 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.349163 2566 state_mem.go:107] "Deleted CPUSet assignment" podUID="6744ec59-7a70-40e6-a9a1-f8baa8d972a2" containerName="kube-rbac-proxy-metric" Apr 23 17:57:06.349193 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.349171 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a7cbc07c-c629-4c31-a456-4f9bf5b328f7" containerName="registry" Apr 23 17:57:06.349193 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.349175 2566 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7cbc07c-c629-4c31-a456-4f9bf5b328f7" containerName="registry" Apr 23 17:57:06.349193 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.349181 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6744ec59-7a70-40e6-a9a1-f8baa8d972a2" containerName="init-config-reloader" Apr 23 17:57:06.349193 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.349186 2566 state_mem.go:107] "Deleted CPUSet assignment" podUID="6744ec59-7a70-40e6-a9a1-f8baa8d972a2" containerName="init-config-reloader" Apr 23 17:57:06.349823 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.349233 2566 memory_manager.go:356] "RemoveStaleState removing state" podUID="aa056e98-492b-4b91-86a6-f5ab60987ce5" containerName="console" Apr 23 17:57:06.349823 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.349242 2566 memory_manager.go:356] "RemoveStaleState removing state" podUID="6744ec59-7a70-40e6-a9a1-f8baa8d972a2" containerName="prom-label-proxy" Apr 23 17:57:06.349823 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.349249 2566 memory_manager.go:356] "RemoveStaleState removing state" podUID="a7cbc07c-c629-4c31-a456-4f9bf5b328f7" containerName="registry" Apr 23 17:57:06.349823 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.349256 2566 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="6744ec59-7a70-40e6-a9a1-f8baa8d972a2" containerName="kube-rbac-proxy-web" Apr 23 17:57:06.349823 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.349261 2566 memory_manager.go:356] "RemoveStaleState removing state" podUID="6744ec59-7a70-40e6-a9a1-f8baa8d972a2" containerName="config-reloader" Apr 23 17:57:06.349823 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.349269 2566 memory_manager.go:356] "RemoveStaleState removing state" podUID="6744ec59-7a70-40e6-a9a1-f8baa8d972a2" containerName="kube-rbac-proxy-metric" Apr 23 17:57:06.349823 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.349276 2566 memory_manager.go:356] "RemoveStaleState removing state" podUID="6744ec59-7a70-40e6-a9a1-f8baa8d972a2" containerName="alertmanager" Apr 23 17:57:06.349823 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.349281 2566 memory_manager.go:356] "RemoveStaleState removing state" podUID="6744ec59-7a70-40e6-a9a1-f8baa8d972a2" containerName="kube-rbac-proxy" Apr 23 17:57:06.349823 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.349288 2566 memory_manager.go:356] "RemoveStaleState removing state" podUID="1a661fb3-1486-4d61-8791-258fdf538a89" containerName="console" Apr 23 17:57:06.353229 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.353213 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.354979 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.354963 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy\"" Apr 23 17:57:06.355218 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.355204 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-tls\"" Apr 23 17:57:06.355296 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.355262 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-cluster-tls-config\"" Apr 23 17:57:06.355296 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.355286 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy-web\"" Apr 23 17:57:06.355417 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.355289 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-web-config\"" Apr 23 17:57:06.355465 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.355448 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-tls-assets-0\"" Apr 23 17:57:06.355546 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.355529 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-dockercfg-4lbfq\"" Apr 23 17:57:06.355701 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.355682 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy-metric\"" Apr 23 17:57:06.355701 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.355695 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-generated\"" Apr 23 17:57:06.360803 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.360786 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-monitoring\"/\"alertmanager-trusted-ca-bundle\"" Apr 23 17:57:06.364210 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.364180 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 23 17:57:06.389421 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.389387 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/ba35557c-7e83-4e76-966c-6bd98124864c-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.389536 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.389428 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ba35557c-7e83-4e76-966c-6bd98124864c-web-config\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.389536 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.389454 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fr5b\" (UniqueName: \"kubernetes.io/projected/ba35557c-7e83-4e76-966c-6bd98124864c-kube-api-access-2fr5b\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.389610 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.389529 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ba35557c-7e83-4e76-966c-6bd98124864c-tls-assets\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.389610 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.389573 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ba35557c-7e83-4e76-966c-6bd98124864c-config-volume\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.389610 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.389598 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/ba35557c-7e83-4e76-966c-6bd98124864c-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.389706 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.389628 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/ba35557c-7e83-4e76-966c-6bd98124864c-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.389706 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.389646 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: 
\"kubernetes.io/secret/ba35557c-7e83-4e76-966c-6bd98124864c-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.389706 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.389664 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/ba35557c-7e83-4e76-966c-6bd98124864c-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.389793 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.389771 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/ba35557c-7e83-4e76-966c-6bd98124864c-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.389971 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.389791 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ba35557c-7e83-4e76-966c-6bd98124864c-config-out\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.389971 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.389806 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ba35557c-7e83-4e76-966c-6bd98124864c-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.389971 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.389823 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba35557c-7e83-4e76-966c-6bd98124864c-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.490585 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.490555 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ba35557c-7e83-4e76-966c-6bd98124864c-config-volume\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.490585 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.490585 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/ba35557c-7e83-4e76-966c-6bd98124864c-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.490790 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.490726 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/ba35557c-7e83-4e76-966c-6bd98124864c-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: 
\"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.490790 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.490757 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/ba35557c-7e83-4e76-966c-6bd98124864c-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.490922 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.490789 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/ba35557c-7e83-4e76-966c-6bd98124864c-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.490922 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.490843 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/ba35557c-7e83-4e76-966c-6bd98124864c-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.490922 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.490874 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ba35557c-7e83-4e76-966c-6bd98124864c-config-out\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.490922 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.490902 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ba35557c-7e83-4e76-966c-6bd98124864c-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.491132 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.490931 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba35557c-7e83-4e76-966c-6bd98124864c-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.491132 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.490965 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/ba35557c-7e83-4e76-966c-6bd98124864c-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.491132 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.490995 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ba35557c-7e83-4e76-966c-6bd98124864c-web-config\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.491132 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.491019 
2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2fr5b\" (UniqueName: \"kubernetes.io/projected/ba35557c-7e83-4e76-966c-6bd98124864c-kube-api-access-2fr5b\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.491132 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.491063 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ba35557c-7e83-4e76-966c-6bd98124864c-tls-assets\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.491406 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.491256 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/ba35557c-7e83-4e76-966c-6bd98124864c-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.492592 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.492562 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ba35557c-7e83-4e76-966c-6bd98124864c-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.492866 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.492806 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba35557c-7e83-4e76-966c-6bd98124864c-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.494328 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.494218 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/ba35557c-7e83-4e76-966c-6bd98124864c-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.494532 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.494512 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/ba35557c-7e83-4e76-966c-6bd98124864c-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.494751 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.494727 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ba35557c-7e83-4e76-966c-6bd98124864c-tls-assets\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.494851 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.494776 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/ba35557c-7e83-4e76-966c-6bd98124864c-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " 
pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.494918 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.494892 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ba35557c-7e83-4e76-966c-6bd98124864c-config-out\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.495048 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.495029 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/ba35557c-7e83-4e76-966c-6bd98124864c-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.495545 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.495525 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ba35557c-7e83-4e76-966c-6bd98124864c-config-volume\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.495686 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.495667 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/ba35557c-7e83-4e76-966c-6bd98124864c-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.495891 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.495874 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ba35557c-7e83-4e76-966c-6bd98124864c-web-config\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.501952 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.501931 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fr5b\" (UniqueName: \"kubernetes.io/projected/ba35557c-7e83-4e76-966c-6bd98124864c-kube-api-access-2fr5b\") pod \"alertmanager-main-0\" (UID: \"ba35557c-7e83-4e76-966c-6bd98124864c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.662684 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.662645 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Apr 23 17:57:06.792727 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:06.792701 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 23 17:57:06.795219 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:57:06.795187 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba35557c_7e83_4e76_966c_6bd98124864c.slice/crio-fa16ff01a4568869a90824a9aa2507c9424884363c5cc8e7bfb7e3f65c8cfa1e WatchSource:0}: Error finding container fa16ff01a4568869a90824a9aa2507c9424884363c5cc8e7bfb7e3f65c8cfa1e: Status 404 returned error can't find the container with id fa16ff01a4568869a90824a9aa2507c9424884363c5cc8e7bfb7e3f65c8cfa1e Apr 23 17:57:07.260390 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:07.260291 2566 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6744ec59-7a70-40e6-a9a1-f8baa8d972a2" path="/var/lib/kubelet/pods/6744ec59-7a70-40e6-a9a1-f8baa8d972a2/volumes" Apr 23 17:57:07.298855 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:07.298822 2566 generic.go:358] "Generic (PLEG): container finished" podID="ba35557c-7e83-4e76-966c-6bd98124864c" containerID="845c18a4ac7773f8c1d3123622439748cb9906f34ed955f0dbd4efb5b6ec38a9" exitCode=0 Apr 23 17:57:07.299258 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:07.298877 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ba35557c-7e83-4e76-966c-6bd98124864c","Type":"ContainerDied","Data":"845c18a4ac7773f8c1d3123622439748cb9906f34ed955f0dbd4efb5b6ec38a9"} Apr 23 17:57:07.299258 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:07.298902 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ba35557c-7e83-4e76-966c-6bd98124864c","Type":"ContainerStarted","Data":"fa16ff01a4568869a90824a9aa2507c9424884363c5cc8e7bfb7e3f65c8cfa1e"} Apr 23 17:57:08.307448 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:08.307409 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ba35557c-7e83-4e76-966c-6bd98124864c","Type":"ContainerStarted","Data":"8d206c6873404999ab3675574ba3ab2dc62bbd3639c7b025080ab77b59c3e55e"} Apr 23 17:57:08.307448 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:08.307448 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ba35557c-7e83-4e76-966c-6bd98124864c","Type":"ContainerStarted","Data":"2ac07a2a440dc2c2c77bc39b2a681c15d1382e5ab7d8f0c6ddc958f8e457a136"} Apr 23 17:57:08.307448 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:08.307457 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ba35557c-7e83-4e76-966c-6bd98124864c","Type":"ContainerStarted","Data":"48f67b2550d7930651b3e8d28cf4a347fac36e7e2e33814a14d74a6d3a2ce967"} Apr 23 17:57:08.308066 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:08.307465 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ba35557c-7e83-4e76-966c-6bd98124864c","Type":"ContainerStarted","Data":"e7c269e69b6ae9292d7f3e31f9a726b2867120f30e533383a37dbfed01febc9c"} Apr 23 17:57:08.308066 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:08.307474 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ba35557c-7e83-4e76-966c-6bd98124864c","Type":"ContainerStarted","Data":"9ac40a17a292e50c3c19dc07eb7bc42254166b271f76d1879a97e255a1799b42"} Apr 23 17:57:08.308066 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:08.307481 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ba35557c-7e83-4e76-966c-6bd98124864c","Type":"ContainerStarted","Data":"8c86e567701d2a300d74381de9c8cdda41027813489d98506a5834dd4d6af44f"} Apr 23 17:57:08.332266 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:08.332217 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=2.332204065 podStartE2EDuration="2.332204065s" podCreationTimestamp="2026-04-23 17:57:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:57:08.330356697 +0000 UTC m=+287.625635294" watchObservedRunningTime="2026-04-23 17:57:08.332204065 +0000 UTC m=+287.627482662" Apr 23 17:57:21.122266 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:21.122231 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-5kpdc_9334253b-6eff-4ad7-9cc7-5d96bdb994ad/console-operator/1.log" Apr 23 17:57:21.122266 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:21.122248 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-5kpdc_9334253b-6eff-4ad7-9cc7-5d96bdb994ad/console-operator/1.log" Apr 23 17:57:21.131441 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:21.131412 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_7fc0473024b4c48d914a6628102ac7a2/kube-rbac-proxy-crio/4.log" Apr 23 17:57:21.131670 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:21.131648 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_7fc0473024b4c48d914a6628102ac7a2/kube-rbac-proxy-crio/4.log" Apr 23 17:57:21.134368 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:57:21.134353 2566 kubelet.go:1628] "Image garbage collection succeeded" Apr 23 17:59:31.024494 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:59:31.024462 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/seaweedfs-86cc847c5c-7jdzk"] Apr 23 17:59:31.028017 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:59:31.027996 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/seaweedfs-86cc847c5c-7jdzk" Apr 23 17:59:31.031263 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:59:31.031243 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"openshift-service-ca.crt\"" Apr 23 17:59:31.031416 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:59:31.031397 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"default-dockercfg-2clj7\"" Apr 23 17:59:31.031496 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:59:31.031412 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"kube-root-ca.crt\"" Apr 23 17:59:31.031699 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:59:31.031681 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"mlpipeline-s3-artifact\"" Apr 23 17:59:31.055557 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:59:31.055527 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/seaweedfs-86cc847c5c-7jdzk"] Apr 23 17:59:31.198256 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:59:31.198227 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8xcm\" (UniqueName: \"kubernetes.io/projected/3662d547-b89a-4fd9-a546-64b76599844f-kube-api-access-z8xcm\") pod \"seaweedfs-86cc847c5c-7jdzk\" (UID: \"3662d547-b89a-4fd9-a546-64b76599844f\") " pod="kserve/seaweedfs-86cc847c5c-7jdzk" Apr 23 17:59:31.198256 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:59:31.198266 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/3662d547-b89a-4fd9-a546-64b76599844f-data\") pod \"seaweedfs-86cc847c5c-7jdzk\" (UID: \"3662d547-b89a-4fd9-a546-64b76599844f\") " pod="kserve/seaweedfs-86cc847c5c-7jdzk" Apr 23 17:59:31.299573 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:59:31.299546 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z8xcm\" (UniqueName: \"kubernetes.io/projected/3662d547-b89a-4fd9-a546-64b76599844f-kube-api-access-z8xcm\") pod \"seaweedfs-86cc847c5c-7jdzk\" (UID: \"3662d547-b89a-4fd9-a546-64b76599844f\") " pod="kserve/seaweedfs-86cc847c5c-7jdzk" Apr 23 17:59:31.299732 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:59:31.299586 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/3662d547-b89a-4fd9-a546-64b76599844f-data\") pod \"seaweedfs-86cc847c5c-7jdzk\" (UID: \"3662d547-b89a-4fd9-a546-64b76599844f\") " pod="kserve/seaweedfs-86cc847c5c-7jdzk" Apr 23 17:59:31.299928 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:59:31.299902 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/3662d547-b89a-4fd9-a546-64b76599844f-data\") pod \"seaweedfs-86cc847c5c-7jdzk\" (UID: \"3662d547-b89a-4fd9-a546-64b76599844f\") " pod="kserve/seaweedfs-86cc847c5c-7jdzk" Apr 23 17:59:31.311010 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:59:31.310989 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8xcm\" (UniqueName: \"kubernetes.io/projected/3662d547-b89a-4fd9-a546-64b76599844f-kube-api-access-z8xcm\") pod \"seaweedfs-86cc847c5c-7jdzk\" (UID: \"3662d547-b89a-4fd9-a546-64b76599844f\") " pod="kserve/seaweedfs-86cc847c5c-7jdzk" Apr 23 17:59:31.337791 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:59:31.337768 2566 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="kserve/seaweedfs-86cc847c5c-7jdzk" Apr 23 17:59:31.468123 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:59:31.468098 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/seaweedfs-86cc847c5c-7jdzk"] Apr 23 17:59:31.470801 ip-10-0-136-172 kubenswrapper[2566]: W0423 17:59:31.470765 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3662d547_b89a_4fd9_a546_64b76599844f.slice/crio-8cb9a0192e4eae3ff643efd3878fad1e440a99362b6ebb77b298a4b96d70fe32 WatchSource:0}: Error finding container 8cb9a0192e4eae3ff643efd3878fad1e440a99362b6ebb77b298a4b96d70fe32: Status 404 returned error can't find the container with id 8cb9a0192e4eae3ff643efd3878fad1e440a99362b6ebb77b298a4b96d70fe32 Apr 23 17:59:31.472191 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:59:31.472172 2566 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 23 17:59:31.779916 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:59:31.779760 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/seaweedfs-86cc847c5c-7jdzk" event={"ID":"3662d547-b89a-4fd9-a546-64b76599844f","Type":"ContainerStarted","Data":"8cb9a0192e4eae3ff643efd3878fad1e440a99362b6ebb77b298a4b96d70fe32"} Apr 23 17:59:34.791287 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:59:34.791253 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/seaweedfs-86cc847c5c-7jdzk" event={"ID":"3662d547-b89a-4fd9-a546-64b76599844f","Type":"ContainerStarted","Data":"d930cca43aee16367b2e5e615ff253b166066a1b9eeafd196105fdbafd5d580f"} Apr 23 17:59:34.791666 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:59:34.791442 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/seaweedfs-86cc847c5c-7jdzk" Apr 23 17:59:34.816546 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:59:34.816494 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/seaweedfs-86cc847c5c-7jdzk" podStartSLOduration=2.038779393 podStartE2EDuration="4.816479634s" podCreationTimestamp="2026-04-23 17:59:30 +0000 UTC" firstStartedPulling="2026-04-23 17:59:31.472332244 +0000 UTC m=+430.767610819" lastFinishedPulling="2026-04-23 17:59:34.250032484 +0000 UTC m=+433.545311060" observedRunningTime="2026-04-23 17:59:34.813728294 +0000 UTC m=+434.109006892" watchObservedRunningTime="2026-04-23 17:59:34.816479634 +0000 UTC m=+434.111758230" Apr 23 17:59:40.796364 ip-10-0-136-172 kubenswrapper[2566]: I0423 17:59:40.796327 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/seaweedfs-86cc847c5c-7jdzk" Apr 23 18:00:39.044413 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:00:39.044373 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/odh-model-controller-696fc77849-sxpn7"] Apr 23 18:00:39.047219 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:00:39.047200 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/odh-model-controller-696fc77849-sxpn7" Apr 23 18:00:39.049601 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:00:39.049571 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"odh-model-controller-webhook-cert\"" Apr 23 18:00:39.049601 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:00:39.049585 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"odh-model-controller-dockercfg-fhdfx\"" Apr 23 18:00:39.061960 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:00:39.061918 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/odh-model-controller-696fc77849-sxpn7"] Apr 23 18:00:39.136585 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:00:39.136541 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5vfb\" (UniqueName: \"kubernetes.io/projected/4bb588be-ae32-4e65-a5f9-3ebc133a9691-kube-api-access-n5vfb\") pod \"odh-model-controller-696fc77849-sxpn7\" (UID: \"4bb588be-ae32-4e65-a5f9-3ebc133a9691\") " pod="kserve/odh-model-controller-696fc77849-sxpn7" Apr 23 18:00:39.136762 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:00:39.136697 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4bb588be-ae32-4e65-a5f9-3ebc133a9691-cert\") pod \"odh-model-controller-696fc77849-sxpn7\" (UID: \"4bb588be-ae32-4e65-a5f9-3ebc133a9691\") " pod="kserve/odh-model-controller-696fc77849-sxpn7" Apr 23 18:00:39.237396 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:00:39.237351 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4bb588be-ae32-4e65-a5f9-3ebc133a9691-cert\") pod \"odh-model-controller-696fc77849-sxpn7\" (UID: \"4bb588be-ae32-4e65-a5f9-3ebc133a9691\") " pod="kserve/odh-model-controller-696fc77849-sxpn7" Apr 23 18:00:39.237586 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:00:39.237417 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n5vfb\" (UniqueName: \"kubernetes.io/projected/4bb588be-ae32-4e65-a5f9-3ebc133a9691-kube-api-access-n5vfb\") pod \"odh-model-controller-696fc77849-sxpn7\" (UID: \"4bb588be-ae32-4e65-a5f9-3ebc133a9691\") " pod="kserve/odh-model-controller-696fc77849-sxpn7" Apr 23 18:00:39.240003 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:00:39.239978 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4bb588be-ae32-4e65-a5f9-3ebc133a9691-cert\") pod \"odh-model-controller-696fc77849-sxpn7\" (UID: \"4bb588be-ae32-4e65-a5f9-3ebc133a9691\") " pod="kserve/odh-model-controller-696fc77849-sxpn7" Apr 23 18:00:39.248246 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:00:39.248223 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5vfb\" (UniqueName: \"kubernetes.io/projected/4bb588be-ae32-4e65-a5f9-3ebc133a9691-kube-api-access-n5vfb\") pod \"odh-model-controller-696fc77849-sxpn7\" (UID: \"4bb588be-ae32-4e65-a5f9-3ebc133a9691\") " pod="kserve/odh-model-controller-696fc77849-sxpn7" Apr 23 18:00:39.359180 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:00:39.359097 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/odh-model-controller-696fc77849-sxpn7" Apr 23 18:00:39.482000 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:00:39.481975 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/odh-model-controller-696fc77849-sxpn7"] Apr 23 18:00:39.484573 ip-10-0-136-172 kubenswrapper[2566]: W0423 18:00:39.484543 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bb588be_ae32_4e65_a5f9_3ebc133a9691.slice/crio-9f3ec7765f4a3d41dd8aad498687fca80fcc300e20b2d381f8bb483a190801f8 WatchSource:0}: Error finding container 9f3ec7765f4a3d41dd8aad498687fca80fcc300e20b2d381f8bb483a190801f8: Status 404 returned error can't find the container with id 9f3ec7765f4a3d41dd8aad498687fca80fcc300e20b2d381f8bb483a190801f8 Apr 23 18:00:40.020946 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:00:40.020904 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/odh-model-controller-696fc77849-sxpn7" event={"ID":"4bb588be-ae32-4e65-a5f9-3ebc133a9691","Type":"ContainerStarted","Data":"9f3ec7765f4a3d41dd8aad498687fca80fcc300e20b2d381f8bb483a190801f8"} Apr 23 18:00:43.033621 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:00:43.033560 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/odh-model-controller-696fc77849-sxpn7" event={"ID":"4bb588be-ae32-4e65-a5f9-3ebc133a9691","Type":"ContainerStarted","Data":"87a7169c4b1a6fc67d162ddddc3ea6fd7283393d85c090610de60c7f3c6d1c63"} Apr 23 18:00:43.034041 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:00:43.033700 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/odh-model-controller-696fc77849-sxpn7" Apr 23 18:00:43.056787 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:00:43.056733 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/odh-model-controller-696fc77849-sxpn7" podStartSLOduration=1.2712556560000001 podStartE2EDuration="4.05671816s" podCreationTimestamp="2026-04-23 18:00:39 +0000 UTC" firstStartedPulling="2026-04-23 18:00:39.485729087 +0000 UTC m=+498.781007662" lastFinishedPulling="2026-04-23 18:00:42.27119159 +0000 UTC m=+501.566470166" observedRunningTime="2026-04-23 18:00:43.055542999 +0000 UTC m=+502.350821596" watchObservedRunningTime="2026-04-23 18:00:43.05671816 +0000 UTC m=+502.351996757" Apr 23 18:00:54.039564 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:00:54.039530 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/odh-model-controller-696fc77849-sxpn7" Apr 23 18:00:54.859691 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:00:54.859650 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/s3-init-pss6t"] Apr 23 18:00:54.867173 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:00:54.867146 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/s3-init-pss6t" Apr 23 18:00:54.867919 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:00:54.867895 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/s3-init-pss6t"] Apr 23 18:00:54.973101 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:00:54.973060 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqr4m\" (UniqueName: \"kubernetes.io/projected/acda807c-12f0-4da8-9932-7882d0ba9f05-kube-api-access-zqr4m\") pod \"s3-init-pss6t\" (UID: \"acda807c-12f0-4da8-9932-7882d0ba9f05\") " pod="kserve/s3-init-pss6t" Apr 23 18:00:55.074357 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:00:55.074317 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zqr4m\" (UniqueName: \"kubernetes.io/projected/acda807c-12f0-4da8-9932-7882d0ba9f05-kube-api-access-zqr4m\") pod \"s3-init-pss6t\" (UID: \"acda807c-12f0-4da8-9932-7882d0ba9f05\") " pod="kserve/s3-init-pss6t" Apr 23 18:00:55.083094 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:00:55.083066 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqr4m\" (UniqueName: \"kubernetes.io/projected/acda807c-12f0-4da8-9932-7882d0ba9f05-kube-api-access-zqr4m\") pod \"s3-init-pss6t\" (UID: \"acda807c-12f0-4da8-9932-7882d0ba9f05\") " pod="kserve/s3-init-pss6t" Apr 23 18:00:55.198032 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:00:55.197944 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/s3-init-pss6t" Apr 23 18:00:55.324623 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:00:55.324518 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/s3-init-pss6t"] Apr 23 18:00:55.327841 ip-10-0-136-172 kubenswrapper[2566]: W0423 18:00:55.327807 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacda807c_12f0_4da8_9932_7882d0ba9f05.slice/crio-12ea9b869d02d8f060aff08481f999e69564003fcaa3f9427fd2b33c52e9431b WatchSource:0}: Error finding container 12ea9b869d02d8f060aff08481f999e69564003fcaa3f9427fd2b33c52e9431b: Status 404 returned error can't find the container with id 12ea9b869d02d8f060aff08481f999e69564003fcaa3f9427fd2b33c52e9431b Apr 23 18:00:56.083053 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:00:56.083005 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/s3-init-pss6t" event={"ID":"acda807c-12f0-4da8-9932-7882d0ba9f05","Type":"ContainerStarted","Data":"12ea9b869d02d8f060aff08481f999e69564003fcaa3f9427fd2b33c52e9431b"} Apr 23 18:01:00.101032 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:00.100944 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/s3-init-pss6t" event={"ID":"acda807c-12f0-4da8-9932-7882d0ba9f05","Type":"ContainerStarted","Data":"0b6fbafcfeb444f19ca17ba6fb35114ea519c7b5cb7d2a1b4aea399995ed4d08"} Apr 23 18:01:00.116207 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:00.116157 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/s3-init-pss6t" podStartSLOduration=1.617471925 podStartE2EDuration="6.116140177s" podCreationTimestamp="2026-04-23 18:00:54 +0000 UTC" firstStartedPulling="2026-04-23 18:00:55.330116443 +0000 UTC m=+514.625395018" lastFinishedPulling="2026-04-23 18:00:59.828784696 +0000 UTC m=+519.124063270" observedRunningTime="2026-04-23 18:01:00.115712814 +0000 UTC m=+519.410991412" watchObservedRunningTime="2026-04-23 18:01:00.116140177 +0000 UTC 
m=+519.411418775" Apr 23 18:01:03.113435 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:03.113349 2566 generic.go:358] "Generic (PLEG): container finished" podID="acda807c-12f0-4da8-9932-7882d0ba9f05" containerID="0b6fbafcfeb444f19ca17ba6fb35114ea519c7b5cb7d2a1b4aea399995ed4d08" exitCode=0 Apr 23 18:01:03.113435 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:03.113410 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/s3-init-pss6t" event={"ID":"acda807c-12f0-4da8-9932-7882d0ba9f05","Type":"ContainerDied","Data":"0b6fbafcfeb444f19ca17ba6fb35114ea519c7b5cb7d2a1b4aea399995ed4d08"} Apr 23 18:01:04.242005 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:04.241983 2566 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve/s3-init-pss6t" Apr 23 18:01:04.252680 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:04.252654 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqr4m\" (UniqueName: \"kubernetes.io/projected/acda807c-12f0-4da8-9932-7882d0ba9f05-kube-api-access-zqr4m\") pod \"acda807c-12f0-4da8-9932-7882d0ba9f05\" (UID: \"acda807c-12f0-4da8-9932-7882d0ba9f05\") " Apr 23 18:01:04.255066 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:04.255042 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acda807c-12f0-4da8-9932-7882d0ba9f05-kube-api-access-zqr4m" (OuterVolumeSpecName: "kube-api-access-zqr4m") pod "acda807c-12f0-4da8-9932-7882d0ba9f05" (UID: "acda807c-12f0-4da8-9932-7882d0ba9f05"). InnerVolumeSpecName "kube-api-access-zqr4m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 18:01:04.353840 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:04.353809 2566 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zqr4m\" (UniqueName: \"kubernetes.io/projected/acda807c-12f0-4da8-9932-7882d0ba9f05-kube-api-access-zqr4m\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 18:01:05.121358 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:05.121326 2566 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve/s3-init-pss6t" Apr 23 18:01:05.121531 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:05.121327 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/s3-init-pss6t" event={"ID":"acda807c-12f0-4da8-9932-7882d0ba9f05","Type":"ContainerDied","Data":"12ea9b869d02d8f060aff08481f999e69564003fcaa3f9427fd2b33c52e9431b"} Apr 23 18:01:05.121531 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:05.121435 2566 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12ea9b869d02d8f060aff08481f999e69564003fcaa3f9427fd2b33c52e9431b" Apr 23 18:01:15.622848 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:15.622813 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m"] Apr 23 18:01:15.623445 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:15.623223 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="acda807c-12f0-4da8-9932-7882d0ba9f05" containerName="s3-init" Apr 23 18:01:15.623445 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:15.623235 2566 state_mem.go:107] "Deleted CPUSet assignment" podUID="acda807c-12f0-4da8-9932-7882d0ba9f05" containerName="s3-init" Apr 23 18:01:15.623445 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:15.623346 2566 memory_manager.go:356] "RemoveStaleState removing state" podUID="acda807c-12f0-4da8-9932-7882d0ba9f05" containerName="s3-init" Apr 23 18:01:15.627227 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:15.627202 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" Apr 23 18:01:15.630190 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:15.630169 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"openshift-service-ca.crt\"" Apr 23 18:01:15.630504 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:15.630488 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"isvc-sklearn-graph-1-kube-rbac-proxy-sar-config\"" Apr 23 18:01:15.630600 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:15.630509 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-jv9tx\"" Apr 23 18:01:15.630600 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:15.630535 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"isvc-sklearn-graph-1-predictor-serving-cert\"" Apr 23 18:01:15.630600 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:15.630588 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"kube-root-ca.crt\"" Apr 23 18:01:15.637428 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:15.637403 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m"] Apr 23 18:01:15.653847 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:15.653816 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"isvc-sklearn-graph-1-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/0384d20a-86ac-4a4d-85e3-6e6f1f775895-isvc-sklearn-graph-1-kube-rbac-proxy-sar-config\") pod \"isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m\" (UID: \"0384d20a-86ac-4a4d-85e3-6e6f1f775895\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" Apr 23 
18:01:15.653950 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:15.653885 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0384d20a-86ac-4a4d-85e3-6e6f1f775895-proxy-tls\") pod \"isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m\" (UID: \"0384d20a-86ac-4a4d-85e3-6e6f1f775895\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" Apr 23 18:01:15.653950 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:15.653932 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/0384d20a-86ac-4a4d-85e3-6e6f1f775895-kserve-provision-location\") pod \"isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m\" (UID: \"0384d20a-86ac-4a4d-85e3-6e6f1f775895\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" Apr 23 18:01:15.653950 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:15.653948 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt5p4\" (UniqueName: \"kubernetes.io/projected/0384d20a-86ac-4a4d-85e3-6e6f1f775895-kube-api-access-rt5p4\") pod \"isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m\" (UID: \"0384d20a-86ac-4a4d-85e3-6e6f1f775895\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" Apr 23 18:01:15.754736 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:15.754708 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0384d20a-86ac-4a4d-85e3-6e6f1f775895-proxy-tls\") pod \"isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m\" (UID: \"0384d20a-86ac-4a4d-85e3-6e6f1f775895\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" Apr 23 18:01:15.754930 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:15.754750 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/0384d20a-86ac-4a4d-85e3-6e6f1f775895-kserve-provision-location\") pod \"isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m\" (UID: \"0384d20a-86ac-4a4d-85e3-6e6f1f775895\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" Apr 23 18:01:15.754930 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:15.754771 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rt5p4\" (UniqueName: \"kubernetes.io/projected/0384d20a-86ac-4a4d-85e3-6e6f1f775895-kube-api-access-rt5p4\") pod \"isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m\" (UID: \"0384d20a-86ac-4a4d-85e3-6e6f1f775895\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" Apr 23 18:01:15.754930 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:15.754797 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"isvc-sklearn-graph-1-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/0384d20a-86ac-4a4d-85e3-6e6f1f775895-isvc-sklearn-graph-1-kube-rbac-proxy-sar-config\") pod \"isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m\" (UID: \"0384d20a-86ac-4a4d-85e3-6e6f1f775895\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" Apr 23 18:01:15.755186 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:15.755163 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: 
\"kubernetes.io/empty-dir/0384d20a-86ac-4a4d-85e3-6e6f1f775895-kserve-provision-location\") pod \"isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m\" (UID: \"0384d20a-86ac-4a4d-85e3-6e6f1f775895\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" Apr 23 18:01:15.755577 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:15.755552 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"isvc-sklearn-graph-1-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/0384d20a-86ac-4a4d-85e3-6e6f1f775895-isvc-sklearn-graph-1-kube-rbac-proxy-sar-config\") pod \"isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m\" (UID: \"0384d20a-86ac-4a4d-85e3-6e6f1f775895\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" Apr 23 18:01:15.757428 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:15.757408 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0384d20a-86ac-4a4d-85e3-6e6f1f775895-proxy-tls\") pod \"isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m\" (UID: \"0384d20a-86ac-4a4d-85e3-6e6f1f775895\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" Apr 23 18:01:15.766253 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:15.766232 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt5p4\" (UniqueName: \"kubernetes.io/projected/0384d20a-86ac-4a4d-85e3-6e6f1f775895-kube-api-access-rt5p4\") pod \"isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m\" (UID: \"0384d20a-86ac-4a4d-85e3-6e6f1f775895\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" Apr 23 18:01:15.941124 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:15.941050 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" Apr 23 18:01:16.011139 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:16.008970 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6"] Apr 23 18:01:16.015166 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:16.015140 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6" Apr 23 18:01:16.017548 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:16.017421 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"error-404-isvc-e7d75-kube-rbac-proxy-sar-config\"" Apr 23 18:01:16.017548 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:16.017514 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"error-404-isvc-e7d75-predictor-serving-cert\"" Apr 23 18:01:16.021036 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:16.020932 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6"] Apr 23 18:01:16.057917 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:16.057892 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6b172450-c872-4357-bc94-d89fe33e4343-proxy-tls\") pod \"error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6\" (UID: \"6b172450-c872-4357-bc94-d89fe33e4343\") " pod="kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6" Apr 23 18:01:16.058060 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:16.057951 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njrjp\" (UniqueName: \"kubernetes.io/projected/6b172450-c872-4357-bc94-d89fe33e4343-kube-api-access-njrjp\") pod \"error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6\" (UID: \"6b172450-c872-4357-bc94-d89fe33e4343\") " pod="kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6" Apr 23 18:01:16.058060 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:16.058032 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"error-404-isvc-e7d75-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/6b172450-c872-4357-bc94-d89fe33e4343-error-404-isvc-e7d75-kube-rbac-proxy-sar-config\") pod \"error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6\" (UID: \"6b172450-c872-4357-bc94-d89fe33e4343\") " pod="kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6" Apr 23 18:01:16.096040 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:16.096012 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m"] Apr 23 18:01:16.098653 ip-10-0-136-172 kubenswrapper[2566]: W0423 18:01:16.098626 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0384d20a_86ac_4a4d_85e3_6e6f1f775895.slice/crio-b2e9320f14236a3ba090e65d28974ab246a297a24e728f880e8621ce3bc54a13 WatchSource:0}: Error finding container b2e9320f14236a3ba090e65d28974ab246a297a24e728f880e8621ce3bc54a13: Status 404 returned error can't find the container with id b2e9320f14236a3ba090e65d28974ab246a297a24e728f880e8621ce3bc54a13 Apr 23 18:01:16.159486 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:16.159454 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-njrjp\" (UniqueName: \"kubernetes.io/projected/6b172450-c872-4357-bc94-d89fe33e4343-kube-api-access-njrjp\") pod \"error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6\" (UID: \"6b172450-c872-4357-bc94-d89fe33e4343\") " pod="kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6" Apr 23 18:01:16.159641 ip-10-0-136-172 
kubenswrapper[2566]: I0423 18:01:16.159522 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"error-404-isvc-e7d75-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/6b172450-c872-4357-bc94-d89fe33e4343-error-404-isvc-e7d75-kube-rbac-proxy-sar-config\") pod \"error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6\" (UID: \"6b172450-c872-4357-bc94-d89fe33e4343\") " pod="kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6" Apr 23 18:01:16.159641 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:16.159599 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6b172450-c872-4357-bc94-d89fe33e4343-proxy-tls\") pod \"error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6\" (UID: \"6b172450-c872-4357-bc94-d89fe33e4343\") " pod="kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6" Apr 23 18:01:16.160319 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:16.160280 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"error-404-isvc-e7d75-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/6b172450-c872-4357-bc94-d89fe33e4343-error-404-isvc-e7d75-kube-rbac-proxy-sar-config\") pod \"error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6\" (UID: \"6b172450-c872-4357-bc94-d89fe33e4343\") " pod="kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6" Apr 23 18:01:16.162119 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:16.162102 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6b172450-c872-4357-bc94-d89fe33e4343-proxy-tls\") pod \"error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6\" (UID: \"6b172450-c872-4357-bc94-d89fe33e4343\") " pod="kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6" Apr 23 18:01:16.166462 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:16.166428 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" event={"ID":"0384d20a-86ac-4a4d-85e3-6e6f1f775895","Type":"ContainerStarted","Data":"b2e9320f14236a3ba090e65d28974ab246a297a24e728f880e8621ce3bc54a13"} Apr 23 18:01:16.167748 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:16.167729 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-njrjp\" (UniqueName: \"kubernetes.io/projected/6b172450-c872-4357-bc94-d89fe33e4343-kube-api-access-njrjp\") pod \"error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6\" (UID: \"6b172450-c872-4357-bc94-d89fe33e4343\") " pod="kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6" Apr 23 18:01:16.331414 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:16.331369 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6" Apr 23 18:01:16.471820 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:16.471786 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6"] Apr 23 18:01:16.475021 ip-10-0-136-172 kubenswrapper[2566]: W0423 18:01:16.474993 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b172450_c872_4357_bc94_d89fe33e4343.slice/crio-6d236f8e6744397eed0bee3130b7f3699e31b494a861df55d3f0e5fb46455b89 WatchSource:0}: Error finding container 6d236f8e6744397eed0bee3130b7f3699e31b494a861df55d3f0e5fb46455b89: Status 404 returned error can't find the container with id 6d236f8e6744397eed0bee3130b7f3699e31b494a861df55d3f0e5fb46455b89 Apr 23 18:01:17.176938 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:17.176898 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6" event={"ID":"6b172450-c872-4357-bc94-d89fe33e4343","Type":"ContainerStarted","Data":"6d236f8e6744397eed0bee3130b7f3699e31b494a861df55d3f0e5fb46455b89"} Apr 23 18:01:30.248262 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:30.248234 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" event={"ID":"0384d20a-86ac-4a4d-85e3-6e6f1f775895","Type":"ContainerStarted","Data":"250aa138829a2c53b72882934eec75463a6481eee2e635353ceb20480ddc29c4"} Apr 23 18:01:31.258651 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:31.258602 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6" event={"ID":"6b172450-c872-4357-bc94-d89fe33e4343","Type":"ContainerStarted","Data":"c1602bbc64409dacee2143d39532b1c9ca638d1658bce516b3dcbf970f01b90d"} Apr 23 18:01:33.270459 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:33.270363 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6" event={"ID":"6b172450-c872-4357-bc94-d89fe33e4343","Type":"ContainerStarted","Data":"06e843007ceee2a8db8d4346e4992d8cc154561dba9431267b6c485e7061389f"} Apr 23 18:01:33.270869 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:33.270587 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6" Apr 23 18:01:33.270869 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:33.270620 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6" Apr 23 18:01:33.272146 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:33.272118 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6" podUID="6b172450-c872-4357-bc94-d89fe33e4343" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.36:8080: connect: connection refused" Apr 23 18:01:33.287613 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:33.287562 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6" podStartSLOduration=1.905708106 podStartE2EDuration="18.287545869s" podCreationTimestamp="2026-04-23 18:01:15 +0000 UTC" firstStartedPulling="2026-04-23 
18:01:16.477289443 +0000 UTC m=+535.772568018" lastFinishedPulling="2026-04-23 18:01:32.859127206 +0000 UTC m=+552.154405781" observedRunningTime="2026-04-23 18:01:33.286126714 +0000 UTC m=+552.581405310" watchObservedRunningTime="2026-04-23 18:01:33.287545869 +0000 UTC m=+552.582824466" Apr 23 18:01:34.273952 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:34.273918 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6" podUID="6b172450-c872-4357-bc94-d89fe33e4343" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.36:8080: connect: connection refused" Apr 23 18:01:35.277459 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:35.277426 2566 generic.go:358] "Generic (PLEG): container finished" podID="0384d20a-86ac-4a4d-85e3-6e6f1f775895" containerID="250aa138829a2c53b72882934eec75463a6481eee2e635353ceb20480ddc29c4" exitCode=0 Apr 23 18:01:35.277827 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:35.277504 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" event={"ID":"0384d20a-86ac-4a4d-85e3-6e6f1f775895","Type":"ContainerDied","Data":"250aa138829a2c53b72882934eec75463a6481eee2e635353ceb20480ddc29c4"} Apr 23 18:01:39.279474 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:39.279439 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6" Apr 23 18:01:39.280024 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:39.279995 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6" podUID="6b172450-c872-4357-bc94-d89fe33e4343" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.36:8080: connect: connection refused" Apr 23 18:01:42.309684 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:42.309653 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" event={"ID":"0384d20a-86ac-4a4d-85e3-6e6f1f775895","Type":"ContainerStarted","Data":"40b1c47ed5a47e6c5058503ef0058b802fed31031d97c5d5246196177bf17570"} Apr 23 18:01:42.310127 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:42.309691 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" event={"ID":"0384d20a-86ac-4a4d-85e3-6e6f1f775895","Type":"ContainerStarted","Data":"e0368a05e4569bc2312ac9072736e2d7c57c065b90d9eba7b941360f131134cf"} Apr 23 18:01:42.310127 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:42.309956 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" Apr 23 18:01:42.329511 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:42.329460 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" podStartSLOduration=1.629991985 podStartE2EDuration="27.329446763s" podCreationTimestamp="2026-04-23 18:01:15 +0000 UTC" firstStartedPulling="2026-04-23 18:01:16.100518693 +0000 UTC m=+535.395797268" lastFinishedPulling="2026-04-23 18:01:41.799973469 +0000 UTC m=+561.095252046" observedRunningTime="2026-04-23 18:01:42.327629929 +0000 UTC m=+561.622908531" watchObservedRunningTime="2026-04-23 18:01:42.329446763 +0000 UTC m=+561.624725359" Apr 23 
18:01:43.312960 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:43.312926 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" Apr 23 18:01:43.314074 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:43.314048 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" podUID="0384d20a-86ac-4a4d-85e3-6e6f1f775895" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.35:8080: connect: connection refused" Apr 23 18:01:44.316451 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:44.316406 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" podUID="0384d20a-86ac-4a4d-85e3-6e6f1f775895" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.35:8080: connect: connection refused" Apr 23 18:01:49.280591 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:49.280555 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6" podUID="6b172450-c872-4357-bc94-d89fe33e4343" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.36:8080: connect: connection refused" Apr 23 18:01:49.320281 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:49.320251 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" Apr 23 18:01:49.320769 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:49.320746 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" podUID="0384d20a-86ac-4a4d-85e3-6e6f1f775895" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.35:8080: connect: connection refused" Apr 23 18:01:59.280556 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:59.280516 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6" podUID="6b172450-c872-4357-bc94-d89fe33e4343" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.36:8080: connect: connection refused" Apr 23 18:01:59.321206 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:01:59.321170 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" podUID="0384d20a-86ac-4a4d-85e3-6e6f1f775895" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.35:8080: connect: connection refused" Apr 23 18:02:09.280763 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:09.280720 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6" podUID="6b172450-c872-4357-bc94-d89fe33e4343" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.36:8080: connect: connection refused" Apr 23 18:02:09.320785 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:09.320743 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" podUID="0384d20a-86ac-4a4d-85e3-6e6f1f775895" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.35:8080: connect: connection refused" Apr 23 18:02:19.281457 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:19.281425 
Apr 23 18:02:19.321497 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:19.321455 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" podUID="0384d20a-86ac-4a4d-85e3-6e6f1f775895" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.35:8080: connect: connection refused"
Apr 23 18:02:21.152968 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:21.152937 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-5kpdc_9334253b-6eff-4ad7-9cc7-5d96bdb994ad/console-operator/1.log"
Apr 23 18:02:21.153700 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:21.153680 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-5kpdc_9334253b-6eff-4ad7-9cc7-5d96bdb994ad/console-operator/1.log"
Apr 23 18:02:21.159381 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:21.159355 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_7fc0473024b4c48d914a6628102ac7a2/kube-rbac-proxy-crio/4.log"
Apr 23 18:02:21.160176 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:21.160160 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_7fc0473024b4c48d914a6628102ac7a2/kube-rbac-proxy-crio/4.log"
Apr 23 18:02:29.321004 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:29.320960 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" podUID="0384d20a-86ac-4a4d-85e3-6e6f1f775895" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.35:8080: connect: connection refused"
Apr 23 18:02:39.320808 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:39.320762 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" podUID="0384d20a-86ac-4a4d-85e3-6e6f1f775895" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.35:8080: connect: connection refused"
Apr 23 18:02:45.887374 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:45.887337 2566 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6"]
Apr 23 18:02:45.887913 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:45.887674 2566 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6" podUID="6b172450-c872-4357-bc94-d89fe33e4343" containerName="kserve-container" containerID="cri-o://c1602bbc64409dacee2143d39532b1c9ca638d1658bce516b3dcbf970f01b90d" gracePeriod=30
Apr 23 18:02:45.887913 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:45.887718 2566 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6" podUID="6b172450-c872-4357-bc94-d89fe33e4343" containerName="kube-rbac-proxy" containerID="cri-o://06e843007ceee2a8db8d4346e4992d8cc154561dba9431267b6c485e7061389f" gracePeriod=30
Apr 23 18:02:46.033884 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:46.033849 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk"]
source="api" pods=["kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk"] Apr 23 18:02:46.037570 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:46.037553 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk" Apr 23 18:02:46.039457 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:46.039440 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"error-404-isvc-3d086-kube-rbac-proxy-sar-config\"" Apr 23 18:02:46.039541 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:46.039444 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"error-404-isvc-3d086-predictor-serving-cert\"" Apr 23 18:02:46.046092 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:46.046066 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk"] Apr 23 18:02:46.086362 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:46.086290 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6005bc75-ebb7-4cca-bc84-3cc0f0866a27-proxy-tls\") pod \"error-404-isvc-3d086-predictor-69454595bf-z64wk\" (UID: \"6005bc75-ebb7-4cca-bc84-3cc0f0866a27\") " pod="kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk" Apr 23 18:02:46.086536 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:46.086370 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"error-404-isvc-3d086-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/6005bc75-ebb7-4cca-bc84-3cc0f0866a27-error-404-isvc-3d086-kube-rbac-proxy-sar-config\") pod \"error-404-isvc-3d086-predictor-69454595bf-z64wk\" (UID: \"6005bc75-ebb7-4cca-bc84-3cc0f0866a27\") " pod="kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk" Apr 23 18:02:46.086536 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:46.086472 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrfvq\" (UniqueName: \"kubernetes.io/projected/6005bc75-ebb7-4cca-bc84-3cc0f0866a27-kube-api-access-qrfvq\") pod \"error-404-isvc-3d086-predictor-69454595bf-z64wk\" (UID: \"6005bc75-ebb7-4cca-bc84-3cc0f0866a27\") " pod="kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk" Apr 23 18:02:46.187220 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:46.187132 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6005bc75-ebb7-4cca-bc84-3cc0f0866a27-proxy-tls\") pod \"error-404-isvc-3d086-predictor-69454595bf-z64wk\" (UID: \"6005bc75-ebb7-4cca-bc84-3cc0f0866a27\") " pod="kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk" Apr 23 18:02:46.187220 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:46.187165 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"error-404-isvc-3d086-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/6005bc75-ebb7-4cca-bc84-3cc0f0866a27-error-404-isvc-3d086-kube-rbac-proxy-sar-config\") pod \"error-404-isvc-3d086-predictor-69454595bf-z64wk\" (UID: \"6005bc75-ebb7-4cca-bc84-3cc0f0866a27\") " pod="kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk" Apr 23 18:02:46.187220 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:46.187217 2566 
Apr 23 18:02:46.187899 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:46.187875 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"error-404-isvc-3d086-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/6005bc75-ebb7-4cca-bc84-3cc0f0866a27-error-404-isvc-3d086-kube-rbac-proxy-sar-config\") pod \"error-404-isvc-3d086-predictor-69454595bf-z64wk\" (UID: \"6005bc75-ebb7-4cca-bc84-3cc0f0866a27\") " pod="kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk"
Apr 23 18:02:46.189963 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:46.189931 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6005bc75-ebb7-4cca-bc84-3cc0f0866a27-proxy-tls\") pod \"error-404-isvc-3d086-predictor-69454595bf-z64wk\" (UID: \"6005bc75-ebb7-4cca-bc84-3cc0f0866a27\") " pod="kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk"
Apr 23 18:02:46.196149 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:46.196122 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrfvq\" (UniqueName: \"kubernetes.io/projected/6005bc75-ebb7-4cca-bc84-3cc0f0866a27-kube-api-access-qrfvq\") pod \"error-404-isvc-3d086-predictor-69454595bf-z64wk\" (UID: \"6005bc75-ebb7-4cca-bc84-3cc0f0866a27\") " pod="kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk"
Apr 23 18:02:46.351006 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:46.350965 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk"
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk" Apr 23 18:02:46.478457 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:46.478431 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk"] Apr 23 18:02:46.481361 ip-10-0-136-172 kubenswrapper[2566]: W0423 18:02:46.481334 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6005bc75_ebb7_4cca_bc84_3cc0f0866a27.slice/crio-d721a3a8f31403abc6e4f6c06ee58daa3b935574e8e73a58d3bde06412e80c25 WatchSource:0}: Error finding container d721a3a8f31403abc6e4f6c06ee58daa3b935574e8e73a58d3bde06412e80c25: Status 404 returned error can't find the container with id d721a3a8f31403abc6e4f6c06ee58daa3b935574e8e73a58d3bde06412e80c25 Apr 23 18:02:46.532426 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:46.532401 2566 generic.go:358] "Generic (PLEG): container finished" podID="6b172450-c872-4357-bc94-d89fe33e4343" containerID="06e843007ceee2a8db8d4346e4992d8cc154561dba9431267b6c485e7061389f" exitCode=2 Apr 23 18:02:46.532534 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:46.532476 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6" event={"ID":"6b172450-c872-4357-bc94-d89fe33e4343","Type":"ContainerDied","Data":"06e843007ceee2a8db8d4346e4992d8cc154561dba9431267b6c485e7061389f"} Apr 23 18:02:46.533527 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:46.533508 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk" event={"ID":"6005bc75-ebb7-4cca-bc84-3cc0f0866a27","Type":"ContainerStarted","Data":"d721a3a8f31403abc6e4f6c06ee58daa3b935574e8e73a58d3bde06412e80c25"} Apr 23 18:02:47.538807 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:47.538765 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk" event={"ID":"6005bc75-ebb7-4cca-bc84-3cc0f0866a27","Type":"ContainerStarted","Data":"a6b2512ea4970a987c321c86de9ab5e0539e1ae1d00eef0891de94d6d568a4af"} Apr 23 18:02:47.538807 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:47.538806 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk" event={"ID":"6005bc75-ebb7-4cca-bc84-3cc0f0866a27","Type":"ContainerStarted","Data":"0c441d8b946888d06901ebe3463bd62dabdbe790145c4f233832c585c67e4df2"} Apr 23 18:02:47.539213 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:47.538893 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk" Apr 23 18:02:47.539213 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:47.539002 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk" Apr 23 18:02:47.540376 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:47.540352 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk" podUID="6005bc75-ebb7-4cca-bc84-3cc0f0866a27" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.37:8080: connect: connection refused" Apr 23 18:02:47.557391 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:47.557347 2566 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk" podStartSLOduration=1.557336042 podStartE2EDuration="1.557336042s" podCreationTimestamp="2026-04-23 18:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:02:47.554834571 +0000 UTC m=+626.850113167" watchObservedRunningTime="2026-04-23 18:02:47.557336042 +0000 UTC m=+626.852614703" Apr 23 18:02:48.542550 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:48.542511 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk" podUID="6005bc75-ebb7-4cca-bc84-3cc0f0866a27" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.37:8080: connect: connection refused" Apr 23 18:02:48.934846 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:48.934823 2566 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6" Apr 23 18:02:49.014005 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:49.013974 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njrjp\" (UniqueName: \"kubernetes.io/projected/6b172450-c872-4357-bc94-d89fe33e4343-kube-api-access-njrjp\") pod \"6b172450-c872-4357-bc94-d89fe33e4343\" (UID: \"6b172450-c872-4357-bc94-d89fe33e4343\") " Apr 23 18:02:49.014181 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:49.014020 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"error-404-isvc-e7d75-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/6b172450-c872-4357-bc94-d89fe33e4343-error-404-isvc-e7d75-kube-rbac-proxy-sar-config\") pod \"6b172450-c872-4357-bc94-d89fe33e4343\" (UID: \"6b172450-c872-4357-bc94-d89fe33e4343\") " Apr 23 18:02:49.014181 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:49.014074 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6b172450-c872-4357-bc94-d89fe33e4343-proxy-tls\") pod \"6b172450-c872-4357-bc94-d89fe33e4343\" (UID: \"6b172450-c872-4357-bc94-d89fe33e4343\") " Apr 23 18:02:49.014493 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:49.014457 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b172450-c872-4357-bc94-d89fe33e4343-error-404-isvc-e7d75-kube-rbac-proxy-sar-config" (OuterVolumeSpecName: "error-404-isvc-e7d75-kube-rbac-proxy-sar-config") pod "6b172450-c872-4357-bc94-d89fe33e4343" (UID: "6b172450-c872-4357-bc94-d89fe33e4343"). InnerVolumeSpecName "error-404-isvc-e7d75-kube-rbac-proxy-sar-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:02:49.016255 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:49.016236 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b172450-c872-4357-bc94-d89fe33e4343-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "6b172450-c872-4357-bc94-d89fe33e4343" (UID: "6b172450-c872-4357-bc94-d89fe33e4343"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 18:02:49.016408 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:49.016392 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b172450-c872-4357-bc94-d89fe33e4343-kube-api-access-njrjp" (OuterVolumeSpecName: "kube-api-access-njrjp") pod "6b172450-c872-4357-bc94-d89fe33e4343" (UID: "6b172450-c872-4357-bc94-d89fe33e4343"). InnerVolumeSpecName "kube-api-access-njrjp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 18:02:49.114821 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:49.114740 2566 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6b172450-c872-4357-bc94-d89fe33e4343-proxy-tls\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 18:02:49.114821 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:49.114772 2566 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-njrjp\" (UniqueName: \"kubernetes.io/projected/6b172450-c872-4357-bc94-d89fe33e4343-kube-api-access-njrjp\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 18:02:49.114821 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:49.114786 2566 reconciler_common.go:299] "Volume detached for volume \"error-404-isvc-e7d75-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/6b172450-c872-4357-bc94-d89fe33e4343-error-404-isvc-e7d75-kube-rbac-proxy-sar-config\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 18:02:49.321510 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:49.321483 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" Apr 23 18:02:49.547273 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:49.547230 2566 generic.go:358] "Generic (PLEG): container finished" podID="6b172450-c872-4357-bc94-d89fe33e4343" containerID="c1602bbc64409dacee2143d39532b1c9ca638d1658bce516b3dcbf970f01b90d" exitCode=0 Apr 23 18:02:49.547721 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:49.547363 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6" event={"ID":"6b172450-c872-4357-bc94-d89fe33e4343","Type":"ContainerDied","Data":"c1602bbc64409dacee2143d39532b1c9ca638d1658bce516b3dcbf970f01b90d"} Apr 23 18:02:49.547721 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:49.547373 2566 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6" Apr 23 18:02:49.547721 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:49.547399 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6" event={"ID":"6b172450-c872-4357-bc94-d89fe33e4343","Type":"ContainerDied","Data":"6d236f8e6744397eed0bee3130b7f3699e31b494a861df55d3f0e5fb46455b89"} Apr 23 18:02:49.547721 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:49.547419 2566 scope.go:117] "RemoveContainer" containerID="06e843007ceee2a8db8d4346e4992d8cc154561dba9431267b6c485e7061389f" Apr 23 18:02:49.556409 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:49.556393 2566 scope.go:117] "RemoveContainer" containerID="c1602bbc64409dacee2143d39532b1c9ca638d1658bce516b3dcbf970f01b90d" Apr 23 18:02:49.563359 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:49.563289 2566 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6"] Apr 23 18:02:49.565280 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:49.565262 2566 scope.go:117] "RemoveContainer" containerID="06e843007ceee2a8db8d4346e4992d8cc154561dba9431267b6c485e7061389f" Apr 23 18:02:49.565547 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:02:49.565528 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06e843007ceee2a8db8d4346e4992d8cc154561dba9431267b6c485e7061389f\": container with ID starting with 06e843007ceee2a8db8d4346e4992d8cc154561dba9431267b6c485e7061389f not found: ID does not exist" containerID="06e843007ceee2a8db8d4346e4992d8cc154561dba9431267b6c485e7061389f" Apr 23 18:02:49.565636 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:49.565551 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06e843007ceee2a8db8d4346e4992d8cc154561dba9431267b6c485e7061389f"} err="failed to get container status \"06e843007ceee2a8db8d4346e4992d8cc154561dba9431267b6c485e7061389f\": rpc error: code = NotFound desc = could not find container \"06e843007ceee2a8db8d4346e4992d8cc154561dba9431267b6c485e7061389f\": container with ID starting with 06e843007ceee2a8db8d4346e4992d8cc154561dba9431267b6c485e7061389f not found: ID does not exist" Apr 23 18:02:49.565636 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:49.565573 2566 scope.go:117] "RemoveContainer" containerID="c1602bbc64409dacee2143d39532b1c9ca638d1658bce516b3dcbf970f01b90d" Apr 23 18:02:49.565828 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:49.565809 2566 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-e7d75-predictor-7d6bff459f-l5fn6"] Apr 23 18:02:49.565868 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:02:49.565815 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1602bbc64409dacee2143d39532b1c9ca638d1658bce516b3dcbf970f01b90d\": container with ID starting with c1602bbc64409dacee2143d39532b1c9ca638d1658bce516b3dcbf970f01b90d not found: ID does not exist" containerID="c1602bbc64409dacee2143d39532b1c9ca638d1658bce516b3dcbf970f01b90d" Apr 23 18:02:49.565903 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:49.565865 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1602bbc64409dacee2143d39532b1c9ca638d1658bce516b3dcbf970f01b90d"} err="failed to get container status 
\"c1602bbc64409dacee2143d39532b1c9ca638d1658bce516b3dcbf970f01b90d\": rpc error: code = NotFound desc = could not find container \"c1602bbc64409dacee2143d39532b1c9ca638d1658bce516b3dcbf970f01b90d\": container with ID starting with c1602bbc64409dacee2143d39532b1c9ca638d1658bce516b3dcbf970f01b90d not found: ID does not exist" Apr 23 18:02:51.254025 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:51.253987 2566 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b172450-c872-4357-bc94-d89fe33e4343" path="/var/lib/kubelet/pods/6b172450-c872-4357-bc94-d89fe33e4343/volumes" Apr 23 18:02:53.548102 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:53.548075 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk" Apr 23 18:02:53.548640 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:02:53.548611 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk" podUID="6005bc75-ebb7-4cca-bc84-3cc0f0866a27" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.37:8080: connect: connection refused" Apr 23 18:03:03.548792 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:03.548751 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk" podUID="6005bc75-ebb7-4cca-bc84-3cc0f0866a27" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.37:8080: connect: connection refused" Apr 23 18:03:13.548562 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:13.548520 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk" podUID="6005bc75-ebb7-4cca-bc84-3cc0f0866a27" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.37:8080: connect: connection refused" Apr 23 18:03:23.549165 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:23.549120 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk" podUID="6005bc75-ebb7-4cca-bc84-3cc0f0866a27" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.37:8080: connect: connection refused" Apr 23 18:03:25.857192 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:25.857158 2566 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m"] Apr 23 18:03:25.857666 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:25.857523 2566 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" podUID="0384d20a-86ac-4a4d-85e3-6e6f1f775895" containerName="kserve-container" containerID="cri-o://e0368a05e4569bc2312ac9072736e2d7c57c065b90d9eba7b941360f131134cf" gracePeriod=30 Apr 23 18:03:25.857666 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:25.857556 2566 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" podUID="0384d20a-86ac-4a4d-85e3-6e6f1f775895" containerName="kube-rbac-proxy" containerID="cri-o://40b1c47ed5a47e6c5058503ef0058b802fed31031d97c5d5246196177bf17570" gracePeriod=30 Apr 23 18:03:25.975233 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:25.975201 2566 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf"] Apr 23 18:03:25.975688 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:25.975668 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6b172450-c872-4357-bc94-d89fe33e4343" containerName="kserve-container" Apr 23 18:03:25.975764 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:25.975691 2566 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b172450-c872-4357-bc94-d89fe33e4343" containerName="kserve-container" Apr 23 18:03:25.975764 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:25.975715 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6b172450-c872-4357-bc94-d89fe33e4343" containerName="kube-rbac-proxy" Apr 23 18:03:25.975764 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:25.975724 2566 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b172450-c872-4357-bc94-d89fe33e4343" containerName="kube-rbac-proxy" Apr 23 18:03:25.976196 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:25.975864 2566 memory_manager.go:356] "RemoveStaleState removing state" podUID="6b172450-c872-4357-bc94-d89fe33e4343" containerName="kube-rbac-proxy" Apr 23 18:03:25.976196 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:25.975879 2566 memory_manager.go:356] "RemoveStaleState removing state" podUID="6b172450-c872-4357-bc94-d89fe33e4343" containerName="kserve-container" Apr 23 18:03:25.978972 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:25.978954 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf" Apr 23 18:03:25.981522 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:25.981497 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"error-404-isvc-0065f-predictor-serving-cert\"" Apr 23 18:03:25.981717 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:25.981700 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"error-404-isvc-0065f-kube-rbac-proxy-sar-config\"" Apr 23 18:03:25.991484 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:25.990368 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf"] Apr 23 18:03:26.038400 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:26.038366 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"error-404-isvc-0065f-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/1e3a4970-c2f9-4767-9862-9e8cc797ccc3-error-404-isvc-0065f-kube-rbac-proxy-sar-config\") pod \"error-404-isvc-0065f-predictor-585bf4fc7-ff7tf\" (UID: \"1e3a4970-c2f9-4767-9862-9e8cc797ccc3\") " pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf" Apr 23 18:03:26.038542 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:26.038409 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lgxp\" (UniqueName: \"kubernetes.io/projected/1e3a4970-c2f9-4767-9862-9e8cc797ccc3-kube-api-access-9lgxp\") pod \"error-404-isvc-0065f-predictor-585bf4fc7-ff7tf\" (UID: \"1e3a4970-c2f9-4767-9862-9e8cc797ccc3\") " pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf" Apr 23 18:03:26.038542 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:26.038449 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" 
(UniqueName: \"kubernetes.io/secret/1e3a4970-c2f9-4767-9862-9e8cc797ccc3-proxy-tls\") pod \"error-404-isvc-0065f-predictor-585bf4fc7-ff7tf\" (UID: \"1e3a4970-c2f9-4767-9862-9e8cc797ccc3\") " pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf" Apr 23 18:03:26.139798 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:26.139715 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"error-404-isvc-0065f-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/1e3a4970-c2f9-4767-9862-9e8cc797ccc3-error-404-isvc-0065f-kube-rbac-proxy-sar-config\") pod \"error-404-isvc-0065f-predictor-585bf4fc7-ff7tf\" (UID: \"1e3a4970-c2f9-4767-9862-9e8cc797ccc3\") " pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf" Apr 23 18:03:26.139798 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:26.139757 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9lgxp\" (UniqueName: \"kubernetes.io/projected/1e3a4970-c2f9-4767-9862-9e8cc797ccc3-kube-api-access-9lgxp\") pod \"error-404-isvc-0065f-predictor-585bf4fc7-ff7tf\" (UID: \"1e3a4970-c2f9-4767-9862-9e8cc797ccc3\") " pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf" Apr 23 18:03:26.140003 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:26.139892 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1e3a4970-c2f9-4767-9862-9e8cc797ccc3-proxy-tls\") pod \"error-404-isvc-0065f-predictor-585bf4fc7-ff7tf\" (UID: \"1e3a4970-c2f9-4767-9862-9e8cc797ccc3\") " pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf" Apr 23 18:03:26.140085 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:03:26.140066 2566 secret.go:189] Couldn't get secret kserve-ci-e2e-test/error-404-isvc-0065f-predictor-serving-cert: secret "error-404-isvc-0065f-predictor-serving-cert" not found Apr 23 18:03:26.140136 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:03:26.140129 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e3a4970-c2f9-4767-9862-9e8cc797ccc3-proxy-tls podName:1e3a4970-c2f9-4767-9862-9e8cc797ccc3 nodeName:}" failed. No retries permitted until 2026-04-23 18:03:26.640109455 +0000 UTC m=+665.935388030 (durationBeforeRetry 500ms). 
Apr 23 18:03:26.140508 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:26.140484 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"error-404-isvc-0065f-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/1e3a4970-c2f9-4767-9862-9e8cc797ccc3-error-404-isvc-0065f-kube-rbac-proxy-sar-config\") pod \"error-404-isvc-0065f-predictor-585bf4fc7-ff7tf\" (UID: \"1e3a4970-c2f9-4767-9862-9e8cc797ccc3\") " pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf"
Apr 23 18:03:26.150616 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:26.150584 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lgxp\" (UniqueName: \"kubernetes.io/projected/1e3a4970-c2f9-4767-9862-9e8cc797ccc3-kube-api-access-9lgxp\") pod \"error-404-isvc-0065f-predictor-585bf4fc7-ff7tf\" (UID: \"1e3a4970-c2f9-4767-9862-9e8cc797ccc3\") " pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf"
Apr 23 18:03:26.644285 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:26.644251 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1e3a4970-c2f9-4767-9862-9e8cc797ccc3-proxy-tls\") pod \"error-404-isvc-0065f-predictor-585bf4fc7-ff7tf\" (UID: \"1e3a4970-c2f9-4767-9862-9e8cc797ccc3\") " pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf"
Apr 23 18:03:26.646804 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:26.646777 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1e3a4970-c2f9-4767-9862-9e8cc797ccc3-proxy-tls\") pod \"error-404-isvc-0065f-predictor-585bf4fc7-ff7tf\" (UID: \"1e3a4970-c2f9-4767-9862-9e8cc797ccc3\") " pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf"
Apr 23 18:03:26.683855 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:26.683823 2566 generic.go:358] "Generic (PLEG): container finished" podID="0384d20a-86ac-4a4d-85e3-6e6f1f775895" containerID="40b1c47ed5a47e6c5058503ef0058b802fed31031d97c5d5246196177bf17570" exitCode=2
Apr 23 18:03:26.684003 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:26.683899 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" event={"ID":"0384d20a-86ac-4a4d-85e3-6e6f1f775895","Type":"ContainerDied","Data":"40b1c47ed5a47e6c5058503ef0058b802fed31031d97c5d5246196177bf17570"}
Apr 23 18:03:26.892111 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:26.892074 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf"
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf" Apr 23 18:03:27.018169 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:27.018142 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf"] Apr 23 18:03:27.020769 ip-10-0-136-172 kubenswrapper[2566]: W0423 18:03:27.020739 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e3a4970_c2f9_4767_9862_9e8cc797ccc3.slice/crio-bef4bc8cce08b368ca67818d6cf6b1c6ac5110003e19deb56ef981b566155209 WatchSource:0}: Error finding container bef4bc8cce08b368ca67818d6cf6b1c6ac5110003e19deb56ef981b566155209: Status 404 returned error can't find the container with id bef4bc8cce08b368ca67818d6cf6b1c6ac5110003e19deb56ef981b566155209 Apr 23 18:03:27.690036 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:27.689998 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf" event={"ID":"1e3a4970-c2f9-4767-9862-9e8cc797ccc3","Type":"ContainerStarted","Data":"b8c924331009b6dab4bdd098258213eef030a78baac7bc16dd0445a859756686"} Apr 23 18:03:27.690224 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:27.690047 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf" event={"ID":"1e3a4970-c2f9-4767-9862-9e8cc797ccc3","Type":"ContainerStarted","Data":"0ebe78228d8e06b1c19ad8f58e373c2f764fb500a4aa65aa1fa922536c5025be"} Apr 23 18:03:27.690224 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:27.690060 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf" event={"ID":"1e3a4970-c2f9-4767-9862-9e8cc797ccc3","Type":"ContainerStarted","Data":"bef4bc8cce08b368ca67818d6cf6b1c6ac5110003e19deb56ef981b566155209"} Apr 23 18:03:27.690224 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:27.690150 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf" Apr 23 18:03:27.709689 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:27.709649 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf" podStartSLOduration=2.709634986 podStartE2EDuration="2.709634986s" podCreationTimestamp="2026-04-23 18:03:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:03:27.707284286 +0000 UTC m=+667.002562895" watchObservedRunningTime="2026-04-23 18:03:27.709634986 +0000 UTC m=+667.004913582" Apr 23 18:03:28.694115 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:28.694076 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf" Apr 23 18:03:28.695654 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:28.695622 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf" podUID="1e3a4970-c2f9-4767-9862-9e8cc797ccc3" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.38:8080: connect: connection refused" Apr 23 18:03:29.317268 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:29.317225 2566 prober.go:120] "Probe failed" probeType="Readiness" 
pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" podUID="0384d20a-86ac-4a4d-85e3-6e6f1f775895" containerName="kube-rbac-proxy" probeResult="failure" output="Get \"https://10.133.0.35:8643/healthz\": dial tcp 10.133.0.35:8643: connect: connection refused" Apr 23 18:03:29.321630 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:29.321591 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" podUID="0384d20a-86ac-4a4d-85e3-6e6f1f775895" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.35:8080: connect: connection refused" Apr 23 18:03:29.698874 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:29.698775 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf" podUID="1e3a4970-c2f9-4767-9862-9e8cc797ccc3" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.38:8080: connect: connection refused" Apr 23 18:03:30.308644 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:30.308622 2566 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" Apr 23 18:03:30.379016 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:30.378938 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"isvc-sklearn-graph-1-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/0384d20a-86ac-4a4d-85e3-6e6f1f775895-isvc-sklearn-graph-1-kube-rbac-proxy-sar-config\") pod \"0384d20a-86ac-4a4d-85e3-6e6f1f775895\" (UID: \"0384d20a-86ac-4a4d-85e3-6e6f1f775895\") " Apr 23 18:03:30.379016 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:30.378984 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/0384d20a-86ac-4a4d-85e3-6e6f1f775895-kserve-provision-location\") pod \"0384d20a-86ac-4a4d-85e3-6e6f1f775895\" (UID: \"0384d20a-86ac-4a4d-85e3-6e6f1f775895\") " Apr 23 18:03:30.379016 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:30.379003 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rt5p4\" (UniqueName: \"kubernetes.io/projected/0384d20a-86ac-4a4d-85e3-6e6f1f775895-kube-api-access-rt5p4\") pod \"0384d20a-86ac-4a4d-85e3-6e6f1f775895\" (UID: \"0384d20a-86ac-4a4d-85e3-6e6f1f775895\") " Apr 23 18:03:30.379266 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:30.379075 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0384d20a-86ac-4a4d-85e3-6e6f1f775895-proxy-tls\") pod \"0384d20a-86ac-4a4d-85e3-6e6f1f775895\" (UID: \"0384d20a-86ac-4a4d-85e3-6e6f1f775895\") " Apr 23 18:03:30.379353 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:30.379330 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0384d20a-86ac-4a4d-85e3-6e6f1f775895-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "0384d20a-86ac-4a4d-85e3-6e6f1f775895" (UID: "0384d20a-86ac-4a4d-85e3-6e6f1f775895"). InnerVolumeSpecName "kserve-provision-location". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 23 18:03:30.379424 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:30.379356 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0384d20a-86ac-4a4d-85e3-6e6f1f775895-isvc-sklearn-graph-1-kube-rbac-proxy-sar-config" (OuterVolumeSpecName: "isvc-sklearn-graph-1-kube-rbac-proxy-sar-config") pod "0384d20a-86ac-4a4d-85e3-6e6f1f775895" (UID: "0384d20a-86ac-4a4d-85e3-6e6f1f775895"). InnerVolumeSpecName "isvc-sklearn-graph-1-kube-rbac-proxy-sar-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:03:30.381246 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:30.381222 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0384d20a-86ac-4a4d-85e3-6e6f1f775895-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0384d20a-86ac-4a4d-85e3-6e6f1f775895" (UID: "0384d20a-86ac-4a4d-85e3-6e6f1f775895"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 18:03:30.381358 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:30.381242 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0384d20a-86ac-4a4d-85e3-6e6f1f775895-kube-api-access-rt5p4" (OuterVolumeSpecName: "kube-api-access-rt5p4") pod "0384d20a-86ac-4a4d-85e3-6e6f1f775895" (UID: "0384d20a-86ac-4a4d-85e3-6e6f1f775895"). InnerVolumeSpecName "kube-api-access-rt5p4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 18:03:30.479886 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:30.479849 2566 reconciler_common.go:299] "Volume detached for volume \"isvc-sklearn-graph-1-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/0384d20a-86ac-4a4d-85e3-6e6f1f775895-isvc-sklearn-graph-1-kube-rbac-proxy-sar-config\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 18:03:30.479886 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:30.479880 2566 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/0384d20a-86ac-4a4d-85e3-6e6f1f775895-kserve-provision-location\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 18:03:30.479886 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:30.479891 2566 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rt5p4\" (UniqueName: \"kubernetes.io/projected/0384d20a-86ac-4a4d-85e3-6e6f1f775895-kube-api-access-rt5p4\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 18:03:30.480123 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:30.479900 2566 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0384d20a-86ac-4a4d-85e3-6e6f1f775895-proxy-tls\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 18:03:30.704757 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:30.704659 2566 generic.go:358] "Generic (PLEG): container finished" podID="0384d20a-86ac-4a4d-85e3-6e6f1f775895" containerID="e0368a05e4569bc2312ac9072736e2d7c57c065b90d9eba7b941360f131134cf" exitCode=0 Apr 23 18:03:30.705199 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:30.704777 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m" event={"ID":"0384d20a-86ac-4a4d-85e3-6e6f1f775895","Type":"ContainerDied","Data":"e0368a05e4569bc2312ac9072736e2d7c57c065b90d9eba7b941360f131134cf"} Apr 23 18:03:30.705199 ip-10-0-136-172 
Apr 23 18:03:30.705199 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:30.704818 2566 scope.go:117] "RemoveContainer" containerID="40b1c47ed5a47e6c5058503ef0058b802fed31031d97c5d5246196177bf17570"
Apr 23 18:03:30.705199 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:30.704821 2566 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m"
Apr 23 18:03:30.715192 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:30.715170 2566 scope.go:117] "RemoveContainer" containerID="e0368a05e4569bc2312ac9072736e2d7c57c065b90d9eba7b941360f131134cf"
Apr 23 18:03:30.724005 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:30.723982 2566 scope.go:117] "RemoveContainer" containerID="250aa138829a2c53b72882934eec75463a6481eee2e635353ceb20480ddc29c4"
Apr 23 18:03:30.727768 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:30.727748 2566 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m"]
Apr 23 18:03:30.730954 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:30.730934 2566 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-84c4f457f6-x2b4m"]
Apr 23 18:03:30.732000 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:30.731987 2566 scope.go:117] "RemoveContainer" containerID="40b1c47ed5a47e6c5058503ef0058b802fed31031d97c5d5246196177bf17570"
Apr 23 18:03:30.732243 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:03:30.732226 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40b1c47ed5a47e6c5058503ef0058b802fed31031d97c5d5246196177bf17570\": container with ID starting with 40b1c47ed5a47e6c5058503ef0058b802fed31031d97c5d5246196177bf17570 not found: ID does not exist" containerID="40b1c47ed5a47e6c5058503ef0058b802fed31031d97c5d5246196177bf17570"
Apr 23 18:03:30.732323 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:30.732250 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40b1c47ed5a47e6c5058503ef0058b802fed31031d97c5d5246196177bf17570"} err="failed to get container status \"40b1c47ed5a47e6c5058503ef0058b802fed31031d97c5d5246196177bf17570\": rpc error: code = NotFound desc = could not find container \"40b1c47ed5a47e6c5058503ef0058b802fed31031d97c5d5246196177bf17570\": container with ID starting with 40b1c47ed5a47e6c5058503ef0058b802fed31031d97c5d5246196177bf17570 not found: ID does not exist"
Apr 23 18:03:30.732323 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:30.732268 2566 scope.go:117] "RemoveContainer" containerID="e0368a05e4569bc2312ac9072736e2d7c57c065b90d9eba7b941360f131134cf"
Apr 23 18:03:30.732579 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:03:30.732560 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0368a05e4569bc2312ac9072736e2d7c57c065b90d9eba7b941360f131134cf\": container with ID starting with e0368a05e4569bc2312ac9072736e2d7c57c065b90d9eba7b941360f131134cf not found: ID does not exist" containerID="e0368a05e4569bc2312ac9072736e2d7c57c065b90d9eba7b941360f131134cf"
Apr 23 18:03:30.732624 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:30.732586 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0368a05e4569bc2312ac9072736e2d7c57c065b90d9eba7b941360f131134cf"} err="failed to get container status \"e0368a05e4569bc2312ac9072736e2d7c57c065b90d9eba7b941360f131134cf\": rpc error: code = NotFound desc = could not find container \"e0368a05e4569bc2312ac9072736e2d7c57c065b90d9eba7b941360f131134cf\": container with ID starting with e0368a05e4569bc2312ac9072736e2d7c57c065b90d9eba7b941360f131134cf not found: ID does not exist"
Apr 23 18:03:30.732624 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:30.732603 2566 scope.go:117] "RemoveContainer" containerID="250aa138829a2c53b72882934eec75463a6481eee2e635353ceb20480ddc29c4"
Apr 23 18:03:30.732809 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:03:30.732794 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"250aa138829a2c53b72882934eec75463a6481eee2e635353ceb20480ddc29c4\": container with ID starting with 250aa138829a2c53b72882934eec75463a6481eee2e635353ceb20480ddc29c4 not found: ID does not exist" containerID="250aa138829a2c53b72882934eec75463a6481eee2e635353ceb20480ddc29c4"
Apr 23 18:03:30.732854 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:30.732816 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"250aa138829a2c53b72882934eec75463a6481eee2e635353ceb20480ddc29c4"} err="failed to get container status \"250aa138829a2c53b72882934eec75463a6481eee2e635353ceb20480ddc29c4\": rpc error: code = NotFound desc = could not find container \"250aa138829a2c53b72882934eec75463a6481eee2e635353ceb20480ddc29c4\": container with ID starting with 250aa138829a2c53b72882934eec75463a6481eee2e635353ceb20480ddc29c4 not found: ID does not exist"
Apr 23 18:03:31.256472 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:31.256430 2566 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0384d20a-86ac-4a4d-85e3-6e6f1f775895" path="/var/lib/kubelet/pods/0384d20a-86ac-4a4d-85e3-6e6f1f775895/volumes"
Apr 23 18:03:33.549865 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:33.549838 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk"
Apr 23 18:03:34.703604 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:34.703574 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf"
Apr 23 18:03:34.704087 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:34.704054 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf" podUID="1e3a4970-c2f9-4767-9862-9e8cc797ccc3" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.38:8080: connect: connection refused"
Apr 23 18:03:44.705005 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:44.704960 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf" podUID="1e3a4970-c2f9-4767-9862-9e8cc797ccc3" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.38:8080: connect: connection refused"
Apr 23 18:03:54.704666 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:03:54.704620 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf" podUID="1e3a4970-c2f9-4767-9862-9e8cc797ccc3" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.38:8080: connect: connection refused"
pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf" podUID="1e3a4970-c2f9-4767-9862-9e8cc797ccc3" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.38:8080: connect: connection refused" Apr 23 18:04:04.704606 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:04:04.704519 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf" podUID="1e3a4970-c2f9-4767-9862-9e8cc797ccc3" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.38:8080: connect: connection refused" Apr 23 18:04:14.705446 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:04:14.705411 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf" Apr 23 18:07:21.182545 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:07:21.182476 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-5kpdc_9334253b-6eff-4ad7-9cc7-5d96bdb994ad/console-operator/1.log" Apr 23 18:07:21.184584 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:07:21.184558 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-5kpdc_9334253b-6eff-4ad7-9cc7-5d96bdb994ad/console-operator/1.log" Apr 23 18:07:21.189464 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:07:21.189441 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_7fc0473024b4c48d914a6628102ac7a2/kube-rbac-proxy-crio/4.log" Apr 23 18:07:21.191151 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:07:21.191131 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_7fc0473024b4c48d914a6628102ac7a2/kube-rbac-proxy-crio/4.log" Apr 23 18:12:00.725533 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:00.725499 2566 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk"] Apr 23 18:12:00.726041 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:00.725787 2566 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk" podUID="6005bc75-ebb7-4cca-bc84-3cc0f0866a27" containerName="kserve-container" containerID="cri-o://0c441d8b946888d06901ebe3463bd62dabdbe790145c4f233832c585c67e4df2" gracePeriod=30 Apr 23 18:12:00.726041 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:00.725832 2566 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk" podUID="6005bc75-ebb7-4cca-bc84-3cc0f0866a27" containerName="kube-rbac-proxy" containerID="cri-o://a6b2512ea4970a987c321c86de9ab5e0539e1ae1d00eef0891de94d6d568a4af" gracePeriod=30 Apr 23 18:12:00.813992 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:00.813958 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh"] Apr 23 18:12:00.814410 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:00.814394 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0384d20a-86ac-4a4d-85e3-6e6f1f775895" containerName="kube-rbac-proxy" Apr 23 18:12:00.814499 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:00.814414 2566 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="0384d20a-86ac-4a4d-85e3-6e6f1f775895" containerName="kube-rbac-proxy" Apr 23 18:12:00.814499 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:00.814439 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0384d20a-86ac-4a4d-85e3-6e6f1f775895" containerName="storage-initializer" Apr 23 18:12:00.814499 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:00.814447 2566 state_mem.go:107] "Deleted CPUSet assignment" podUID="0384d20a-86ac-4a4d-85e3-6e6f1f775895" containerName="storage-initializer" Apr 23 18:12:00.814499 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:00.814459 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0384d20a-86ac-4a4d-85e3-6e6f1f775895" containerName="kserve-container" Apr 23 18:12:00.814499 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:00.814468 2566 state_mem.go:107] "Deleted CPUSet assignment" podUID="0384d20a-86ac-4a4d-85e3-6e6f1f775895" containerName="kserve-container" Apr 23 18:12:00.814677 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:00.814581 2566 memory_manager.go:356] "RemoveStaleState removing state" podUID="0384d20a-86ac-4a4d-85e3-6e6f1f775895" containerName="kserve-container" Apr 23 18:12:00.814677 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:00.814597 2566 memory_manager.go:356] "RemoveStaleState removing state" podUID="0384d20a-86ac-4a4d-85e3-6e6f1f775895" containerName="kube-rbac-proxy" Apr 23 18:12:00.817891 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:00.817874 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh" Apr 23 18:12:00.819577 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:00.819554 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"error-404-isvc-226e9-predictor-serving-cert\"" Apr 23 18:12:00.819691 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:00.819647 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"error-404-isvc-226e9-kube-rbac-proxy-sar-config\"" Apr 23 18:12:00.826660 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:00.826044 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"error-404-isvc-226e9-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/e3581511-69fd-45a6-872b-b4273dc7d9be-error-404-isvc-226e9-kube-rbac-proxy-sar-config\") pod \"error-404-isvc-226e9-predictor-76db5f4c75-z7mjh\" (UID: \"e3581511-69fd-45a6-872b-b4273dc7d9be\") " pod="kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh" Apr 23 18:12:00.826660 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:00.826122 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzn8f\" (UniqueName: \"kubernetes.io/projected/e3581511-69fd-45a6-872b-b4273dc7d9be-kube-api-access-hzn8f\") pod \"error-404-isvc-226e9-predictor-76db5f4c75-z7mjh\" (UID: \"e3581511-69fd-45a6-872b-b4273dc7d9be\") " pod="kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh" Apr 23 18:12:00.826660 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:00.826263 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e3581511-69fd-45a6-872b-b4273dc7d9be-proxy-tls\") pod \"error-404-isvc-226e9-predictor-76db5f4c75-z7mjh\" (UID: 
\"e3581511-69fd-45a6-872b-b4273dc7d9be\") " pod="kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh" Apr 23 18:12:00.827951 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:00.827932 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh"] Apr 23 18:12:00.927628 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:00.927599 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e3581511-69fd-45a6-872b-b4273dc7d9be-proxy-tls\") pod \"error-404-isvc-226e9-predictor-76db5f4c75-z7mjh\" (UID: \"e3581511-69fd-45a6-872b-b4273dc7d9be\") " pod="kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh" Apr 23 18:12:00.927783 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:00.927667 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"error-404-isvc-226e9-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/e3581511-69fd-45a6-872b-b4273dc7d9be-error-404-isvc-226e9-kube-rbac-proxy-sar-config\") pod \"error-404-isvc-226e9-predictor-76db5f4c75-z7mjh\" (UID: \"e3581511-69fd-45a6-872b-b4273dc7d9be\") " pod="kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh" Apr 23 18:12:00.927783 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:00.927713 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hzn8f\" (UniqueName: \"kubernetes.io/projected/e3581511-69fd-45a6-872b-b4273dc7d9be-kube-api-access-hzn8f\") pod \"error-404-isvc-226e9-predictor-76db5f4c75-z7mjh\" (UID: \"e3581511-69fd-45a6-872b-b4273dc7d9be\") " pod="kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh" Apr 23 18:12:00.927911 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:12:00.927774 2566 secret.go:189] Couldn't get secret kserve-ci-e2e-test/error-404-isvc-226e9-predictor-serving-cert: secret "error-404-isvc-226e9-predictor-serving-cert" not found Apr 23 18:12:00.927911 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:12:00.927852 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3581511-69fd-45a6-872b-b4273dc7d9be-proxy-tls podName:e3581511-69fd-45a6-872b-b4273dc7d9be nodeName:}" failed. No retries permitted until 2026-04-23 18:12:01.427830745 +0000 UTC m=+1180.723109333 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/e3581511-69fd-45a6-872b-b4273dc7d9be-proxy-tls") pod "error-404-isvc-226e9-predictor-76db5f4c75-z7mjh" (UID: "e3581511-69fd-45a6-872b-b4273dc7d9be") : secret "error-404-isvc-226e9-predictor-serving-cert" not found Apr 23 18:12:00.928500 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:00.928448 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"error-404-isvc-226e9-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/e3581511-69fd-45a6-872b-b4273dc7d9be-error-404-isvc-226e9-kube-rbac-proxy-sar-config\") pod \"error-404-isvc-226e9-predictor-76db5f4c75-z7mjh\" (UID: \"e3581511-69fd-45a6-872b-b4273dc7d9be\") " pod="kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh" Apr 23 18:12:00.936332 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:00.936279 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzn8f\" (UniqueName: \"kubernetes.io/projected/e3581511-69fd-45a6-872b-b4273dc7d9be-kube-api-access-hzn8f\") pod \"error-404-isvc-226e9-predictor-76db5f4c75-z7mjh\" (UID: \"e3581511-69fd-45a6-872b-b4273dc7d9be\") " pod="kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh" Apr 23 18:12:01.431986 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:01.431947 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e3581511-69fd-45a6-872b-b4273dc7d9be-proxy-tls\") pod \"error-404-isvc-226e9-predictor-76db5f4c75-z7mjh\" (UID: \"e3581511-69fd-45a6-872b-b4273dc7d9be\") " pod="kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh" Apr 23 18:12:01.434641 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:01.434617 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e3581511-69fd-45a6-872b-b4273dc7d9be-proxy-tls\") pod \"error-404-isvc-226e9-predictor-76db5f4c75-z7mjh\" (UID: \"e3581511-69fd-45a6-872b-b4273dc7d9be\") " pod="kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh" Apr 23 18:12:01.515025 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:01.514990 2566 generic.go:358] "Generic (PLEG): container finished" podID="6005bc75-ebb7-4cca-bc84-3cc0f0866a27" containerID="a6b2512ea4970a987c321c86de9ab5e0539e1ae1d00eef0891de94d6d568a4af" exitCode=2 Apr 23 18:12:01.515169 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:01.515063 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk" event={"ID":"6005bc75-ebb7-4cca-bc84-3cc0f0866a27","Type":"ContainerDied","Data":"a6b2512ea4970a987c321c86de9ab5e0539e1ae1d00eef0891de94d6d568a4af"} Apr 23 18:12:01.729247 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:01.729139 2566 util.go:30] "No sandbox for pod can be found. 
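
The MountVolume.SetUp failure above is the kubelet backing off (durationBeforeRetry 500ms) because the predictor's serving-cert secret had not been created yet; the retry at 18:12:01.434641 succeeds once the secret appears. A hedged client-go sketch (assumes a reachable kubeconfig at the default path; the namespace and secret name are copied from the log) to check the secret directly:

package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// A NotFound error here matches the kubelet's mount failure; once the
	// cert controller creates the secret, the kubelet's 500ms retry succeeds.
	_, err = cs.CoreV1().Secrets("kserve-ci-e2e-test").Get(context.TODO(),
		"error-404-isvc-226e9-predictor-serving-cert", metav1.GetOptions{})
	fmt.Println("secret lookup:", err)
}
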
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh" Apr 23 18:12:01.863915 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:01.863870 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh"] Apr 23 18:12:01.867379 ip-10-0-136-172 kubenswrapper[2566]: W0423 18:12:01.867347 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3581511_69fd_45a6_872b_b4273dc7d9be.slice/crio-0bc8c1985321843d0807cb6508813f6d0f7353a17dc1d8938dd453cc8cb4a6a2 WatchSource:0}: Error finding container 0bc8c1985321843d0807cb6508813f6d0f7353a17dc1d8938dd453cc8cb4a6a2: Status 404 returned error can't find the container with id 0bc8c1985321843d0807cb6508813f6d0f7353a17dc1d8938dd453cc8cb4a6a2 Apr 23 18:12:01.869204 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:01.869185 2566 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 23 18:12:02.520099 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:02.520068 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh" event={"ID":"e3581511-69fd-45a6-872b-b4273dc7d9be","Type":"ContainerStarted","Data":"628e6ac11629a98a9446278cdc8b7740c00b2177715715662e27cfcb21b8e7bb"} Apr 23 18:12:02.520271 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:02.520109 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh" event={"ID":"e3581511-69fd-45a6-872b-b4273dc7d9be","Type":"ContainerStarted","Data":"9c24b41e9a560023ddd273f4c3254cbf6566ef32395d78d5a53712255577081b"} Apr 23 18:12:02.520271 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:02.520125 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh" event={"ID":"e3581511-69fd-45a6-872b-b4273dc7d9be","Type":"ContainerStarted","Data":"0bc8c1985321843d0807cb6508813f6d0f7353a17dc1d8938dd453cc8cb4a6a2"} Apr 23 18:12:02.520271 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:02.520213 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh" Apr 23 18:12:02.551130 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:02.551082 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh" podStartSLOduration=2.551064863 podStartE2EDuration="2.551064863s" podCreationTimestamp="2026-04-23 18:12:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:12:02.550727827 +0000 UTC m=+1181.846006420" watchObservedRunningTime="2026-04-23 18:12:02.551064863 +0000 UTC m=+1181.846343461" Apr 23 18:12:03.523627 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:03.523594 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh" Apr 23 18:12:03.524931 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:03.524902 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh" podUID="e3581511-69fd-45a6-872b-b4273dc7d9be" containerName="kserve-container" probeResult="failure" output="dial tcp 
10.133.0.39:8080: connect: connection refused" Apr 23 18:12:03.543168 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:03.543133 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk" podUID="6005bc75-ebb7-4cca-bc84-3cc0f0866a27" containerName="kube-rbac-proxy" probeResult="failure" output="Get \"https://10.133.0.37:8643/healthz\": dial tcp 10.133.0.37:8643: connect: connection refused" Apr 23 18:12:03.548591 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:03.548560 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk" podUID="6005bc75-ebb7-4cca-bc84-3cc0f0866a27" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.37:8080: connect: connection refused" Apr 23 18:12:03.978355 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:03.978333 2566 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk" Apr 23 18:12:04.053193 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:04.053106 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6005bc75-ebb7-4cca-bc84-3cc0f0866a27-proxy-tls\") pod \"6005bc75-ebb7-4cca-bc84-3cc0f0866a27\" (UID: \"6005bc75-ebb7-4cca-bc84-3cc0f0866a27\") " Apr 23 18:12:04.053193 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:04.053144 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"error-404-isvc-3d086-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/6005bc75-ebb7-4cca-bc84-3cc0f0866a27-error-404-isvc-3d086-kube-rbac-proxy-sar-config\") pod \"6005bc75-ebb7-4cca-bc84-3cc0f0866a27\" (UID: \"6005bc75-ebb7-4cca-bc84-3cc0f0866a27\") " Apr 23 18:12:04.053193 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:04.053168 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrfvq\" (UniqueName: \"kubernetes.io/projected/6005bc75-ebb7-4cca-bc84-3cc0f0866a27-kube-api-access-qrfvq\") pod \"6005bc75-ebb7-4cca-bc84-3cc0f0866a27\" (UID: \"6005bc75-ebb7-4cca-bc84-3cc0f0866a27\") " Apr 23 18:12:04.053565 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:04.053540 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6005bc75-ebb7-4cca-bc84-3cc0f0866a27-error-404-isvc-3d086-kube-rbac-proxy-sar-config" (OuterVolumeSpecName: "error-404-isvc-3d086-kube-rbac-proxy-sar-config") pod "6005bc75-ebb7-4cca-bc84-3cc0f0866a27" (UID: "6005bc75-ebb7-4cca-bc84-3cc0f0866a27"). InnerVolumeSpecName "error-404-isvc-3d086-kube-rbac-proxy-sar-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:12:04.055403 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:04.055379 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6005bc75-ebb7-4cca-bc84-3cc0f0866a27-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "6005bc75-ebb7-4cca-bc84-3cc0f0866a27" (UID: "6005bc75-ebb7-4cca-bc84-3cc0f0866a27"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 18:12:04.055490 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:04.055394 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6005bc75-ebb7-4cca-bc84-3cc0f0866a27-kube-api-access-qrfvq" (OuterVolumeSpecName: "kube-api-access-qrfvq") pod "6005bc75-ebb7-4cca-bc84-3cc0f0866a27" (UID: "6005bc75-ebb7-4cca-bc84-3cc0f0866a27"). InnerVolumeSpecName "kube-api-access-qrfvq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 18:12:04.154152 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:04.154106 2566 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6005bc75-ebb7-4cca-bc84-3cc0f0866a27-proxy-tls\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 18:12:04.154152 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:04.154145 2566 reconciler_common.go:299] "Volume detached for volume \"error-404-isvc-3d086-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/6005bc75-ebb7-4cca-bc84-3cc0f0866a27-error-404-isvc-3d086-kube-rbac-proxy-sar-config\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 18:12:04.154405 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:04.154163 2566 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qrfvq\" (UniqueName: \"kubernetes.io/projected/6005bc75-ebb7-4cca-bc84-3cc0f0866a27-kube-api-access-qrfvq\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 18:12:04.527769 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:04.527684 2566 generic.go:358] "Generic (PLEG): container finished" podID="6005bc75-ebb7-4cca-bc84-3cc0f0866a27" containerID="0c441d8b946888d06901ebe3463bd62dabdbe790145c4f233832c585c67e4df2" exitCode=0 Apr 23 18:12:04.528197 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:04.527760 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk" event={"ID":"6005bc75-ebb7-4cca-bc84-3cc0f0866a27","Type":"ContainerDied","Data":"0c441d8b946888d06901ebe3463bd62dabdbe790145c4f233832c585c67e4df2"} Apr 23 18:12:04.528197 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:04.527791 2566 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk" Apr 23 18:12:04.528197 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:04.527808 2566 scope.go:117] "RemoveContainer" containerID="a6b2512ea4970a987c321c86de9ab5e0539e1ae1d00eef0891de94d6d568a4af" Apr 23 18:12:04.528197 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:04.527798 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk" event={"ID":"6005bc75-ebb7-4cca-bc84-3cc0f0866a27","Type":"ContainerDied","Data":"d721a3a8f31403abc6e4f6c06ee58daa3b935574e8e73a58d3bde06412e80c25"} Apr 23 18:12:04.528433 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:04.528376 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh" podUID="e3581511-69fd-45a6-872b-b4273dc7d9be" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.39:8080: connect: connection refused" Apr 23 18:12:04.537521 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:04.537501 2566 scope.go:117] "RemoveContainer" containerID="0c441d8b946888d06901ebe3463bd62dabdbe790145c4f233832c585c67e4df2" Apr 23 18:12:04.545019 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:04.545004 2566 scope.go:117] "RemoveContainer" containerID="a6b2512ea4970a987c321c86de9ab5e0539e1ae1d00eef0891de94d6d568a4af" Apr 23 18:12:04.545253 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:12:04.545226 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6b2512ea4970a987c321c86de9ab5e0539e1ae1d00eef0891de94d6d568a4af\": container with ID starting with a6b2512ea4970a987c321c86de9ab5e0539e1ae1d00eef0891de94d6d568a4af not found: ID does not exist" containerID="a6b2512ea4970a987c321c86de9ab5e0539e1ae1d00eef0891de94d6d568a4af" Apr 23 18:12:04.545335 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:04.545262 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6b2512ea4970a987c321c86de9ab5e0539e1ae1d00eef0891de94d6d568a4af"} err="failed to get container status \"a6b2512ea4970a987c321c86de9ab5e0539e1ae1d00eef0891de94d6d568a4af\": rpc error: code = NotFound desc = could not find container \"a6b2512ea4970a987c321c86de9ab5e0539e1ae1d00eef0891de94d6d568a4af\": container with ID starting with a6b2512ea4970a987c321c86de9ab5e0539e1ae1d00eef0891de94d6d568a4af not found: ID does not exist" Apr 23 18:12:04.545335 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:04.545279 2566 scope.go:117] "RemoveContainer" containerID="0c441d8b946888d06901ebe3463bd62dabdbe790145c4f233832c585c67e4df2" Apr 23 18:12:04.545560 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:12:04.545542 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c441d8b946888d06901ebe3463bd62dabdbe790145c4f233832c585c67e4df2\": container with ID starting with 0c441d8b946888d06901ebe3463bd62dabdbe790145c4f233832c585c67e4df2 not found: ID does not exist" containerID="0c441d8b946888d06901ebe3463bd62dabdbe790145c4f233832c585c67e4df2" Apr 23 18:12:04.545620 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:04.545569 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c441d8b946888d06901ebe3463bd62dabdbe790145c4f233832c585c67e4df2"} err="failed to get container status 
\"0c441d8b946888d06901ebe3463bd62dabdbe790145c4f233832c585c67e4df2\": rpc error: code = NotFound desc = could not find container \"0c441d8b946888d06901ebe3463bd62dabdbe790145c4f233832c585c67e4df2\": container with ID starting with 0c441d8b946888d06901ebe3463bd62dabdbe790145c4f233832c585c67e4df2 not found: ID does not exist" Apr 23 18:12:04.549175 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:04.549153 2566 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk"] Apr 23 18:12:04.552704 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:04.552682 2566 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-3d086-predictor-69454595bf-z64wk"] Apr 23 18:12:05.254762 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:05.254719 2566 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6005bc75-ebb7-4cca-bc84-3cc0f0866a27" path="/var/lib/kubelet/pods/6005bc75-ebb7-4cca-bc84-3cc0f0866a27/volumes" Apr 23 18:12:09.532992 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:09.532961 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh" Apr 23 18:12:09.533512 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:09.533486 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh" podUID="e3581511-69fd-45a6-872b-b4273dc7d9be" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.39:8080: connect: connection refused" Apr 23 18:12:19.534176 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:19.534134 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh" podUID="e3581511-69fd-45a6-872b-b4273dc7d9be" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.39:8080: connect: connection refused" Apr 23 18:12:21.211439 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:21.211407 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-5kpdc_9334253b-6eff-4ad7-9cc7-5d96bdb994ad/console-operator/1.log" Apr 23 18:12:21.213823 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:21.213793 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-5kpdc_9334253b-6eff-4ad7-9cc7-5d96bdb994ad/console-operator/1.log" Apr 23 18:12:21.217729 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:21.217707 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_7fc0473024b4c48d914a6628102ac7a2/kube-rbac-proxy-crio/4.log" Apr 23 18:12:21.220206 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:21.220188 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_7fc0473024b4c48d914a6628102ac7a2/kube-rbac-proxy-crio/4.log" Apr 23 18:12:29.534326 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:29.534254 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh" podUID="e3581511-69fd-45a6-872b-b4273dc7d9be" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.39:8080: connect: connection refused" Apr 23 18:12:39.533944 ip-10-0-136-172 kubenswrapper[2566]: 
I0423 18:12:39.533901 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh" podUID="e3581511-69fd-45a6-872b-b4273dc7d9be" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.39:8080: connect: connection refused" Apr 23 18:12:40.776529 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:40.776494 2566 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf"] Apr 23 18:12:40.776984 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:40.776898 2566 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf" podUID="1e3a4970-c2f9-4767-9862-9e8cc797ccc3" containerName="kserve-container" containerID="cri-o://0ebe78228d8e06b1c19ad8f58e373c2f764fb500a4aa65aa1fa922536c5025be" gracePeriod=30 Apr 23 18:12:40.777053 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:40.776995 2566 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf" podUID="1e3a4970-c2f9-4767-9862-9e8cc797ccc3" containerName="kube-rbac-proxy" containerID="cri-o://b8c924331009b6dab4bdd098258213eef030a78baac7bc16dd0445a859756686" gracePeriod=30 Apr 23 18:12:40.830506 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:40.830472 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn"] Apr 23 18:12:40.830899 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:40.830881 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6005bc75-ebb7-4cca-bc84-3cc0f0866a27" containerName="kserve-container" Apr 23 18:12:40.830899 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:40.830899 2566 state_mem.go:107] "Deleted CPUSet assignment" podUID="6005bc75-ebb7-4cca-bc84-3cc0f0866a27" containerName="kserve-container" Apr 23 18:12:40.831080 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:40.830935 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6005bc75-ebb7-4cca-bc84-3cc0f0866a27" containerName="kube-rbac-proxy" Apr 23 18:12:40.831080 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:40.830943 2566 state_mem.go:107] "Deleted CPUSet assignment" podUID="6005bc75-ebb7-4cca-bc84-3cc0f0866a27" containerName="kube-rbac-proxy" Apr 23 18:12:40.831080 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:40.831014 2566 memory_manager.go:356] "RemoveStaleState removing state" podUID="6005bc75-ebb7-4cca-bc84-3cc0f0866a27" containerName="kserve-container" Apr 23 18:12:40.831080 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:40.831024 2566 memory_manager.go:356] "RemoveStaleState removing state" podUID="6005bc75-ebb7-4cca-bc84-3cc0f0866a27" containerName="kube-rbac-proxy" Apr 23 18:12:40.834430 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:40.834413 2566 util.go:30] "No sandbox for pod can be found. 
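
The "SyncLoop ADD/UPDATE/DELETE" entries with source="api" are the kubelet reacting to pod churn from the API server as the e2e test rotates predictor revisions. A hedged sketch (same default-kubeconfig assumption as the previous sketch) that observes the same churn by watching the test namespace with client-go:

package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	w, err := cs.CoreV1().Pods("kserve-ci-e2e-test").Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		// ADDED/MODIFIED/DELETED here mirror the kubelet's SyncLoop ADD/UPDATE/DELETE.
		fmt.Println(ev.Type)
	}
}
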
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" Apr 23 18:12:40.836006 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:40.835986 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"error-404-isvc-d864a-predictor-serving-cert\"" Apr 23 18:12:40.836135 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:40.836023 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"error-404-isvc-d864a-kube-rbac-proxy-sar-config\"" Apr 23 18:12:40.843901 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:40.843861 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn"] Apr 23 18:12:40.963889 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:40.963847 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"error-404-isvc-d864a-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/514bdb74-73a0-4a62-a268-22a2bb73d08c-error-404-isvc-d864a-kube-rbac-proxy-sar-config\") pod \"error-404-isvc-d864a-predictor-f8b8f6449-xvnkn\" (UID: \"514bdb74-73a0-4a62-a268-22a2bb73d08c\") " pod="kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" Apr 23 18:12:40.963889 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:40.963895 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/514bdb74-73a0-4a62-a268-22a2bb73d08c-proxy-tls\") pod \"error-404-isvc-d864a-predictor-f8b8f6449-xvnkn\" (UID: \"514bdb74-73a0-4a62-a268-22a2bb73d08c\") " pod="kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" Apr 23 18:12:40.964169 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:40.963987 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zxx6\" (UniqueName: \"kubernetes.io/projected/514bdb74-73a0-4a62-a268-22a2bb73d08c-kube-api-access-2zxx6\") pod \"error-404-isvc-d864a-predictor-f8b8f6449-xvnkn\" (UID: \"514bdb74-73a0-4a62-a268-22a2bb73d08c\") " pod="kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" Apr 23 18:12:41.064439 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:41.064398 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2zxx6\" (UniqueName: \"kubernetes.io/projected/514bdb74-73a0-4a62-a268-22a2bb73d08c-kube-api-access-2zxx6\") pod \"error-404-isvc-d864a-predictor-f8b8f6449-xvnkn\" (UID: \"514bdb74-73a0-4a62-a268-22a2bb73d08c\") " pod="kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" Apr 23 18:12:41.064629 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:41.064552 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"error-404-isvc-d864a-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/514bdb74-73a0-4a62-a268-22a2bb73d08c-error-404-isvc-d864a-kube-rbac-proxy-sar-config\") pod \"error-404-isvc-d864a-predictor-f8b8f6449-xvnkn\" (UID: \"514bdb74-73a0-4a62-a268-22a2bb73d08c\") " pod="kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" Apr 23 18:12:41.064629 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:41.064587 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/514bdb74-73a0-4a62-a268-22a2bb73d08c-proxy-tls\") pod 
\"error-404-isvc-d864a-predictor-f8b8f6449-xvnkn\" (UID: \"514bdb74-73a0-4a62-a268-22a2bb73d08c\") " pod="kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" Apr 23 18:12:41.064778 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:12:41.064689 2566 secret.go:189] Couldn't get secret kserve-ci-e2e-test/error-404-isvc-d864a-predictor-serving-cert: secret "error-404-isvc-d864a-predictor-serving-cert" not found Apr 23 18:12:41.064778 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:12:41.064752 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/514bdb74-73a0-4a62-a268-22a2bb73d08c-proxy-tls podName:514bdb74-73a0-4a62-a268-22a2bb73d08c nodeName:}" failed. No retries permitted until 2026-04-23 18:12:41.564730853 +0000 UTC m=+1220.860009428 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/514bdb74-73a0-4a62-a268-22a2bb73d08c-proxy-tls") pod "error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" (UID: "514bdb74-73a0-4a62-a268-22a2bb73d08c") : secret "error-404-isvc-d864a-predictor-serving-cert" not found Apr 23 18:12:41.065225 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:41.065200 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"error-404-isvc-d864a-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/514bdb74-73a0-4a62-a268-22a2bb73d08c-error-404-isvc-d864a-kube-rbac-proxy-sar-config\") pod \"error-404-isvc-d864a-predictor-f8b8f6449-xvnkn\" (UID: \"514bdb74-73a0-4a62-a268-22a2bb73d08c\") " pod="kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" Apr 23 18:12:41.075788 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:41.075760 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zxx6\" (UniqueName: \"kubernetes.io/projected/514bdb74-73a0-4a62-a268-22a2bb73d08c-kube-api-access-2zxx6\") pod \"error-404-isvc-d864a-predictor-f8b8f6449-xvnkn\" (UID: \"514bdb74-73a0-4a62-a268-22a2bb73d08c\") " pod="kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" Apr 23 18:12:41.569635 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:41.569600 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/514bdb74-73a0-4a62-a268-22a2bb73d08c-proxy-tls\") pod \"error-404-isvc-d864a-predictor-f8b8f6449-xvnkn\" (UID: \"514bdb74-73a0-4a62-a268-22a2bb73d08c\") " pod="kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" Apr 23 18:12:41.572138 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:41.572116 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/514bdb74-73a0-4a62-a268-22a2bb73d08c-proxy-tls\") pod \"error-404-isvc-d864a-predictor-f8b8f6449-xvnkn\" (UID: \"514bdb74-73a0-4a62-a268-22a2bb73d08c\") " pod="kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" Apr 23 18:12:41.661775 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:41.661736 2566 generic.go:358] "Generic (PLEG): container finished" podID="1e3a4970-c2f9-4767-9862-9e8cc797ccc3" containerID="b8c924331009b6dab4bdd098258213eef030a78baac7bc16dd0445a859756686" exitCode=2 Apr 23 18:12:41.661944 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:41.661802 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf" 
event={"ID":"1e3a4970-c2f9-4767-9862-9e8cc797ccc3","Type":"ContainerDied","Data":"b8c924331009b6dab4bdd098258213eef030a78baac7bc16dd0445a859756686"} Apr 23 18:12:41.746038 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:41.746003 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" Apr 23 18:12:41.881253 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:41.881228 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn"] Apr 23 18:12:41.883805 ip-10-0-136-172 kubenswrapper[2566]: W0423 18:12:41.883780 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod514bdb74_73a0_4a62_a268_22a2bb73d08c.slice/crio-5cb9f2c67d0f058dbf5edcbca14ec76b115fe4a516a398958de4af76b13eaa26 WatchSource:0}: Error finding container 5cb9f2c67d0f058dbf5edcbca14ec76b115fe4a516a398958de4af76b13eaa26: Status 404 returned error can't find the container with id 5cb9f2c67d0f058dbf5edcbca14ec76b115fe4a516a398958de4af76b13eaa26 Apr 23 18:12:42.667098 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:42.667056 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" event={"ID":"514bdb74-73a0-4a62-a268-22a2bb73d08c","Type":"ContainerStarted","Data":"06fbcc56871d88e5a8cbbef6bbfa7323c11e434e08f0849ed2f137074edfaa83"} Apr 23 18:12:42.667098 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:42.667100 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" event={"ID":"514bdb74-73a0-4a62-a268-22a2bb73d08c","Type":"ContainerStarted","Data":"bbfe11bc945b00bd54089cba351936da6c36930b4ce38be33b7402acb538b239"} Apr 23 18:12:42.667404 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:42.667114 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" event={"ID":"514bdb74-73a0-4a62-a268-22a2bb73d08c","Type":"ContainerStarted","Data":"5cb9f2c67d0f058dbf5edcbca14ec76b115fe4a516a398958de4af76b13eaa26"} Apr 23 18:12:42.667404 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:42.667209 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" Apr 23 18:12:42.685723 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:42.685667 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" podStartSLOduration=2.68565026 podStartE2EDuration="2.68565026s" podCreationTimestamp="2026-04-23 18:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:12:42.684338329 +0000 UTC m=+1221.979616922" watchObservedRunningTime="2026-04-23 18:12:42.68565026 +0000 UTC m=+1221.980928853" Apr 23 18:12:43.670880 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:43.670847 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" Apr 23 18:12:43.672231 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:43.672204 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" 
podUID="514bdb74-73a0-4a62-a268-22a2bb73d08c" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.40:8080: connect: connection refused" Apr 23 18:12:44.236592 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:44.236560 2566 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf" Apr 23 18:12:44.395775 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:44.395741 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lgxp\" (UniqueName: \"kubernetes.io/projected/1e3a4970-c2f9-4767-9862-9e8cc797ccc3-kube-api-access-9lgxp\") pod \"1e3a4970-c2f9-4767-9862-9e8cc797ccc3\" (UID: \"1e3a4970-c2f9-4767-9862-9e8cc797ccc3\") " Apr 23 18:12:44.395934 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:44.395847 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"error-404-isvc-0065f-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/1e3a4970-c2f9-4767-9862-9e8cc797ccc3-error-404-isvc-0065f-kube-rbac-proxy-sar-config\") pod \"1e3a4970-c2f9-4767-9862-9e8cc797ccc3\" (UID: \"1e3a4970-c2f9-4767-9862-9e8cc797ccc3\") " Apr 23 18:12:44.396000 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:44.395958 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1e3a4970-c2f9-4767-9862-9e8cc797ccc3-proxy-tls\") pod \"1e3a4970-c2f9-4767-9862-9e8cc797ccc3\" (UID: \"1e3a4970-c2f9-4767-9862-9e8cc797ccc3\") " Apr 23 18:12:44.396145 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:44.396124 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e3a4970-c2f9-4767-9862-9e8cc797ccc3-error-404-isvc-0065f-kube-rbac-proxy-sar-config" (OuterVolumeSpecName: "error-404-isvc-0065f-kube-rbac-proxy-sar-config") pod "1e3a4970-c2f9-4767-9862-9e8cc797ccc3" (UID: "1e3a4970-c2f9-4767-9862-9e8cc797ccc3"). InnerVolumeSpecName "error-404-isvc-0065f-kube-rbac-proxy-sar-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:12:44.397997 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:44.397963 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e3a4970-c2f9-4767-9862-9e8cc797ccc3-kube-api-access-9lgxp" (OuterVolumeSpecName: "kube-api-access-9lgxp") pod "1e3a4970-c2f9-4767-9862-9e8cc797ccc3" (UID: "1e3a4970-c2f9-4767-9862-9e8cc797ccc3"). InnerVolumeSpecName "kube-api-access-9lgxp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 18:12:44.398104 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:44.398084 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e3a4970-c2f9-4767-9862-9e8cc797ccc3-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "1e3a4970-c2f9-4767-9862-9e8cc797ccc3" (UID: "1e3a4970-c2f9-4767-9862-9e8cc797ccc3"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 18:12:44.496906 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:44.496866 2566 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1e3a4970-c2f9-4767-9862-9e8cc797ccc3-proxy-tls\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 18:12:44.496906 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:44.496900 2566 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9lgxp\" (UniqueName: \"kubernetes.io/projected/1e3a4970-c2f9-4767-9862-9e8cc797ccc3-kube-api-access-9lgxp\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 18:12:44.496906 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:44.496911 2566 reconciler_common.go:299] "Volume detached for volume \"error-404-isvc-0065f-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/1e3a4970-c2f9-4767-9862-9e8cc797ccc3-error-404-isvc-0065f-kube-rbac-proxy-sar-config\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 18:12:44.676211 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:44.676117 2566 generic.go:358] "Generic (PLEG): container finished" podID="1e3a4970-c2f9-4767-9862-9e8cc797ccc3" containerID="0ebe78228d8e06b1c19ad8f58e373c2f764fb500a4aa65aa1fa922536c5025be" exitCode=0 Apr 23 18:12:44.676211 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:44.676196 2566 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf" Apr 23 18:12:44.676689 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:44.676194 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf" event={"ID":"1e3a4970-c2f9-4767-9862-9e8cc797ccc3","Type":"ContainerDied","Data":"0ebe78228d8e06b1c19ad8f58e373c2f764fb500a4aa65aa1fa922536c5025be"} Apr 23 18:12:44.676689 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:44.676330 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf" event={"ID":"1e3a4970-c2f9-4767-9862-9e8cc797ccc3","Type":"ContainerDied","Data":"bef4bc8cce08b368ca67818d6cf6b1c6ac5110003e19deb56ef981b566155209"} Apr 23 18:12:44.676689 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:44.676355 2566 scope.go:117] "RemoveContainer" containerID="b8c924331009b6dab4bdd098258213eef030a78baac7bc16dd0445a859756686" Apr 23 18:12:44.676898 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:44.676873 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" podUID="514bdb74-73a0-4a62-a268-22a2bb73d08c" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.40:8080: connect: connection refused" Apr 23 18:12:44.685462 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:44.685438 2566 scope.go:117] "RemoveContainer" containerID="0ebe78228d8e06b1c19ad8f58e373c2f764fb500a4aa65aa1fa922536c5025be" Apr 23 18:12:44.693548 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:44.693528 2566 scope.go:117] "RemoveContainer" containerID="b8c924331009b6dab4bdd098258213eef030a78baac7bc16dd0445a859756686" Apr 23 18:12:44.693839 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:12:44.693819 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8c924331009b6dab4bdd098258213eef030a78baac7bc16dd0445a859756686\": container with ID 
starting with b8c924331009b6dab4bdd098258213eef030a78baac7bc16dd0445a859756686 not found: ID does not exist" containerID="b8c924331009b6dab4bdd098258213eef030a78baac7bc16dd0445a859756686" Apr 23 18:12:44.693922 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:44.693852 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8c924331009b6dab4bdd098258213eef030a78baac7bc16dd0445a859756686"} err="failed to get container status \"b8c924331009b6dab4bdd098258213eef030a78baac7bc16dd0445a859756686\": rpc error: code = NotFound desc = could not find container \"b8c924331009b6dab4bdd098258213eef030a78baac7bc16dd0445a859756686\": container with ID starting with b8c924331009b6dab4bdd098258213eef030a78baac7bc16dd0445a859756686 not found: ID does not exist" Apr 23 18:12:44.693922 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:44.693878 2566 scope.go:117] "RemoveContainer" containerID="0ebe78228d8e06b1c19ad8f58e373c2f764fb500a4aa65aa1fa922536c5025be" Apr 23 18:12:44.694144 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:12:44.694122 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ebe78228d8e06b1c19ad8f58e373c2f764fb500a4aa65aa1fa922536c5025be\": container with ID starting with 0ebe78228d8e06b1c19ad8f58e373c2f764fb500a4aa65aa1fa922536c5025be not found: ID does not exist" containerID="0ebe78228d8e06b1c19ad8f58e373c2f764fb500a4aa65aa1fa922536c5025be" Apr 23 18:12:44.694276 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:44.694146 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ebe78228d8e06b1c19ad8f58e373c2f764fb500a4aa65aa1fa922536c5025be"} err="failed to get container status \"0ebe78228d8e06b1c19ad8f58e373c2f764fb500a4aa65aa1fa922536c5025be\": rpc error: code = NotFound desc = could not find container \"0ebe78228d8e06b1c19ad8f58e373c2f764fb500a4aa65aa1fa922536c5025be\": container with ID starting with 0ebe78228d8e06b1c19ad8f58e373c2f764fb500a4aa65aa1fa922536c5025be not found: ID does not exist" Apr 23 18:12:44.699028 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:44.699003 2566 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf"] Apr 23 18:12:44.708699 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:44.708678 2566 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-0065f-predictor-585bf4fc7-ff7tf"] Apr 23 18:12:45.254384 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:45.254350 2566 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e3a4970-c2f9-4767-9862-9e8cc797ccc3" path="/var/lib/kubelet/pods/1e3a4970-c2f9-4767-9862-9e8cc797ccc3/volumes" Apr 23 18:12:49.534380 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:49.534351 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh" Apr 23 18:12:49.681961 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:49.681934 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" Apr 23 18:12:49.682346 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:49.682317 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" podUID="514bdb74-73a0-4a62-a268-22a2bb73d08c" containerName="kserve-container" probeResult="failure" 
output="dial tcp 10.133.0.40:8080: connect: connection refused" Apr 23 18:12:59.682947 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:12:59.682906 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" podUID="514bdb74-73a0-4a62-a268-22a2bb73d08c" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.40:8080: connect: connection refused" Apr 23 18:13:09.683324 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:09.683226 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" podUID="514bdb74-73a0-4a62-a268-22a2bb73d08c" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.40:8080: connect: connection refused" Apr 23 18:13:11.038287 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:11.038250 2566 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh"] Apr 23 18:13:11.038743 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:11.038577 2566 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh" podUID="e3581511-69fd-45a6-872b-b4273dc7d9be" containerName="kserve-container" containerID="cri-o://9c24b41e9a560023ddd273f4c3254cbf6566ef32395d78d5a53712255577081b" gracePeriod=30 Apr 23 18:13:11.038743 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:11.038611 2566 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh" podUID="e3581511-69fd-45a6-872b-b4273dc7d9be" containerName="kube-rbac-proxy" containerID="cri-o://628e6ac11629a98a9446278cdc8b7740c00b2177715715662e27cfcb21b8e7bb" gracePeriod=30 Apr 23 18:13:11.124558 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:11.121080 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw"] Apr 23 18:13:11.124558 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:11.122039 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1e3a4970-c2f9-4767-9862-9e8cc797ccc3" containerName="kube-rbac-proxy" Apr 23 18:13:11.124558 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:11.122059 2566 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e3a4970-c2f9-4767-9862-9e8cc797ccc3" containerName="kube-rbac-proxy" Apr 23 18:13:11.124558 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:11.122086 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1e3a4970-c2f9-4767-9862-9e8cc797ccc3" containerName="kserve-container" Apr 23 18:13:11.124558 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:11.122094 2566 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e3a4970-c2f9-4767-9862-9e8cc797ccc3" containerName="kserve-container" Apr 23 18:13:11.124558 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:11.122291 2566 memory_manager.go:356] "RemoveStaleState removing state" podUID="1e3a4970-c2f9-4767-9862-9e8cc797ccc3" containerName="kserve-container" Apr 23 18:13:11.124558 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:11.122326 2566 memory_manager.go:356] "RemoveStaleState removing state" podUID="1e3a4970-c2f9-4767-9862-9e8cc797ccc3" containerName="kube-rbac-proxy" Apr 23 18:13:11.126889 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:11.126861 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw" Apr 23 18:13:11.129066 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:11.129030 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"error-404-isvc-5c6fb-predictor-serving-cert\"" Apr 23 18:13:11.129205 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:11.129034 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"error-404-isvc-5c6fb-kube-rbac-proxy-sar-config\"" Apr 23 18:13:11.133875 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:11.133819 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw"] Apr 23 18:13:11.214480 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:11.214438 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"error-404-isvc-5c6fb-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6-error-404-isvc-5c6fb-kube-rbac-proxy-sar-config\") pod \"error-404-isvc-5c6fb-predictor-7b8548d59-n54tw\" (UID: \"5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6\") " pod="kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw" Apr 23 18:13:11.214679 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:11.214501 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6-proxy-tls\") pod \"error-404-isvc-5c6fb-predictor-7b8548d59-n54tw\" (UID: \"5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6\") " pod="kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw" Apr 23 18:13:11.214679 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:11.214554 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpqvg\" (UniqueName: \"kubernetes.io/projected/5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6-kube-api-access-gpqvg\") pod \"error-404-isvc-5c6fb-predictor-7b8548d59-n54tw\" (UID: \"5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6\") " pod="kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw" Apr 23 18:13:11.315078 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:11.314988 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6-proxy-tls\") pod \"error-404-isvc-5c6fb-predictor-7b8548d59-n54tw\" (UID: \"5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6\") " pod="kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw" Apr 23 18:13:11.315078 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:11.315038 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gpqvg\" (UniqueName: \"kubernetes.io/projected/5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6-kube-api-access-gpqvg\") pod \"error-404-isvc-5c6fb-predictor-7b8548d59-n54tw\" (UID: \"5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6\") " pod="kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw" Apr 23 18:13:11.315341 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:11.315133 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"error-404-isvc-5c6fb-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6-error-404-isvc-5c6fb-kube-rbac-proxy-sar-config\") pod 
\"error-404-isvc-5c6fb-predictor-7b8548d59-n54tw\" (UID: \"5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6\") " pod="kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw" Apr 23 18:13:11.315805 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:11.315778 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"error-404-isvc-5c6fb-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6-error-404-isvc-5c6fb-kube-rbac-proxy-sar-config\") pod \"error-404-isvc-5c6fb-predictor-7b8548d59-n54tw\" (UID: \"5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6\") " pod="kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw" Apr 23 18:13:11.317783 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:11.317745 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6-proxy-tls\") pod \"error-404-isvc-5c6fb-predictor-7b8548d59-n54tw\" (UID: \"5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6\") " pod="kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw" Apr 23 18:13:11.323599 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:11.323573 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpqvg\" (UniqueName: \"kubernetes.io/projected/5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6-kube-api-access-gpqvg\") pod \"error-404-isvc-5c6fb-predictor-7b8548d59-n54tw\" (UID: \"5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6\") " pod="kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw" Apr 23 18:13:11.439567 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:11.439527 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw" Apr 23 18:13:11.572385 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:11.572359 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw"] Apr 23 18:13:11.575006 ip-10-0-136-172 kubenswrapper[2566]: W0423 18:13:11.574977 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d12a6f2_06cb_4a1e_92bd_d77fd200a7d6.slice/crio-ae34dd4a1e2438e4ff95e5b103be46d4540bd34b8f2c549b248235dc9d143e36 WatchSource:0}: Error finding container ae34dd4a1e2438e4ff95e5b103be46d4540bd34b8f2c549b248235dc9d143e36: Status 404 returned error can't find the container with id ae34dd4a1e2438e4ff95e5b103be46d4540bd34b8f2c549b248235dc9d143e36 Apr 23 18:13:11.788738 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:11.788638 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw" event={"ID":"5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6","Type":"ContainerStarted","Data":"3b5f9986ead7f2e924d4dd53f2a4114aeb324d7ca55bf60a0346edf037ba23ca"} Apr 23 18:13:11.788738 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:11.788684 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw" event={"ID":"5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6","Type":"ContainerStarted","Data":"483eaa952ad05f98bf7cec116e07a7867c2b0806b7268de9d85619074c404ed8"} Apr 23 18:13:11.788738 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:11.788703 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw" 
event={"ID":"5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6","Type":"ContainerStarted","Data":"ae34dd4a1e2438e4ff95e5b103be46d4540bd34b8f2c549b248235dc9d143e36"} Apr 23 18:13:11.789028 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:11.788942 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw" Apr 23 18:13:11.790630 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:11.790599 2566 generic.go:358] "Generic (PLEG): container finished" podID="e3581511-69fd-45a6-872b-b4273dc7d9be" containerID="628e6ac11629a98a9446278cdc8b7740c00b2177715715662e27cfcb21b8e7bb" exitCode=2 Apr 23 18:13:11.790772 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:11.790641 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh" event={"ID":"e3581511-69fd-45a6-872b-b4273dc7d9be","Type":"ContainerDied","Data":"628e6ac11629a98a9446278cdc8b7740c00b2177715715662e27cfcb21b8e7bb"} Apr 23 18:13:11.809669 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:11.809606 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw" podStartSLOduration=0.809590176 podStartE2EDuration="809.590176ms" podCreationTimestamp="2026-04-23 18:13:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:13:11.807034048 +0000 UTC m=+1251.102312645" watchObservedRunningTime="2026-04-23 18:13:11.809590176 +0000 UTC m=+1251.104868773" Apr 23 18:13:12.795495 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:12.795460 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw" Apr 23 18:13:12.796611 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:12.796578 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw" podUID="5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.41:8080: connect: connection refused" Apr 23 18:13:13.798748 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:13.798714 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw" podUID="5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.41:8080: connect: connection refused" Apr 23 18:13:14.289488 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:14.289464 2566 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh" Apr 23 18:13:14.337835 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:14.337758 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzn8f\" (UniqueName: \"kubernetes.io/projected/e3581511-69fd-45a6-872b-b4273dc7d9be-kube-api-access-hzn8f\") pod \"e3581511-69fd-45a6-872b-b4273dc7d9be\" (UID: \"e3581511-69fd-45a6-872b-b4273dc7d9be\") " Apr 23 18:13:14.337835 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:14.337801 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e3581511-69fd-45a6-872b-b4273dc7d9be-proxy-tls\") pod \"e3581511-69fd-45a6-872b-b4273dc7d9be\" (UID: \"e3581511-69fd-45a6-872b-b4273dc7d9be\") " Apr 23 18:13:14.338057 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:14.337842 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"error-404-isvc-226e9-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/e3581511-69fd-45a6-872b-b4273dc7d9be-error-404-isvc-226e9-kube-rbac-proxy-sar-config\") pod \"e3581511-69fd-45a6-872b-b4273dc7d9be\" (UID: \"e3581511-69fd-45a6-872b-b4273dc7d9be\") " Apr 23 18:13:14.338279 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:14.338253 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3581511-69fd-45a6-872b-b4273dc7d9be-error-404-isvc-226e9-kube-rbac-proxy-sar-config" (OuterVolumeSpecName: "error-404-isvc-226e9-kube-rbac-proxy-sar-config") pod "e3581511-69fd-45a6-872b-b4273dc7d9be" (UID: "e3581511-69fd-45a6-872b-b4273dc7d9be"). InnerVolumeSpecName "error-404-isvc-226e9-kube-rbac-proxy-sar-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:13:14.340107 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:14.340087 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3581511-69fd-45a6-872b-b4273dc7d9be-kube-api-access-hzn8f" (OuterVolumeSpecName: "kube-api-access-hzn8f") pod "e3581511-69fd-45a6-872b-b4273dc7d9be" (UID: "e3581511-69fd-45a6-872b-b4273dc7d9be"). InnerVolumeSpecName "kube-api-access-hzn8f". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 18:13:14.340215 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:14.340109 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3581511-69fd-45a6-872b-b4273dc7d9be-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e3581511-69fd-45a6-872b-b4273dc7d9be" (UID: "e3581511-69fd-45a6-872b-b4273dc7d9be"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 18:13:14.438393 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:14.438361 2566 reconciler_common.go:299] "Volume detached for volume \"error-404-isvc-226e9-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/e3581511-69fd-45a6-872b-b4273dc7d9be-error-404-isvc-226e9-kube-rbac-proxy-sar-config\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 18:13:14.438393 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:14.438391 2566 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hzn8f\" (UniqueName: \"kubernetes.io/projected/e3581511-69fd-45a6-872b-b4273dc7d9be-kube-api-access-hzn8f\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 18:13:14.438393 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:14.438403 2566 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e3581511-69fd-45a6-872b-b4273dc7d9be-proxy-tls\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 18:13:14.803235 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:14.803200 2566 generic.go:358] "Generic (PLEG): container finished" podID="e3581511-69fd-45a6-872b-b4273dc7d9be" containerID="9c24b41e9a560023ddd273f4c3254cbf6566ef32395d78d5a53712255577081b" exitCode=0 Apr 23 18:13:14.803641 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:14.803282 2566 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh" Apr 23 18:13:14.803641 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:14.803275 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh" event={"ID":"e3581511-69fd-45a6-872b-b4273dc7d9be","Type":"ContainerDied","Data":"9c24b41e9a560023ddd273f4c3254cbf6566ef32395d78d5a53712255577081b"} Apr 23 18:13:14.803641 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:14.803340 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh" event={"ID":"e3581511-69fd-45a6-872b-b4273dc7d9be","Type":"ContainerDied","Data":"0bc8c1985321843d0807cb6508813f6d0f7353a17dc1d8938dd453cc8cb4a6a2"} Apr 23 18:13:14.803641 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:14.803355 2566 scope.go:117] "RemoveContainer" containerID="628e6ac11629a98a9446278cdc8b7740c00b2177715715662e27cfcb21b8e7bb" Apr 23 18:13:14.814798 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:14.814786 2566 scope.go:117] "RemoveContainer" containerID="9c24b41e9a560023ddd273f4c3254cbf6566ef32395d78d5a53712255577081b" Apr 23 18:13:14.822549 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:14.822521 2566 scope.go:117] "RemoveContainer" containerID="628e6ac11629a98a9446278cdc8b7740c00b2177715715662e27cfcb21b8e7bb" Apr 23 18:13:14.822784 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:13:14.822766 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"628e6ac11629a98a9446278cdc8b7740c00b2177715715662e27cfcb21b8e7bb\": container with ID starting with 628e6ac11629a98a9446278cdc8b7740c00b2177715715662e27cfcb21b8e7bb not found: ID does not exist" containerID="628e6ac11629a98a9446278cdc8b7740c00b2177715715662e27cfcb21b8e7bb" Apr 23 18:13:14.822835 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:14.822796 2566 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"628e6ac11629a98a9446278cdc8b7740c00b2177715715662e27cfcb21b8e7bb"} err="failed to get container status \"628e6ac11629a98a9446278cdc8b7740c00b2177715715662e27cfcb21b8e7bb\": rpc error: code = NotFound desc = could not find container \"628e6ac11629a98a9446278cdc8b7740c00b2177715715662e27cfcb21b8e7bb\": container with ID starting with 628e6ac11629a98a9446278cdc8b7740c00b2177715715662e27cfcb21b8e7bb not found: ID does not exist" Apr 23 18:13:14.822835 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:14.822814 2566 scope.go:117] "RemoveContainer" containerID="9c24b41e9a560023ddd273f4c3254cbf6566ef32395d78d5a53712255577081b" Apr 23 18:13:14.823016 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:13:14.823001 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c24b41e9a560023ddd273f4c3254cbf6566ef32395d78d5a53712255577081b\": container with ID starting with 9c24b41e9a560023ddd273f4c3254cbf6566ef32395d78d5a53712255577081b not found: ID does not exist" containerID="9c24b41e9a560023ddd273f4c3254cbf6566ef32395d78d5a53712255577081b" Apr 23 18:13:14.823059 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:14.823022 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c24b41e9a560023ddd273f4c3254cbf6566ef32395d78d5a53712255577081b"} err="failed to get container status \"9c24b41e9a560023ddd273f4c3254cbf6566ef32395d78d5a53712255577081b\": rpc error: code = NotFound desc = could not find container \"9c24b41e9a560023ddd273f4c3254cbf6566ef32395d78d5a53712255577081b\": container with ID starting with 9c24b41e9a560023ddd273f4c3254cbf6566ef32395d78d5a53712255577081b not found: ID does not exist" Apr 23 18:13:14.828829 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:14.828794 2566 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh"] Apr 23 18:13:14.834221 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:14.834195 2566 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-226e9-predictor-76db5f4c75-z7mjh"] Apr 23 18:13:15.254774 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:15.254691 2566 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3581511-69fd-45a6-872b-b4273dc7d9be" path="/var/lib/kubelet/pods/e3581511-69fd-45a6-872b-b4273dc7d9be/volumes" Apr 23 18:13:18.802790 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:18.802763 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw" Apr 23 18:13:18.803252 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:18.803227 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw" podUID="5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.41:8080: connect: connection refused" Apr 23 18:13:19.682789 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:19.682747 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" podUID="514bdb74-73a0-4a62-a268-22a2bb73d08c" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.40:8080: connect: connection refused" Apr 23 18:13:28.803375 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:28.803337 2566 prober.go:120] "Probe failed" 
probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw" podUID="5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.41:8080: connect: connection refused" Apr 23 18:13:29.683094 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:29.683060 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" Apr 23 18:13:38.803748 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:38.803702 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw" podUID="5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.41:8080: connect: connection refused" Apr 23 18:13:48.803711 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:48.803664 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw" podUID="5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.41:8080: connect: connection refused" Apr 23 18:13:58.804483 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:13:58.804452 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw" Apr 23 18:14:01.018286 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:01.018246 2566 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn"] Apr 23 18:14:01.018712 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:01.018539 2566 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" podUID="514bdb74-73a0-4a62-a268-22a2bb73d08c" containerName="kserve-container" containerID="cri-o://bbfe11bc945b00bd54089cba351936da6c36930b4ce38be33b7402acb538b239" gracePeriod=30 Apr 23 18:14:01.018712 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:01.018589 2566 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" podUID="514bdb74-73a0-4a62-a268-22a2bb73d08c" containerName="kube-rbac-proxy" containerID="cri-o://06fbcc56871d88e5a8cbbef6bbfa7323c11e434e08f0849ed2f137074edfaa83" gracePeriod=30 Apr 23 18:14:01.074591 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:01.074556 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj"] Apr 23 18:14:01.075082 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:01.075061 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e3581511-69fd-45a6-872b-b4273dc7d9be" containerName="kserve-container" Apr 23 18:14:01.075082 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:01.075081 2566 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3581511-69fd-45a6-872b-b4273dc7d9be" containerName="kserve-container" Apr 23 18:14:01.075216 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:01.075098 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e3581511-69fd-45a6-872b-b4273dc7d9be" containerName="kube-rbac-proxy" Apr 23 18:14:01.075216 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:01.075104 2566 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e3581511-69fd-45a6-872b-b4273dc7d9be" containerName="kube-rbac-proxy" Apr 23 18:14:01.075216 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:01.075185 2566 memory_manager.go:356] "RemoveStaleState removing state" podUID="e3581511-69fd-45a6-872b-b4273dc7d9be" containerName="kserve-container" Apr 23 18:14:01.075216 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:01.075194 2566 memory_manager.go:356] "RemoveStaleState removing state" podUID="e3581511-69fd-45a6-872b-b4273dc7d9be" containerName="kube-rbac-proxy" Apr 23 18:14:01.078610 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:01.078595 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj" Apr 23 18:14:01.080514 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:01.080492 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"error-404-isvc-e1fac-kube-rbac-proxy-sar-config\"" Apr 23 18:14:01.080625 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:01.080537 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"error-404-isvc-e1fac-predictor-serving-cert\"" Apr 23 18:14:01.089680 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:01.089657 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj"] Apr 23 18:14:01.257396 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:01.257363 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4368b22b-f54e-44eb-8dcf-c3117cceb717-proxy-tls\") pod \"error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj\" (UID: \"4368b22b-f54e-44eb-8dcf-c3117cceb717\") " pod="kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj" Apr 23 18:14:01.257597 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:01.257412 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"error-404-isvc-e1fac-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/4368b22b-f54e-44eb-8dcf-c3117cceb717-error-404-isvc-e1fac-kube-rbac-proxy-sar-config\") pod \"error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj\" (UID: \"4368b22b-f54e-44eb-8dcf-c3117cceb717\") " pod="kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj" Apr 23 18:14:01.257597 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:01.257537 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtdlq\" (UniqueName: \"kubernetes.io/projected/4368b22b-f54e-44eb-8dcf-c3117cceb717-kube-api-access-jtdlq\") pod \"error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj\" (UID: \"4368b22b-f54e-44eb-8dcf-c3117cceb717\") " pod="kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj" Apr 23 18:14:01.358456 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:01.358424 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4368b22b-f54e-44eb-8dcf-c3117cceb717-proxy-tls\") pod \"error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj\" (UID: \"4368b22b-f54e-44eb-8dcf-c3117cceb717\") " pod="kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj" Apr 23 18:14:01.358637 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:01.358482 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"error-404-isvc-e1fac-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/4368b22b-f54e-44eb-8dcf-c3117cceb717-error-404-isvc-e1fac-kube-rbac-proxy-sar-config\") pod \"error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj\" (UID: \"4368b22b-f54e-44eb-8dcf-c3117cceb717\") " pod="kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj" Apr 23 18:14:01.358637 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:01.358544 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jtdlq\" (UniqueName: \"kubernetes.io/projected/4368b22b-f54e-44eb-8dcf-c3117cceb717-kube-api-access-jtdlq\") pod \"error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj\" (UID: \"4368b22b-f54e-44eb-8dcf-c3117cceb717\") " pod="kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj" Apr 23 18:14:01.358637 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:14:01.358570 2566 secret.go:189] Couldn't get secret kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-serving-cert: secret "error-404-isvc-e1fac-predictor-serving-cert" not found Apr 23 18:14:01.358825 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:14:01.358640 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4368b22b-f54e-44eb-8dcf-c3117cceb717-proxy-tls podName:4368b22b-f54e-44eb-8dcf-c3117cceb717 nodeName:}" failed. No retries permitted until 2026-04-23 18:14:01.858619061 +0000 UTC m=+1301.153897637 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/4368b22b-f54e-44eb-8dcf-c3117cceb717-proxy-tls") pod "error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj" (UID: "4368b22b-f54e-44eb-8dcf-c3117cceb717") : secret "error-404-isvc-e1fac-predictor-serving-cert" not found Apr 23 18:14:01.359145 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:01.359122 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"error-404-isvc-e1fac-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/4368b22b-f54e-44eb-8dcf-c3117cceb717-error-404-isvc-e1fac-kube-rbac-proxy-sar-config\") pod \"error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj\" (UID: \"4368b22b-f54e-44eb-8dcf-c3117cceb717\") " pod="kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj" Apr 23 18:14:01.367961 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:01.367935 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtdlq\" (UniqueName: \"kubernetes.io/projected/4368b22b-f54e-44eb-8dcf-c3117cceb717-kube-api-access-jtdlq\") pod \"error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj\" (UID: \"4368b22b-f54e-44eb-8dcf-c3117cceb717\") " pod="kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj" Apr 23 18:14:01.862497 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:01.862461 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4368b22b-f54e-44eb-8dcf-c3117cceb717-proxy-tls\") pod \"error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj\" (UID: \"4368b22b-f54e-44eb-8dcf-c3117cceb717\") " pod="kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj" Apr 23 18:14:01.865029 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:01.865001 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4368b22b-f54e-44eb-8dcf-c3117cceb717-proxy-tls\") pod \"error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj\" (UID: \"4368b22b-f54e-44eb-8dcf-c3117cceb717\") " 
pod="kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj" Apr 23 18:14:01.973175 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:01.973140 2566 generic.go:358] "Generic (PLEG): container finished" podID="514bdb74-73a0-4a62-a268-22a2bb73d08c" containerID="06fbcc56871d88e5a8cbbef6bbfa7323c11e434e08f0849ed2f137074edfaa83" exitCode=2 Apr 23 18:14:01.973357 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:01.973210 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" event={"ID":"514bdb74-73a0-4a62-a268-22a2bb73d08c","Type":"ContainerDied","Data":"06fbcc56871d88e5a8cbbef6bbfa7323c11e434e08f0849ed2f137074edfaa83"} Apr 23 18:14:01.990476 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:01.990450 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj" Apr 23 18:14:02.121671 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:02.121645 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj"] Apr 23 18:14:02.123972 ip-10-0-136-172 kubenswrapper[2566]: W0423 18:14:02.123945 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4368b22b_f54e_44eb_8dcf_c3117cceb717.slice/crio-279ba070e8cb93318ba8154acdf0dbfc5c0c5d2276f6934612bcbcdf4c85f1d4 WatchSource:0}: Error finding container 279ba070e8cb93318ba8154acdf0dbfc5c0c5d2276f6934612bcbcdf4c85f1d4: Status 404 returned error can't find the container with id 279ba070e8cb93318ba8154acdf0dbfc5c0c5d2276f6934612bcbcdf4c85f1d4 Apr 23 18:14:02.979364 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:02.979330 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj" event={"ID":"4368b22b-f54e-44eb-8dcf-c3117cceb717","Type":"ContainerStarted","Data":"4d987659b313f72bae223a3063a6aedbf35e8f02cd6fd88850e0df9c3a4b2ac4"} Apr 23 18:14:02.979364 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:02.979367 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj" event={"ID":"4368b22b-f54e-44eb-8dcf-c3117cceb717","Type":"ContainerStarted","Data":"2456b680fc3e2cec0376b35acf49dbccbd09e155ee4c260506a44711198152ab"} Apr 23 18:14:02.979584 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:02.979378 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj" event={"ID":"4368b22b-f54e-44eb-8dcf-c3117cceb717","Type":"ContainerStarted","Data":"279ba070e8cb93318ba8154acdf0dbfc5c0c5d2276f6934612bcbcdf4c85f1d4"} Apr 23 18:14:02.979584 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:02.979476 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj" Apr 23 18:14:02.998847 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:02.998781 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj" podStartSLOduration=1.998767985 podStartE2EDuration="1.998767985s" podCreationTimestamp="2026-04-23 18:14:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:14:02.997369001 +0000 UTC 
m=+1302.292647595" watchObservedRunningTime="2026-04-23 18:14:02.998767985 +0000 UTC m=+1302.294046582" Apr 23 18:14:03.983805 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:03.983765 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj" Apr 23 18:14:03.985255 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:03.985224 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj" podUID="4368b22b-f54e-44eb-8dcf-c3117cceb717" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.42:8080: connect: connection refused" Apr 23 18:14:04.677898 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:04.677846 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" podUID="514bdb74-73a0-4a62-a268-22a2bb73d08c" containerName="kube-rbac-proxy" probeResult="failure" output="Get \"https://10.133.0.40:8643/healthz\": dial tcp 10.133.0.40:8643: connect: connection refused" Apr 23 18:14:04.987093 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:04.987002 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj" podUID="4368b22b-f54e-44eb-8dcf-c3117cceb717" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.42:8080: connect: connection refused" Apr 23 18:14:07.963546 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:07.963521 2566 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" Apr 23 18:14:08.000238 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:08.000207 2566 generic.go:358] "Generic (PLEG): container finished" podID="514bdb74-73a0-4a62-a268-22a2bb73d08c" containerID="bbfe11bc945b00bd54089cba351936da6c36930b4ce38be33b7402acb538b239" exitCode=0 Apr 23 18:14:08.000408 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:08.000266 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" event={"ID":"514bdb74-73a0-4a62-a268-22a2bb73d08c","Type":"ContainerDied","Data":"bbfe11bc945b00bd54089cba351936da6c36930b4ce38be33b7402acb538b239"} Apr 23 18:14:08.000408 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:08.000277 2566 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" Apr 23 18:14:08.000408 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:08.000296 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn" event={"ID":"514bdb74-73a0-4a62-a268-22a2bb73d08c","Type":"ContainerDied","Data":"5cb9f2c67d0f058dbf5edcbca14ec76b115fe4a516a398958de4af76b13eaa26"} Apr 23 18:14:08.000408 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:08.000351 2566 scope.go:117] "RemoveContainer" containerID="06fbcc56871d88e5a8cbbef6bbfa7323c11e434e08f0849ed2f137074edfaa83" Apr 23 18:14:08.009242 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:08.009223 2566 scope.go:117] "RemoveContainer" containerID="bbfe11bc945b00bd54089cba351936da6c36930b4ce38be33b7402acb538b239" Apr 23 18:14:08.016918 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:08.016901 2566 scope.go:117] "RemoveContainer" containerID="06fbcc56871d88e5a8cbbef6bbfa7323c11e434e08f0849ed2f137074edfaa83" Apr 23 18:14:08.017154 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:14:08.017132 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06fbcc56871d88e5a8cbbef6bbfa7323c11e434e08f0849ed2f137074edfaa83\": container with ID starting with 06fbcc56871d88e5a8cbbef6bbfa7323c11e434e08f0849ed2f137074edfaa83 not found: ID does not exist" containerID="06fbcc56871d88e5a8cbbef6bbfa7323c11e434e08f0849ed2f137074edfaa83" Apr 23 18:14:08.017200 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:08.017164 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06fbcc56871d88e5a8cbbef6bbfa7323c11e434e08f0849ed2f137074edfaa83"} err="failed to get container status \"06fbcc56871d88e5a8cbbef6bbfa7323c11e434e08f0849ed2f137074edfaa83\": rpc error: code = NotFound desc = could not find container \"06fbcc56871d88e5a8cbbef6bbfa7323c11e434e08f0849ed2f137074edfaa83\": container with ID starting with 06fbcc56871d88e5a8cbbef6bbfa7323c11e434e08f0849ed2f137074edfaa83 not found: ID does not exist" Apr 23 18:14:08.017200 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:08.017182 2566 scope.go:117] "RemoveContainer" containerID="bbfe11bc945b00bd54089cba351936da6c36930b4ce38be33b7402acb538b239" Apr 23 18:14:08.017436 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:14:08.017416 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbfe11bc945b00bd54089cba351936da6c36930b4ce38be33b7402acb538b239\": container with ID starting with bbfe11bc945b00bd54089cba351936da6c36930b4ce38be33b7402acb538b239 not found: ID does not exist" containerID="bbfe11bc945b00bd54089cba351936da6c36930b4ce38be33b7402acb538b239" Apr 23 18:14:08.017492 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:08.017447 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbfe11bc945b00bd54089cba351936da6c36930b4ce38be33b7402acb538b239"} err="failed to get container status \"bbfe11bc945b00bd54089cba351936da6c36930b4ce38be33b7402acb538b239\": rpc error: code = NotFound desc = could not find container \"bbfe11bc945b00bd54089cba351936da6c36930b4ce38be33b7402acb538b239\": container with ID starting with bbfe11bc945b00bd54089cba351936da6c36930b4ce38be33b7402acb538b239 not found: ID does not exist" Apr 23 18:14:08.018652 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:08.018637 2566 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-2zxx6\" (UniqueName: \"kubernetes.io/projected/514bdb74-73a0-4a62-a268-22a2bb73d08c-kube-api-access-2zxx6\") pod \"514bdb74-73a0-4a62-a268-22a2bb73d08c\" (UID: \"514bdb74-73a0-4a62-a268-22a2bb73d08c\") " Apr 23 18:14:08.018707 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:08.018699 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/514bdb74-73a0-4a62-a268-22a2bb73d08c-proxy-tls\") pod \"514bdb74-73a0-4a62-a268-22a2bb73d08c\" (UID: \"514bdb74-73a0-4a62-a268-22a2bb73d08c\") " Apr 23 18:14:08.018755 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:08.018743 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"error-404-isvc-d864a-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/514bdb74-73a0-4a62-a268-22a2bb73d08c-error-404-isvc-d864a-kube-rbac-proxy-sar-config\") pod \"514bdb74-73a0-4a62-a268-22a2bb73d08c\" (UID: \"514bdb74-73a0-4a62-a268-22a2bb73d08c\") " Apr 23 18:14:08.019118 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:08.019096 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/514bdb74-73a0-4a62-a268-22a2bb73d08c-error-404-isvc-d864a-kube-rbac-proxy-sar-config" (OuterVolumeSpecName: "error-404-isvc-d864a-kube-rbac-proxy-sar-config") pod "514bdb74-73a0-4a62-a268-22a2bb73d08c" (UID: "514bdb74-73a0-4a62-a268-22a2bb73d08c"). InnerVolumeSpecName "error-404-isvc-d864a-kube-rbac-proxy-sar-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:14:08.020762 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:08.020738 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/514bdb74-73a0-4a62-a268-22a2bb73d08c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "514bdb74-73a0-4a62-a268-22a2bb73d08c" (UID: "514bdb74-73a0-4a62-a268-22a2bb73d08c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 18:14:08.020838 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:08.020815 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/514bdb74-73a0-4a62-a268-22a2bb73d08c-kube-api-access-2zxx6" (OuterVolumeSpecName: "kube-api-access-2zxx6") pod "514bdb74-73a0-4a62-a268-22a2bb73d08c" (UID: "514bdb74-73a0-4a62-a268-22a2bb73d08c"). InnerVolumeSpecName "kube-api-access-2zxx6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 18:14:08.119892 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:08.119808 2566 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2zxx6\" (UniqueName: \"kubernetes.io/projected/514bdb74-73a0-4a62-a268-22a2bb73d08c-kube-api-access-2zxx6\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 18:14:08.119892 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:08.119836 2566 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/514bdb74-73a0-4a62-a268-22a2bb73d08c-proxy-tls\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 18:14:08.119892 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:08.119847 2566 reconciler_common.go:299] "Volume detached for volume \"error-404-isvc-d864a-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/514bdb74-73a0-4a62-a268-22a2bb73d08c-error-404-isvc-d864a-kube-rbac-proxy-sar-config\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 18:14:08.322895 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:08.322862 2566 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn"] Apr 23 18:14:08.328951 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:08.328926 2566 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-d864a-predictor-f8b8f6449-xvnkn"] Apr 23 18:14:09.255266 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:09.255228 2566 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="514bdb74-73a0-4a62-a268-22a2bb73d08c" path="/var/lib/kubelet/pods/514bdb74-73a0-4a62-a268-22a2bb73d08c/volumes" Apr 23 18:14:09.991369 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:09.991340 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj" Apr 23 18:14:09.992007 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:09.991967 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj" podUID="4368b22b-f54e-44eb-8dcf-c3117cceb717" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.42:8080: connect: connection refused" Apr 23 18:14:19.992169 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:19.992131 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj" podUID="4368b22b-f54e-44eb-8dcf-c3117cceb717" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.42:8080: connect: connection refused" Apr 23 18:14:29.992478 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:29.992430 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj" podUID="4368b22b-f54e-44eb-8dcf-c3117cceb717" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.42:8080: connect: connection refused" Apr 23 18:14:39.992290 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:14:39.992251 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj" podUID="4368b22b-f54e-44eb-8dcf-c3117cceb717" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.42:8080: connect: connection refused" Apr 23 18:14:49.992461 ip-10-0-136-172 kubenswrapper[2566]: I0423 
18:14:49.992431 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj" Apr 23 18:17:21.245110 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:17:21.245081 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-5kpdc_9334253b-6eff-4ad7-9cc7-5d96bdb994ad/console-operator/1.log" Apr 23 18:17:21.247775 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:17:21.247750 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-5kpdc_9334253b-6eff-4ad7-9cc7-5d96bdb994ad/console-operator/1.log" Apr 23 18:17:21.252086 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:17:21.252064 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_7fc0473024b4c48d914a6628102ac7a2/kube-rbac-proxy-crio/4.log" Apr 23 18:17:21.255223 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:17:21.255206 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_7fc0473024b4c48d914a6628102ac7a2/kube-rbac-proxy-crio/4.log" Apr 23 18:22:21.272251 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:21.272225 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-5kpdc_9334253b-6eff-4ad7-9cc7-5d96bdb994ad/console-operator/1.log" Apr 23 18:22:21.278083 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:21.278059 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-5kpdc_9334253b-6eff-4ad7-9cc7-5d96bdb994ad/console-operator/1.log" Apr 23 18:22:21.278686 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:21.278665 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_7fc0473024b4c48d914a6628102ac7a2/kube-rbac-proxy-crio/4.log" Apr 23 18:22:21.286797 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:21.286775 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_7fc0473024b4c48d914a6628102ac7a2/kube-rbac-proxy-crio/4.log" Apr 23 18:22:26.035887 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:26.035848 2566 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw"] Apr 23 18:22:26.036271 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:26.036240 2566 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw" podUID="5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6" containerName="kserve-container" containerID="cri-o://483eaa952ad05f98bf7cec116e07a7867c2b0806b7268de9d85619074c404ed8" gracePeriod=30 Apr 23 18:22:26.036377 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:26.036272 2566 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw" podUID="5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6" containerName="kube-rbac-proxy" containerID="cri-o://3b5f9986ead7f2e924d4dd53f2a4114aeb324d7ca55bf60a0346edf037ba23ca" gracePeriod=30 Apr 23 18:22:26.118276 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:26.118233 2566 kubelet.go:2537] "SyncLoop ADD" 
source="api" pods=["kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq"] Apr 23 18:22:26.118694 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:26.118678 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="514bdb74-73a0-4a62-a268-22a2bb73d08c" containerName="kube-rbac-proxy" Apr 23 18:22:26.118765 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:26.118695 2566 state_mem.go:107] "Deleted CPUSet assignment" podUID="514bdb74-73a0-4a62-a268-22a2bb73d08c" containerName="kube-rbac-proxy" Apr 23 18:22:26.118765 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:26.118734 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="514bdb74-73a0-4a62-a268-22a2bb73d08c" containerName="kserve-container" Apr 23 18:22:26.118765 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:26.118743 2566 state_mem.go:107] "Deleted CPUSet assignment" podUID="514bdb74-73a0-4a62-a268-22a2bb73d08c" containerName="kserve-container" Apr 23 18:22:26.118928 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:26.118844 2566 memory_manager.go:356] "RemoveStaleState removing state" podUID="514bdb74-73a0-4a62-a268-22a2bb73d08c" containerName="kube-rbac-proxy" Apr 23 18:22:26.118928 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:26.118860 2566 memory_manager.go:356] "RemoveStaleState removing state" podUID="514bdb74-73a0-4a62-a268-22a2bb73d08c" containerName="kserve-container" Apr 23 18:22:26.122204 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:26.122186 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq" Apr 23 18:22:26.124462 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:26.124445 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"error-404-isvc-a93f3-predictor-serving-cert\"" Apr 23 18:22:26.124560 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:26.124449 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"error-404-isvc-a93f3-kube-rbac-proxy-sar-config\"" Apr 23 18:22:26.132629 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:26.132603 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq"] Apr 23 18:22:26.208425 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:26.208394 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"error-404-isvc-a93f3-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/645360c6-749d-41eb-9e30-9ba98e4a59c6-error-404-isvc-a93f3-kube-rbac-proxy-sar-config\") pod \"error-404-isvc-a93f3-predictor-6ff446966c-c2xqq\" (UID: \"645360c6-749d-41eb-9e30-9ba98e4a59c6\") " pod="kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq" Apr 23 18:22:26.208561 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:26.208534 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/645360c6-749d-41eb-9e30-9ba98e4a59c6-proxy-tls\") pod \"error-404-isvc-a93f3-predictor-6ff446966c-c2xqq\" (UID: \"645360c6-749d-41eb-9e30-9ba98e4a59c6\") " pod="kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq" Apr 23 18:22:26.208606 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:26.208585 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-lhzzw\" (UniqueName: \"kubernetes.io/projected/645360c6-749d-41eb-9e30-9ba98e4a59c6-kube-api-access-lhzzw\") pod \"error-404-isvc-a93f3-predictor-6ff446966c-c2xqq\" (UID: \"645360c6-749d-41eb-9e30-9ba98e4a59c6\") " pod="kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq" Apr 23 18:22:26.310078 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:26.310047 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"error-404-isvc-a93f3-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/645360c6-749d-41eb-9e30-9ba98e4a59c6-error-404-isvc-a93f3-kube-rbac-proxy-sar-config\") pod \"error-404-isvc-a93f3-predictor-6ff446966c-c2xqq\" (UID: \"645360c6-749d-41eb-9e30-9ba98e4a59c6\") " pod="kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq" Apr 23 18:22:26.310274 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:26.310128 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/645360c6-749d-41eb-9e30-9ba98e4a59c6-proxy-tls\") pod \"error-404-isvc-a93f3-predictor-6ff446966c-c2xqq\" (UID: \"645360c6-749d-41eb-9e30-9ba98e4a59c6\") " pod="kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq" Apr 23 18:22:26.310274 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:26.310156 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lhzzw\" (UniqueName: \"kubernetes.io/projected/645360c6-749d-41eb-9e30-9ba98e4a59c6-kube-api-access-lhzzw\") pod \"error-404-isvc-a93f3-predictor-6ff446966c-c2xqq\" (UID: \"645360c6-749d-41eb-9e30-9ba98e4a59c6\") " pod="kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq" Apr 23 18:22:26.310772 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:26.310748 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"error-404-isvc-a93f3-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/645360c6-749d-41eb-9e30-9ba98e4a59c6-error-404-isvc-a93f3-kube-rbac-proxy-sar-config\") pod \"error-404-isvc-a93f3-predictor-6ff446966c-c2xqq\" (UID: \"645360c6-749d-41eb-9e30-9ba98e4a59c6\") " pod="kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq" Apr 23 18:22:26.313010 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:26.312981 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/645360c6-749d-41eb-9e30-9ba98e4a59c6-proxy-tls\") pod \"error-404-isvc-a93f3-predictor-6ff446966c-c2xqq\" (UID: \"645360c6-749d-41eb-9e30-9ba98e4a59c6\") " pod="kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq" Apr 23 18:22:26.318522 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:26.318501 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhzzw\" (UniqueName: \"kubernetes.io/projected/645360c6-749d-41eb-9e30-9ba98e4a59c6-kube-api-access-lhzzw\") pod \"error-404-isvc-a93f3-predictor-6ff446966c-c2xqq\" (UID: \"645360c6-749d-41eb-9e30-9ba98e4a59c6\") " pod="kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq" Apr 23 18:22:26.435758 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:26.435725 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq" Apr 23 18:22:26.565009 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:26.564980 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq"] Apr 23 18:22:26.567600 ip-10-0-136-172 kubenswrapper[2566]: W0423 18:22:26.567571 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod645360c6_749d_41eb_9e30_9ba98e4a59c6.slice/crio-7224fd1adeb1cfddd74a1772df26a2f3f5dc9844bcba280f38fc9080ddf7fa89 WatchSource:0}: Error finding container 7224fd1adeb1cfddd74a1772df26a2f3f5dc9844bcba280f38fc9080ddf7fa89: Status 404 returned error can't find the container with id 7224fd1adeb1cfddd74a1772df26a2f3f5dc9844bcba280f38fc9080ddf7fa89 Apr 23 18:22:26.569374 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:26.569358 2566 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 23 18:22:26.744168 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:26.744138 2566 generic.go:358] "Generic (PLEG): container finished" podID="5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6" containerID="3b5f9986ead7f2e924d4dd53f2a4114aeb324d7ca55bf60a0346edf037ba23ca" exitCode=2 Apr 23 18:22:26.744295 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:26.744198 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw" event={"ID":"5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6","Type":"ContainerDied","Data":"3b5f9986ead7f2e924d4dd53f2a4114aeb324d7ca55bf60a0346edf037ba23ca"} Apr 23 18:22:26.745846 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:26.745821 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq" event={"ID":"645360c6-749d-41eb-9e30-9ba98e4a59c6","Type":"ContainerStarted","Data":"c6d53b4e4cd5b15ce7591471be7c8794e639dab7f137e159956524a8b3a2612b"} Apr 23 18:22:26.745940 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:26.745856 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq" event={"ID":"645360c6-749d-41eb-9e30-9ba98e4a59c6","Type":"ContainerStarted","Data":"bff7926a850c28973932758aa6a090aa0a3862e4d1ea67a151e04ceb67e5dd66"} Apr 23 18:22:26.745940 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:26.745869 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq" event={"ID":"645360c6-749d-41eb-9e30-9ba98e4a59c6","Type":"ContainerStarted","Data":"7224fd1adeb1cfddd74a1772df26a2f3f5dc9844bcba280f38fc9080ddf7fa89"} Apr 23 18:22:26.746016 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:26.745965 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq" Apr 23 18:22:26.762175 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:26.762125 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq" podStartSLOduration=0.762113198 podStartE2EDuration="762.113198ms" podCreationTimestamp="2026-04-23 18:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:22:26.760873504 +0000 UTC m=+1806.056152101" 
watchObservedRunningTime="2026-04-23 18:22:26.762113198 +0000 UTC m=+1806.057391794" Apr 23 18:22:27.750328 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:27.750273 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq" Apr 23 18:22:27.751787 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:27.751756 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq" podUID="645360c6-749d-41eb-9e30-9ba98e4a59c6" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.43:8080: connect: connection refused" Apr 23 18:22:28.753897 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:28.753855 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq" podUID="645360c6-749d-41eb-9e30-9ba98e4a59c6" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.43:8080: connect: connection refused" Apr 23 18:22:28.799814 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:28.799771 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw" podUID="5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6" containerName="kube-rbac-proxy" probeResult="failure" output="Get \"https://10.133.0.41:8643/healthz\": dial tcp 10.133.0.41:8643: connect: connection refused" Apr 23 18:22:28.804047 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:28.804019 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw" podUID="5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.41:8080: connect: connection refused" Apr 23 18:22:29.290718 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:29.290695 2566 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw" Apr 23 18:22:29.440129 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:29.440034 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"error-404-isvc-5c6fb-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6-error-404-isvc-5c6fb-kube-rbac-proxy-sar-config\") pod \"5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6\" (UID: \"5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6\") " Apr 23 18:22:29.440129 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:29.440129 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpqvg\" (UniqueName: \"kubernetes.io/projected/5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6-kube-api-access-gpqvg\") pod \"5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6\" (UID: \"5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6\") " Apr 23 18:22:29.440397 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:29.440162 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6-proxy-tls\") pod \"5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6\" (UID: \"5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6\") " Apr 23 18:22:29.440523 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:29.440495 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6-error-404-isvc-5c6fb-kube-rbac-proxy-sar-config" (OuterVolumeSpecName: "error-404-isvc-5c6fb-kube-rbac-proxy-sar-config") pod "5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6" (UID: "5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6"). InnerVolumeSpecName "error-404-isvc-5c6fb-kube-rbac-proxy-sar-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:22:29.442350 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:29.442325 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6" (UID: "5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 18:22:29.442436 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:29.442419 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6-kube-api-access-gpqvg" (OuterVolumeSpecName: "kube-api-access-gpqvg") pod "5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6" (UID: "5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6"). InnerVolumeSpecName "kube-api-access-gpqvg". 
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 23 18:22:29.540715 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:29.540687 2566 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gpqvg\" (UniqueName: \"kubernetes.io/projected/5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6-kube-api-access-gpqvg\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\""
Apr 23 18:22:29.540715 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:29.540712 2566 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6-proxy-tls\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\""
Apr 23 18:22:29.540715 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:29.540723 2566 reconciler_common.go:299] "Volume detached for volume \"error-404-isvc-5c6fb-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6-error-404-isvc-5c6fb-kube-rbac-proxy-sar-config\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\""
Apr 23 18:22:29.759266 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:29.759166 2566 generic.go:358] "Generic (PLEG): container finished" podID="5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6" containerID="483eaa952ad05f98bf7cec116e07a7867c2b0806b7268de9d85619074c404ed8" exitCode=0
Apr 23 18:22:29.759266 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:29.759228 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw" event={"ID":"5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6","Type":"ContainerDied","Data":"483eaa952ad05f98bf7cec116e07a7867c2b0806b7268de9d85619074c404ed8"}
Apr 23 18:22:29.759266 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:29.759248 2566 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw"
Apr 23 18:22:29.759266 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:29.759262 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw" event={"ID":"5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6","Type":"ContainerDied","Data":"ae34dd4a1e2438e4ff95e5b103be46d4540bd34b8f2c549b248235dc9d143e36"}
Apr 23 18:22:29.759818 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:29.759282 2566 scope.go:117] "RemoveContainer" containerID="3b5f9986ead7f2e924d4dd53f2a4114aeb324d7ca55bf60a0346edf037ba23ca"
Apr 23 18:22:29.768908 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:29.768888 2566 scope.go:117] "RemoveContainer" containerID="483eaa952ad05f98bf7cec116e07a7867c2b0806b7268de9d85619074c404ed8"
Apr 23 18:22:29.776384 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:29.776368 2566 scope.go:117] "RemoveContainer" containerID="3b5f9986ead7f2e924d4dd53f2a4114aeb324d7ca55bf60a0346edf037ba23ca"
Apr 23 18:22:29.776637 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:22:29.776616 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b5f9986ead7f2e924d4dd53f2a4114aeb324d7ca55bf60a0346edf037ba23ca\": container with ID starting with 3b5f9986ead7f2e924d4dd53f2a4114aeb324d7ca55bf60a0346edf037ba23ca not found: ID does not exist" containerID="3b5f9986ead7f2e924d4dd53f2a4114aeb324d7ca55bf60a0346edf037ba23ca"
Apr 23 18:22:29.776694 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:29.776646 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b5f9986ead7f2e924d4dd53f2a4114aeb324d7ca55bf60a0346edf037ba23ca"} err="failed to get container status \"3b5f9986ead7f2e924d4dd53f2a4114aeb324d7ca55bf60a0346edf037ba23ca\": rpc error: code = NotFound desc = could not find container \"3b5f9986ead7f2e924d4dd53f2a4114aeb324d7ca55bf60a0346edf037ba23ca\": container with ID starting with 3b5f9986ead7f2e924d4dd53f2a4114aeb324d7ca55bf60a0346edf037ba23ca not found: ID does not exist"
Apr 23 18:22:29.776694 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:29.776662 2566 scope.go:117] "RemoveContainer" containerID="483eaa952ad05f98bf7cec116e07a7867c2b0806b7268de9d85619074c404ed8"
Apr 23 18:22:29.776894 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:22:29.776874 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"483eaa952ad05f98bf7cec116e07a7867c2b0806b7268de9d85619074c404ed8\": container with ID starting with 483eaa952ad05f98bf7cec116e07a7867c2b0806b7268de9d85619074c404ed8 not found: ID does not exist" containerID="483eaa952ad05f98bf7cec116e07a7867c2b0806b7268de9d85619074c404ed8"
Apr 23 18:22:29.776951 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:29.776905 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"483eaa952ad05f98bf7cec116e07a7867c2b0806b7268de9d85619074c404ed8"} err="failed to get container status \"483eaa952ad05f98bf7cec116e07a7867c2b0806b7268de9d85619074c404ed8\": rpc error: code = NotFound desc = could not find container \"483eaa952ad05f98bf7cec116e07a7867c2b0806b7268de9d85619074c404ed8\": container with ID starting with 483eaa952ad05f98bf7cec116e07a7867c2b0806b7268de9d85619074c404ed8 not found: ID does not exist"
Apr 23 18:22:29.781245 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:29.781224 2566 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw"]
Apr 23 18:22:29.785579 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:29.785561 2566 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-5c6fb-predictor-7b8548d59-n54tw"]
Apr 23 18:22:31.255172 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:31.255140 2566 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6" path="/var/lib/kubelet/pods/5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6/volumes"
Apr 23 18:22:33.758261 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:33.758234 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq"
Apr 23 18:22:33.758723 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:33.758698 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq" podUID="645360c6-749d-41eb-9e30-9ba98e4a59c6" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.43:8080: connect: connection refused"
Apr 23 18:22:43.758971 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:43.758927 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq" podUID="645360c6-749d-41eb-9e30-9ba98e4a59c6" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.43:8080: connect: connection refused"
Apr 23 18:22:53.759568 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:22:53.759526 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq" podUID="645360c6-749d-41eb-9e30-9ba98e4a59c6" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.43:8080: connect: connection refused"
Apr 23 18:23:03.759013 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:03.758973 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq" podUID="645360c6-749d-41eb-9e30-9ba98e4a59c6" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.43:8080: connect: connection refused"
Apr 23 18:23:13.760019 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:13.759987 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq"
Apr 23 18:23:15.853071 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:15.853040 2566 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj"]
Apr 23 18:23:15.853657 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:15.853356 2566 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj" podUID="4368b22b-f54e-44eb-8dcf-c3117cceb717" containerName="kserve-container" containerID="cri-o://2456b680fc3e2cec0376b35acf49dbccbd09e155ee4c260506a44711198152ab" gracePeriod=30
Apr 23 18:23:15.853657 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:15.853376 2566 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj" podUID="4368b22b-f54e-44eb-8dcf-c3117cceb717" containerName="kube-rbac-proxy" containerID="cri-o://4d987659b313f72bae223a3063a6aedbf35e8f02cd6fd88850e0df9c3a4b2ac4" gracePeriod=30
Apr 23 18:23:15.925752 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:15.925715 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt"]
Apr 23 18:23:15.926285 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:15.926270 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6" containerName="kserve-container"
Apr 23 18:23:15.926368 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:15.926288 2566 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6" containerName="kserve-container"
Apr 23 18:23:15.926368 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:15.926365 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6" containerName="kube-rbac-proxy"
Apr 23 18:23:15.926436 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:15.926373 2566 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6" containerName="kube-rbac-proxy"
Apr 23 18:23:15.926477 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:15.926465 2566 memory_manager.go:356] "RemoveStaleState removing state" podUID="5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6" containerName="kserve-container"
Apr 23 18:23:15.926517 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:15.926481 2566 memory_manager.go:356] "RemoveStaleState removing state" podUID="5d12a6f2-06cb-4a1e-92bd-d77fd200a7d6" containerName="kube-rbac-proxy"
Apr 23 18:23:15.930119 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:15.930102 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt"
Apr 23 18:23:15.932703 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:15.932671 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"error-404-isvc-87aee-predictor-serving-cert\""
Apr 23 18:23:15.932703 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:15.932681 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"error-404-isvc-87aee-kube-rbac-proxy-sar-config\""
Apr 23 18:23:15.940974 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:15.940949 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt"]
Apr 23 18:23:16.022339 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:16.022270 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtk84\" (UniqueName: \"kubernetes.io/projected/eb47bb72-d086-43d1-9dce-8f0a51cbcc01-kube-api-access-xtk84\") pod \"error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt\" (UID: \"eb47bb72-d086-43d1-9dce-8f0a51cbcc01\") " pod="kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt"
Apr 23 18:23:16.022520 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:16.022391 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eb47bb72-d086-43d1-9dce-8f0a51cbcc01-proxy-tls\") pod \"error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt\" (UID: \"eb47bb72-d086-43d1-9dce-8f0a51cbcc01\") " pod="kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt"
Apr 23 18:23:16.022520 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:16.022445 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"error-404-isvc-87aee-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/eb47bb72-d086-43d1-9dce-8f0a51cbcc01-error-404-isvc-87aee-kube-rbac-proxy-sar-config\") pod \"error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt\" (UID: \"eb47bb72-d086-43d1-9dce-8f0a51cbcc01\") " pod="kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt"
Apr 23 18:23:16.123464 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:16.123367 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xtk84\" (UniqueName: \"kubernetes.io/projected/eb47bb72-d086-43d1-9dce-8f0a51cbcc01-kube-api-access-xtk84\") pod \"error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt\" (UID: \"eb47bb72-d086-43d1-9dce-8f0a51cbcc01\") " pod="kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt"
Apr 23 18:23:16.123464 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:16.123415 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eb47bb72-d086-43d1-9dce-8f0a51cbcc01-proxy-tls\") pod \"error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt\" (UID: \"eb47bb72-d086-43d1-9dce-8f0a51cbcc01\") " pod="kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt"
Apr 23 18:23:16.123704 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:16.123540 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"error-404-isvc-87aee-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/eb47bb72-d086-43d1-9dce-8f0a51cbcc01-error-404-isvc-87aee-kube-rbac-proxy-sar-config\") pod \"error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt\" (UID: \"eb47bb72-d086-43d1-9dce-8f0a51cbcc01\") " pod="kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt"
Apr 23 18:23:16.124185 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:16.124164 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"error-404-isvc-87aee-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/eb47bb72-d086-43d1-9dce-8f0a51cbcc01-error-404-isvc-87aee-kube-rbac-proxy-sar-config\") pod \"error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt\" (UID: \"eb47bb72-d086-43d1-9dce-8f0a51cbcc01\") " pod="kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt"
Apr 23 18:23:16.125973 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:16.125953 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eb47bb72-d086-43d1-9dce-8f0a51cbcc01-proxy-tls\") pod \"error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt\" (UID: \"eb47bb72-d086-43d1-9dce-8f0a51cbcc01\") " pod="kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt"
Apr 23 18:23:16.131481 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:16.131460 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtk84\" (UniqueName: \"kubernetes.io/projected/eb47bb72-d086-43d1-9dce-8f0a51cbcc01-kube-api-access-xtk84\") pod \"error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt\" (UID: \"eb47bb72-d086-43d1-9dce-8f0a51cbcc01\") " pod="kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt"
Apr 23 18:23:16.243158 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:16.243126 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt"
Apr 23 18:23:16.368941 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:16.368910 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt"]
Apr 23 18:23:16.372707 ip-10-0-136-172 kubenswrapper[2566]: W0423 18:23:16.372680 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb47bb72_d086_43d1_9dce_8f0a51cbcc01.slice/crio-a12e88eb4fd69fcd886a12fd843f044b5bb83238b47e7a917c1484841bb09908 WatchSource:0}: Error finding container a12e88eb4fd69fcd886a12fd843f044b5bb83238b47e7a917c1484841bb09908: Status 404 returned error can't find the container with id a12e88eb4fd69fcd886a12fd843f044b5bb83238b47e7a917c1484841bb09908
Apr 23 18:23:16.929558 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:16.929525 2566 generic.go:358] "Generic (PLEG): container finished" podID="4368b22b-f54e-44eb-8dcf-c3117cceb717" containerID="4d987659b313f72bae223a3063a6aedbf35e8f02cd6fd88850e0df9c3a4b2ac4" exitCode=2
Apr 23 18:23:16.929986 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:16.929601 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj" event={"ID":"4368b22b-f54e-44eb-8dcf-c3117cceb717","Type":"ContainerDied","Data":"4d987659b313f72bae223a3063a6aedbf35e8f02cd6fd88850e0df9c3a4b2ac4"}
Apr 23 18:23:16.931227 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:16.931196 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt" event={"ID":"eb47bb72-d086-43d1-9dce-8f0a51cbcc01","Type":"ContainerStarted","Data":"e14d60e69bc1589a7cef04f0a150234bb6f4181a12af359eb8a76ec5fe0c3058"}
Apr 23 18:23:16.931374 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:16.931231 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt" event={"ID":"eb47bb72-d086-43d1-9dce-8f0a51cbcc01","Type":"ContainerStarted","Data":"5b69945e138ce20428459b6fa3a4f2e102276b48004a1bcc2275b3700fd88b7f"}
Apr 23 18:23:16.931374 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:16.931244 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt" event={"ID":"eb47bb72-d086-43d1-9dce-8f0a51cbcc01","Type":"ContainerStarted","Data":"a12e88eb4fd69fcd886a12fd843f044b5bb83238b47e7a917c1484841bb09908"}
Apr 23 18:23:16.931374 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:16.931328 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt"
Apr 23 18:23:16.947509 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:16.947470 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt" podStartSLOduration=1.947455625 podStartE2EDuration="1.947455625s" podCreationTimestamp="2026-04-23 18:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:23:16.946886953 +0000 UTC m=+1856.242165550" watchObservedRunningTime="2026-04-23 18:23:16.947455625 +0000 UTC m=+1856.242734250"
Apr 23 18:23:17.934686 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:17.934657 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt"
Apr 23 18:23:17.936063 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:17.936036 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt" podUID="eb47bb72-d086-43d1-9dce-8f0a51cbcc01" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.44:8080: connect: connection refused"
Apr 23 18:23:18.937823 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:18.937779 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt" podUID="eb47bb72-d086-43d1-9dce-8f0a51cbcc01" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.44:8080: connect: connection refused"
Apr 23 18:23:19.101539 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:19.101512 2566 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj"
Apr 23 18:23:19.250060 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:19.249977 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"error-404-isvc-e1fac-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/4368b22b-f54e-44eb-8dcf-c3117cceb717-error-404-isvc-e1fac-kube-rbac-proxy-sar-config\") pod \"4368b22b-f54e-44eb-8dcf-c3117cceb717\" (UID: \"4368b22b-f54e-44eb-8dcf-c3117cceb717\") "
Apr 23 18:23:19.250060 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:19.250029 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtdlq\" (UniqueName: \"kubernetes.io/projected/4368b22b-f54e-44eb-8dcf-c3117cceb717-kube-api-access-jtdlq\") pod \"4368b22b-f54e-44eb-8dcf-c3117cceb717\" (UID: \"4368b22b-f54e-44eb-8dcf-c3117cceb717\") "
Apr 23 18:23:19.250280 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:19.250100 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4368b22b-f54e-44eb-8dcf-c3117cceb717-proxy-tls\") pod \"4368b22b-f54e-44eb-8dcf-c3117cceb717\" (UID: \"4368b22b-f54e-44eb-8dcf-c3117cceb717\") "
Apr 23 18:23:19.250380 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:19.250352 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4368b22b-f54e-44eb-8dcf-c3117cceb717-error-404-isvc-e1fac-kube-rbac-proxy-sar-config" (OuterVolumeSpecName: "error-404-isvc-e1fac-kube-rbac-proxy-sar-config") pod "4368b22b-f54e-44eb-8dcf-c3117cceb717" (UID: "4368b22b-f54e-44eb-8dcf-c3117cceb717"). InnerVolumeSpecName "error-404-isvc-e1fac-kube-rbac-proxy-sar-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 18:23:19.252326 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:19.252275 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4368b22b-f54e-44eb-8dcf-c3117cceb717-kube-api-access-jtdlq" (OuterVolumeSpecName: "kube-api-access-jtdlq") pod "4368b22b-f54e-44eb-8dcf-c3117cceb717" (UID: "4368b22b-f54e-44eb-8dcf-c3117cceb717"). InnerVolumeSpecName "kube-api-access-jtdlq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 23 18:23:19.252440 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:19.252389 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4368b22b-f54e-44eb-8dcf-c3117cceb717-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "4368b22b-f54e-44eb-8dcf-c3117cceb717" (UID: "4368b22b-f54e-44eb-8dcf-c3117cceb717"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 18:23:19.351085 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:19.351057 2566 reconciler_common.go:299] "Volume detached for volume \"error-404-isvc-e1fac-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/4368b22b-f54e-44eb-8dcf-c3117cceb717-error-404-isvc-e1fac-kube-rbac-proxy-sar-config\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\""
Apr 23 18:23:19.351085 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:19.351084 2566 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jtdlq\" (UniqueName: \"kubernetes.io/projected/4368b22b-f54e-44eb-8dcf-c3117cceb717-kube-api-access-jtdlq\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\""
Apr 23 18:23:19.351281 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:19.351096 2566 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4368b22b-f54e-44eb-8dcf-c3117cceb717-proxy-tls\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\""
Apr 23 18:23:19.942791 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:19.942750 2566 generic.go:358] "Generic (PLEG): container finished" podID="4368b22b-f54e-44eb-8dcf-c3117cceb717" containerID="2456b680fc3e2cec0376b35acf49dbccbd09e155ee4c260506a44711198152ab" exitCode=0
Apr 23 18:23:19.943262 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:19.942832 2566 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj"
Apr 23 18:23:19.943262 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:19.942839 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj" event={"ID":"4368b22b-f54e-44eb-8dcf-c3117cceb717","Type":"ContainerDied","Data":"2456b680fc3e2cec0376b35acf49dbccbd09e155ee4c260506a44711198152ab"}
Apr 23 18:23:19.943262 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:19.942877 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj" event={"ID":"4368b22b-f54e-44eb-8dcf-c3117cceb717","Type":"ContainerDied","Data":"279ba070e8cb93318ba8154acdf0dbfc5c0c5d2276f6934612bcbcdf4c85f1d4"}
Apr 23 18:23:19.943262 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:19.942894 2566 scope.go:117] "RemoveContainer" containerID="4d987659b313f72bae223a3063a6aedbf35e8f02cd6fd88850e0df9c3a4b2ac4"
Apr 23 18:23:19.951436 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:19.951416 2566 scope.go:117] "RemoveContainer" containerID="2456b680fc3e2cec0376b35acf49dbccbd09e155ee4c260506a44711198152ab"
Apr 23 18:23:19.958969 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:19.958951 2566 scope.go:117] "RemoveContainer" containerID="4d987659b313f72bae223a3063a6aedbf35e8f02cd6fd88850e0df9c3a4b2ac4"
Apr 23 18:23:19.959194 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:23:19.959175 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d987659b313f72bae223a3063a6aedbf35e8f02cd6fd88850e0df9c3a4b2ac4\": container with ID starting with 4d987659b313f72bae223a3063a6aedbf35e8f02cd6fd88850e0df9c3a4b2ac4 not found: ID does not exist" containerID="4d987659b313f72bae223a3063a6aedbf35e8f02cd6fd88850e0df9c3a4b2ac4"
Apr 23 18:23:19.959248 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:19.959212 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d987659b313f72bae223a3063a6aedbf35e8f02cd6fd88850e0df9c3a4b2ac4"} err="failed to get container status \"4d987659b313f72bae223a3063a6aedbf35e8f02cd6fd88850e0df9c3a4b2ac4\": rpc error: code = NotFound desc = could not find container \"4d987659b313f72bae223a3063a6aedbf35e8f02cd6fd88850e0df9c3a4b2ac4\": container with ID starting with 4d987659b313f72bae223a3063a6aedbf35e8f02cd6fd88850e0df9c3a4b2ac4 not found: ID does not exist"
Apr 23 18:23:19.959248 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:19.959230 2566 scope.go:117] "RemoveContainer" containerID="2456b680fc3e2cec0376b35acf49dbccbd09e155ee4c260506a44711198152ab"
Apr 23 18:23:19.959562 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:23:19.959545 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2456b680fc3e2cec0376b35acf49dbccbd09e155ee4c260506a44711198152ab\": container with ID starting with 2456b680fc3e2cec0376b35acf49dbccbd09e155ee4c260506a44711198152ab not found: ID does not exist" containerID="2456b680fc3e2cec0376b35acf49dbccbd09e155ee4c260506a44711198152ab"
Apr 23 18:23:19.959617 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:19.959568 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2456b680fc3e2cec0376b35acf49dbccbd09e155ee4c260506a44711198152ab"} err="failed to get container status \"2456b680fc3e2cec0376b35acf49dbccbd09e155ee4c260506a44711198152ab\": rpc error: code = NotFound desc = could not find container \"2456b680fc3e2cec0376b35acf49dbccbd09e155ee4c260506a44711198152ab\": container with ID starting with 2456b680fc3e2cec0376b35acf49dbccbd09e155ee4c260506a44711198152ab not found: ID does not exist"
Apr 23 18:23:19.962074 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:19.962055 2566 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj"]
Apr 23 18:23:19.970277 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:19.970259 2566 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-e1fac-predictor-779fdf88dc-7mrqj"]
Apr 23 18:23:21.255363 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:21.255329 2566 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4368b22b-f54e-44eb-8dcf-c3117cceb717" path="/var/lib/kubelet/pods/4368b22b-f54e-44eb-8dcf-c3117cceb717/volumes"
Apr 23 18:23:23.942922 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:23.942892 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt"
Apr 23 18:23:23.943475 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:23.943444 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt" podUID="eb47bb72-d086-43d1-9dce-8f0a51cbcc01" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.44:8080: connect: connection refused"
Apr 23 18:23:33.943574 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:33.943485 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt" podUID="eb47bb72-d086-43d1-9dce-8f0a51cbcc01" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.44:8080: connect: connection refused"
Apr 23 18:23:36.219703 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:36.219599 2566 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq"]
Apr 23 18:23:36.220162 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:36.219986 2566 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq" podUID="645360c6-749d-41eb-9e30-9ba98e4a59c6" containerName="kserve-container" containerID="cri-o://bff7926a850c28973932758aa6a090aa0a3862e4d1ea67a151e04ceb67e5dd66" gracePeriod=30
Apr 23 18:23:36.220162 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:36.220027 2566 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq" podUID="645360c6-749d-41eb-9e30-9ba98e4a59c6" containerName="kube-rbac-proxy" containerID="cri-o://c6d53b4e4cd5b15ce7591471be7c8794e639dab7f137e159956524a8b3a2612b" gracePeriod=30
Apr 23 18:23:36.408513 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:36.408478 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4"]
Apr 23 18:23:36.408963 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:36.408948 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4368b22b-f54e-44eb-8dcf-c3117cceb717" containerName="kube-rbac-proxy"
Apr 23 18:23:36.409013 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:36.408965 2566 state_mem.go:107] "Deleted CPUSet assignment" podUID="4368b22b-f54e-44eb-8dcf-c3117cceb717" containerName="kube-rbac-proxy"
Apr 23 18:23:36.409013 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:36.408993 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4368b22b-f54e-44eb-8dcf-c3117cceb717" containerName="kserve-container"
Apr 23 18:23:36.409013 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:36.409002 2566 state_mem.go:107] "Deleted CPUSet assignment" podUID="4368b22b-f54e-44eb-8dcf-c3117cceb717" containerName="kserve-container"
Apr 23 18:23:36.409115 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:36.409069 2566 memory_manager.go:356] "RemoveStaleState removing state" podUID="4368b22b-f54e-44eb-8dcf-c3117cceb717" containerName="kube-rbac-proxy"
Apr 23 18:23:36.409115 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:36.409086 2566 memory_manager.go:356] "RemoveStaleState removing state" podUID="4368b22b-f54e-44eb-8dcf-c3117cceb717" containerName="kserve-container"
Apr 23 18:23:36.412467 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:36.412451 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4"
Apr 23 18:23:36.414257 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:36.414234 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"error-404-isvc-7efef-predictor-serving-cert\""
Apr 23 18:23:36.414370 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:36.414276 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"error-404-isvc-7efef-kube-rbac-proxy-sar-config\""
Apr 23 18:23:36.419950 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:36.419924 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4"]
Apr 23 18:23:36.504066 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:36.503957 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"error-404-isvc-7efef-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/1bc0f098-a86d-4d55-9c5e-1a36edbda04f-error-404-isvc-7efef-kube-rbac-proxy-sar-config\") pod \"error-404-isvc-7efef-predictor-86c486dbf-v2nw4\" (UID: \"1bc0f098-a86d-4d55-9c5e-1a36edbda04f\") " pod="kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4"
Apr 23 18:23:36.504066 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:36.504025 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1bc0f098-a86d-4d55-9c5e-1a36edbda04f-proxy-tls\") pod \"error-404-isvc-7efef-predictor-86c486dbf-v2nw4\" (UID: \"1bc0f098-a86d-4d55-9c5e-1a36edbda04f\") " pod="kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4"
Apr 23 18:23:36.504270 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:36.504176 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdmkr\" (UniqueName: \"kubernetes.io/projected/1bc0f098-a86d-4d55-9c5e-1a36edbda04f-kube-api-access-zdmkr\") pod \"error-404-isvc-7efef-predictor-86c486dbf-v2nw4\" (UID: \"1bc0f098-a86d-4d55-9c5e-1a36edbda04f\") " pod="kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4"
Apr 23 18:23:36.604981 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:36.604950 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"error-404-isvc-7efef-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/1bc0f098-a86d-4d55-9c5e-1a36edbda04f-error-404-isvc-7efef-kube-rbac-proxy-sar-config\") pod \"error-404-isvc-7efef-predictor-86c486dbf-v2nw4\" (UID: \"1bc0f098-a86d-4d55-9c5e-1a36edbda04f\") " pod="kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4"
Apr 23 18:23:36.605172 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:36.604992 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1bc0f098-a86d-4d55-9c5e-1a36edbda04f-proxy-tls\") pod \"error-404-isvc-7efef-predictor-86c486dbf-v2nw4\" (UID: \"1bc0f098-a86d-4d55-9c5e-1a36edbda04f\") " pod="kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4"
Apr 23 18:23:36.605172 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:36.605116 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zdmkr\" (UniqueName: \"kubernetes.io/projected/1bc0f098-a86d-4d55-9c5e-1a36edbda04f-kube-api-access-zdmkr\") pod \"error-404-isvc-7efef-predictor-86c486dbf-v2nw4\" (UID: \"1bc0f098-a86d-4d55-9c5e-1a36edbda04f\") " pod="kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4"
Apr 23 18:23:36.605290 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:23:36.605170 2566 secret.go:189] Couldn't get secret kserve-ci-e2e-test/error-404-isvc-7efef-predictor-serving-cert: secret "error-404-isvc-7efef-predictor-serving-cert" not found
Apr 23 18:23:36.605290 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:23:36.605262 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1bc0f098-a86d-4d55-9c5e-1a36edbda04f-proxy-tls podName:1bc0f098-a86d-4d55-9c5e-1a36edbda04f nodeName:}" failed. No retries permitted until 2026-04-23 18:23:37.105244224 +0000 UTC m=+1876.400522808 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/1bc0f098-a86d-4d55-9c5e-1a36edbda04f-proxy-tls") pod "error-404-isvc-7efef-predictor-86c486dbf-v2nw4" (UID: "1bc0f098-a86d-4d55-9c5e-1a36edbda04f") : secret "error-404-isvc-7efef-predictor-serving-cert" not found
Apr 23 18:23:36.605738 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:36.605718 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"error-404-isvc-7efef-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/1bc0f098-a86d-4d55-9c5e-1a36edbda04f-error-404-isvc-7efef-kube-rbac-proxy-sar-config\") pod \"error-404-isvc-7efef-predictor-86c486dbf-v2nw4\" (UID: \"1bc0f098-a86d-4d55-9c5e-1a36edbda04f\") " pod="kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4"
Apr 23 18:23:36.613824 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:36.613801 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdmkr\" (UniqueName: \"kubernetes.io/projected/1bc0f098-a86d-4d55-9c5e-1a36edbda04f-kube-api-access-zdmkr\") pod \"error-404-isvc-7efef-predictor-86c486dbf-v2nw4\" (UID: \"1bc0f098-a86d-4d55-9c5e-1a36edbda04f\") " pod="kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4"
Apr 23 18:23:37.010187 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:37.010152 2566 generic.go:358] "Generic (PLEG): container finished" podID="645360c6-749d-41eb-9e30-9ba98e4a59c6" containerID="c6d53b4e4cd5b15ce7591471be7c8794e639dab7f137e159956524a8b3a2612b" exitCode=2
Apr 23 18:23:37.010348 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:37.010192 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq" event={"ID":"645360c6-749d-41eb-9e30-9ba98e4a59c6","Type":"ContainerDied","Data":"c6d53b4e4cd5b15ce7591471be7c8794e639dab7f137e159956524a8b3a2612b"}
Apr 23 18:23:37.109119 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:37.109080 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1bc0f098-a86d-4d55-9c5e-1a36edbda04f-proxy-tls\") pod \"error-404-isvc-7efef-predictor-86c486dbf-v2nw4\" (UID: \"1bc0f098-a86d-4d55-9c5e-1a36edbda04f\") " pod="kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4"
Apr 23 18:23:37.111590 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:37.111559 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1bc0f098-a86d-4d55-9c5e-1a36edbda04f-proxy-tls\") pod \"error-404-isvc-7efef-predictor-86c486dbf-v2nw4\" (UID: \"1bc0f098-a86d-4d55-9c5e-1a36edbda04f\") " pod="kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4"
Apr 23 18:23:37.325152 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:37.325114 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4"
Apr 23 18:23:37.452509 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:37.452484 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4"]
Apr 23 18:23:37.454479 ip-10-0-136-172 kubenswrapper[2566]: W0423 18:23:37.454437 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bc0f098_a86d_4d55_9c5e_1a36edbda04f.slice/crio-7d1ed9f9d7e37c4cad3e8b81dd1c58dec1c09114261df2cafcbe1a6bc9cd3a06 WatchSource:0}: Error finding container 7d1ed9f9d7e37c4cad3e8b81dd1c58dec1c09114261df2cafcbe1a6bc9cd3a06: Status 404 returned error can't find the container with id 7d1ed9f9d7e37c4cad3e8b81dd1c58dec1c09114261df2cafcbe1a6bc9cd3a06
Apr 23 18:23:38.014920 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:38.014823 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4" event={"ID":"1bc0f098-a86d-4d55-9c5e-1a36edbda04f","Type":"ContainerStarted","Data":"84309d98e080905a78382edf3e3f1c7a2c1094d62678bd7e7b95c5d976a9c8cf"}
Apr 23 18:23:38.014920 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:38.014868 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4" event={"ID":"1bc0f098-a86d-4d55-9c5e-1a36edbda04f","Type":"ContainerStarted","Data":"3f2d6d231fe34e068040c0b8e9938f9a32cd30838102c80c0ab9956dd76be13a"}
Apr 23 18:23:38.014920 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:38.014882 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4" event={"ID":"1bc0f098-a86d-4d55-9c5e-1a36edbda04f","Type":"ContainerStarted","Data":"7d1ed9f9d7e37c4cad3e8b81dd1c58dec1c09114261df2cafcbe1a6bc9cd3a06"}
Apr 23 18:23:38.015138 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:38.014972 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4"
Apr 23 18:23:38.033462 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:38.033404 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4" podStartSLOduration=2.033386162 podStartE2EDuration="2.033386162s" podCreationTimestamp="2026-04-23 18:23:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:23:38.031522518 +0000 UTC m=+1877.326801124" watchObservedRunningTime="2026-04-23 18:23:38.033386162 +0000 UTC m=+1877.328664759"
Apr 23 18:23:38.754562 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:38.754517 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq" podUID="645360c6-749d-41eb-9e30-9ba98e4a59c6" containerName="kube-rbac-proxy" probeResult="failure" output="Get \"https://10.133.0.43:8643/healthz\": dial tcp 10.133.0.43:8643: connect: connection refused"
Apr 23 18:23:39.019057 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:39.018971 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4"
Apr 23 18:23:39.020499 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:39.020465 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4" podUID="1bc0f098-a86d-4d55-9c5e-1a36edbda04f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.45:8080: connect: connection refused"
Apr 23 18:23:39.473853 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:39.473831 2566 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq"
Apr 23 18:23:39.536757 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:39.536721 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhzzw\" (UniqueName: \"kubernetes.io/projected/645360c6-749d-41eb-9e30-9ba98e4a59c6-kube-api-access-lhzzw\") pod \"645360c6-749d-41eb-9e30-9ba98e4a59c6\" (UID: \"645360c6-749d-41eb-9e30-9ba98e4a59c6\") "
Apr 23 18:23:39.536945 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:39.536803 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/645360c6-749d-41eb-9e30-9ba98e4a59c6-proxy-tls\") pod \"645360c6-749d-41eb-9e30-9ba98e4a59c6\" (UID: \"645360c6-749d-41eb-9e30-9ba98e4a59c6\") "
Apr 23 18:23:39.536945 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:39.536842 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"error-404-isvc-a93f3-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/645360c6-749d-41eb-9e30-9ba98e4a59c6-error-404-isvc-a93f3-kube-rbac-proxy-sar-config\") pod \"645360c6-749d-41eb-9e30-9ba98e4a59c6\" (UID: \"645360c6-749d-41eb-9e30-9ba98e4a59c6\") "
Apr 23 18:23:39.537224 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:39.537198 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/645360c6-749d-41eb-9e30-9ba98e4a59c6-error-404-isvc-a93f3-kube-rbac-proxy-sar-config" (OuterVolumeSpecName: "error-404-isvc-a93f3-kube-rbac-proxy-sar-config") pod "645360c6-749d-41eb-9e30-9ba98e4a59c6" (UID: "645360c6-749d-41eb-9e30-9ba98e4a59c6"). InnerVolumeSpecName "error-404-isvc-a93f3-kube-rbac-proxy-sar-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 18:23:39.539019 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:39.538992 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/645360c6-749d-41eb-9e30-9ba98e4a59c6-kube-api-access-lhzzw" (OuterVolumeSpecName: "kube-api-access-lhzzw") pod "645360c6-749d-41eb-9e30-9ba98e4a59c6" (UID: "645360c6-749d-41eb-9e30-9ba98e4a59c6"). InnerVolumeSpecName "kube-api-access-lhzzw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 23 18:23:39.539268 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:39.539247 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/645360c6-749d-41eb-9e30-9ba98e4a59c6-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "645360c6-749d-41eb-9e30-9ba98e4a59c6" (UID: "645360c6-749d-41eb-9e30-9ba98e4a59c6"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 18:23:39.638086 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:39.637996 2566 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/645360c6-749d-41eb-9e30-9ba98e4a59c6-proxy-tls\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\""
Apr 23 18:23:39.638086 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:39.638023 2566 reconciler_common.go:299] "Volume detached for volume \"error-404-isvc-a93f3-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/645360c6-749d-41eb-9e30-9ba98e4a59c6-error-404-isvc-a93f3-kube-rbac-proxy-sar-config\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\""
Apr 23 18:23:39.638086 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:39.638034 2566 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lhzzw\" (UniqueName: \"kubernetes.io/projected/645360c6-749d-41eb-9e30-9ba98e4a59c6-kube-api-access-lhzzw\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\""
Apr 23 18:23:40.023860 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:40.023776 2566 generic.go:358] "Generic (PLEG): container finished" podID="645360c6-749d-41eb-9e30-9ba98e4a59c6" containerID="bff7926a850c28973932758aa6a090aa0a3862e4d1ea67a151e04ceb67e5dd66" exitCode=0
Apr 23 18:23:40.024244 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:40.023858 2566 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq"
Apr 23 18:23:40.024244 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:40.023865 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq" event={"ID":"645360c6-749d-41eb-9e30-9ba98e4a59c6","Type":"ContainerDied","Data":"bff7926a850c28973932758aa6a090aa0a3862e4d1ea67a151e04ceb67e5dd66"}
Apr 23 18:23:40.024244 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:40.023903 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq" event={"ID":"645360c6-749d-41eb-9e30-9ba98e4a59c6","Type":"ContainerDied","Data":"7224fd1adeb1cfddd74a1772df26a2f3f5dc9844bcba280f38fc9080ddf7fa89"}
Apr 23 18:23:40.024244 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:40.023934 2566 scope.go:117] "RemoveContainer" containerID="c6d53b4e4cd5b15ce7591471be7c8794e639dab7f137e159956524a8b3a2612b"
Apr 23 18:23:40.024485 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:40.024396 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4" podUID="1bc0f098-a86d-4d55-9c5e-1a36edbda04f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.45:8080: connect: connection refused"
Apr 23 18:23:40.032958 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:40.032933 2566 scope.go:117] "RemoveContainer" containerID="bff7926a850c28973932758aa6a090aa0a3862e4d1ea67a151e04ceb67e5dd66"
Apr 23 18:23:40.041075 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:40.041055 2566 scope.go:117] "RemoveContainer" containerID="c6d53b4e4cd5b15ce7591471be7c8794e639dab7f137e159956524a8b3a2612b"
Apr 23 18:23:40.041363 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:23:40.041340 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6d53b4e4cd5b15ce7591471be7c8794e639dab7f137e159956524a8b3a2612b\": container with ID starting with c6d53b4e4cd5b15ce7591471be7c8794e639dab7f137e159956524a8b3a2612b not found: ID does not exist" containerID="c6d53b4e4cd5b15ce7591471be7c8794e639dab7f137e159956524a8b3a2612b"
Apr 23 18:23:40.041417 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:40.041373 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6d53b4e4cd5b15ce7591471be7c8794e639dab7f137e159956524a8b3a2612b"} err="failed to get container status \"c6d53b4e4cd5b15ce7591471be7c8794e639dab7f137e159956524a8b3a2612b\": rpc error: code = NotFound desc = could not find container \"c6d53b4e4cd5b15ce7591471be7c8794e639dab7f137e159956524a8b3a2612b\": container with ID starting with c6d53b4e4cd5b15ce7591471be7c8794e639dab7f137e159956524a8b3a2612b not found: ID does not exist"
Apr 23 18:23:40.041417 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:40.041391 2566 scope.go:117] "RemoveContainer" containerID="bff7926a850c28973932758aa6a090aa0a3862e4d1ea67a151e04ceb67e5dd66"
Apr 23 18:23:40.041655 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:23:40.041634 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bff7926a850c28973932758aa6a090aa0a3862e4d1ea67a151e04ceb67e5dd66\": container with ID starting with bff7926a850c28973932758aa6a090aa0a3862e4d1ea67a151e04ceb67e5dd66 not found: ID does not exist" containerID="bff7926a850c28973932758aa6a090aa0a3862e4d1ea67a151e04ceb67e5dd66"
Apr 23 18:23:40.041712 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:40.041661 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bff7926a850c28973932758aa6a090aa0a3862e4d1ea67a151e04ceb67e5dd66"} err="failed to get container status \"bff7926a850c28973932758aa6a090aa0a3862e4d1ea67a151e04ceb67e5dd66\": rpc error: code = NotFound desc = could not find container \"bff7926a850c28973932758aa6a090aa0a3862e4d1ea67a151e04ceb67e5dd66\": container with ID starting with bff7926a850c28973932758aa6a090aa0a3862e4d1ea67a151e04ceb67e5dd66 not found: ID does not exist"
Apr 23 18:23:40.045663 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:40.045641 2566 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq"]
Apr 23 18:23:40.051215 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:40.051192 2566 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-a93f3-predictor-6ff446966c-c2xqq"]
Apr 23 18:23:41.254100 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:41.254066 2566 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="645360c6-749d-41eb-9e30-9ba98e4a59c6" path="/var/lib/kubelet/pods/645360c6-749d-41eb-9e30-9ba98e4a59c6/volumes"
Apr 23 18:23:43.944098 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:43.944058 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt" podUID="eb47bb72-d086-43d1-9dce-8f0a51cbcc01" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.44:8080: connect: connection refused"
Apr 23 18:23:45.028813 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:45.028785 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4"
Apr 23 18:23:45.029381 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:45.029355 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4" podUID="1bc0f098-a86d-4d55-9c5e-1a36edbda04f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.45:8080: connect: connection refused"
Apr 23 18:23:53.943944 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:53.943895 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt" podUID="eb47bb72-d086-43d1-9dce-8f0a51cbcc01" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.44:8080: connect: connection refused"
Apr 23 18:23:55.030326 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:23:55.030266 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4" podUID="1bc0f098-a86d-4d55-9c5e-1a36edbda04f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.45:8080: connect: connection refused"
Apr 23 18:24:03.944371 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:24:03.944339 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt"
Apr 23 18:24:05.030122 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:24:05.030085 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4" podUID="1bc0f098-a86d-4d55-9c5e-1a36edbda04f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.45:8080: connect: connection refused"
Apr 23 18:24:15.030257 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:24:15.030216 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4" podUID="1bc0f098-a86d-4d55-9c5e-1a36edbda04f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.45:8080: connect: connection refused"
Apr 23 18:24:25.030147 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:24:25.030114 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4"
Apr 23 18:27:21.300904 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:27:21.300868 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-5kpdc_9334253b-6eff-4ad7-9cc7-5d96bdb994ad/console-operator/1.log"
Apr 23 18:27:21.307817 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:27:21.307793 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_7fc0473024b4c48d914a6628102ac7a2/kube-rbac-proxy-crio/4.log"
Apr 23 18:27:21.309382 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:27:21.309364 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-5kpdc_9334253b-6eff-4ad7-9cc7-5d96bdb994ad/console-operator/1.log"
Apr 23 18:27:21.315624 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:27:21.315609 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_7fc0473024b4c48d914a6628102ac7a2/kube-rbac-proxy-crio/4.log"
Apr 23 18:32:21.328025 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:32:21.327903 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-5kpdc_9334253b-6eff-4ad7-9cc7-5d96bdb994ad/console-operator/1.log"
Apr 23 18:32:21.334402 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:32:21.334378 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_7fc0473024b4c48d914a6628102ac7a2/kube-rbac-proxy-crio/4.log"
Apr 23 18:32:21.336751 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:32:21.336734 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-5kpdc_9334253b-6eff-4ad7-9cc7-5d96bdb994ad/console-operator/1.log"
Apr 23 18:32:21.342930 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:32:21.342913 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_7fc0473024b4c48d914a6628102ac7a2/kube-rbac-proxy-crio/4.log"
Apr 23 18:32:51.273962 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:32:51.273929 2566 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4"]
Apr 23 18:32:51.274461 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:32:51.274211 2566 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4" podUID="1bc0f098-a86d-4d55-9c5e-1a36edbda04f" containerName="kserve-container" containerID="cri-o://3f2d6d231fe34e068040c0b8e9938f9a32cd30838102c80c0ab9956dd76be13a" gracePeriod=30
Apr 23 18:32:51.274461 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:32:51.274333 2566 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4" podUID="1bc0f098-a86d-4d55-9c5e-1a36edbda04f" containerName="kube-rbac-proxy" containerID="cri-o://84309d98e080905a78382edf3e3f1c7a2c1094d62678bd7e7b95c5d976a9c8cf" gracePeriod=30
Apr 23 18:32:51.961153 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:32:51.961113 2566 generic.go:358] "Generic (PLEG): container finished" podID="1bc0f098-a86d-4d55-9c5e-1a36edbda04f" containerID="84309d98e080905a78382edf3e3f1c7a2c1094d62678bd7e7b95c5d976a9c8cf" exitCode=2
Apr 23 18:32:51.961349 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:32:51.961187 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4" event={"ID":"1bc0f098-a86d-4d55-9c5e-1a36edbda04f","Type":"ContainerDied","Data":"84309d98e080905a78382edf3e3f1c7a2c1094d62678bd7e7b95c5d976a9c8cf"}
Apr 23 18:32:54.313968 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:32:54.313943 2566 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4"
Apr 23 18:32:54.462213 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:32:54.462113 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1bc0f098-a86d-4d55-9c5e-1a36edbda04f-proxy-tls\") pod \"1bc0f098-a86d-4d55-9c5e-1a36edbda04f\" (UID: \"1bc0f098-a86d-4d55-9c5e-1a36edbda04f\") "
Apr 23 18:32:54.462213 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:32:54.462153 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdmkr\" (UniqueName: \"kubernetes.io/projected/1bc0f098-a86d-4d55-9c5e-1a36edbda04f-kube-api-access-zdmkr\") pod \"1bc0f098-a86d-4d55-9c5e-1a36edbda04f\" (UID: \"1bc0f098-a86d-4d55-9c5e-1a36edbda04f\") "
Apr 23 18:32:54.462213 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:32:54.462189 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"error-404-isvc-7efef-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/1bc0f098-a86d-4d55-9c5e-1a36edbda04f-error-404-isvc-7efef-kube-rbac-proxy-sar-config\") pod \"1bc0f098-a86d-4d55-9c5e-1a36edbda04f\" (UID: \"1bc0f098-a86d-4d55-9c5e-1a36edbda04f\") "
Apr 23 18:32:54.462585 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:32:54.462556 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bc0f098-a86d-4d55-9c5e-1a36edbda04f-error-404-isvc-7efef-kube-rbac-proxy-sar-config" (OuterVolumeSpecName: "error-404-isvc-7efef-kube-rbac-proxy-sar-config") pod "1bc0f098-a86d-4d55-9c5e-1a36edbda04f" (UID: "1bc0f098-a86d-4d55-9c5e-1a36edbda04f"). InnerVolumeSpecName "error-404-isvc-7efef-kube-rbac-proxy-sar-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 18:32:54.464524 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:32:54.464501 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bc0f098-a86d-4d55-9c5e-1a36edbda04f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "1bc0f098-a86d-4d55-9c5e-1a36edbda04f" (UID: "1bc0f098-a86d-4d55-9c5e-1a36edbda04f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 18:32:54.464604 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:32:54.464504 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bc0f098-a86d-4d55-9c5e-1a36edbda04f-kube-api-access-zdmkr" (OuterVolumeSpecName: "kube-api-access-zdmkr") pod "1bc0f098-a86d-4d55-9c5e-1a36edbda04f" (UID: "1bc0f098-a86d-4d55-9c5e-1a36edbda04f"). InnerVolumeSpecName "kube-api-access-zdmkr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 23 18:32:54.562826 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:32:54.562792 2566 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1bc0f098-a86d-4d55-9c5e-1a36edbda04f-proxy-tls\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\""
Apr 23 18:32:54.562826 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:32:54.562821 2566 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zdmkr\" (UniqueName: \"kubernetes.io/projected/1bc0f098-a86d-4d55-9c5e-1a36edbda04f-kube-api-access-zdmkr\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\""
Apr 23 18:32:54.562826 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:32:54.562834 2566 reconciler_common.go:299] "Volume detached for volume \"error-404-isvc-7efef-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/1bc0f098-a86d-4d55-9c5e-1a36edbda04f-error-404-isvc-7efef-kube-rbac-proxy-sar-config\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\""
Apr 23 18:32:54.972033 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:32:54.971994 2566 generic.go:358] "Generic (PLEG): container finished" podID="1bc0f098-a86d-4d55-9c5e-1a36edbda04f" containerID="3f2d6d231fe34e068040c0b8e9938f9a32cd30838102c80c0ab9956dd76be13a" exitCode=0
Apr 23 18:32:54.972219 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:32:54.972066 2566 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4"
Apr 23 18:32:54.972219 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:32:54.972073 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4" event={"ID":"1bc0f098-a86d-4d55-9c5e-1a36edbda04f","Type":"ContainerDied","Data":"3f2d6d231fe34e068040c0b8e9938f9a32cd30838102c80c0ab9956dd76be13a"}
Apr 23 18:32:54.972219 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:32:54.972108 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4" event={"ID":"1bc0f098-a86d-4d55-9c5e-1a36edbda04f","Type":"ContainerDied","Data":"7d1ed9f9d7e37c4cad3e8b81dd1c58dec1c09114261df2cafcbe1a6bc9cd3a06"}
Apr 23 18:32:54.972219 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:32:54.972123 2566 scope.go:117] "RemoveContainer" containerID="84309d98e080905a78382edf3e3f1c7a2c1094d62678bd7e7b95c5d976a9c8cf"
Apr 23 18:32:54.980942 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:32:54.980926 2566 scope.go:117] "RemoveContainer" containerID="3f2d6d231fe34e068040c0b8e9938f9a32cd30838102c80c0ab9956dd76be13a"
Apr 23 18:32:54.988445 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:32:54.988429 2566 scope.go:117] "RemoveContainer" containerID="84309d98e080905a78382edf3e3f1c7a2c1094d62678bd7e7b95c5d976a9c8cf"
Apr 23 18:32:54.988690 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:32:54.988672 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84309d98e080905a78382edf3e3f1c7a2c1094d62678bd7e7b95c5d976a9c8cf\": container with ID starting with 84309d98e080905a78382edf3e3f1c7a2c1094d62678bd7e7b95c5d976a9c8cf not found: ID does not exist" containerID="84309d98e080905a78382edf3e3f1c7a2c1094d62678bd7e7b95c5d976a9c8cf"
Apr 23 18:32:54.988731 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:32:54.988699 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84309d98e080905a78382edf3e3f1c7a2c1094d62678bd7e7b95c5d976a9c8cf"} err="failed to get container status \"84309d98e080905a78382edf3e3f1c7a2c1094d62678bd7e7b95c5d976a9c8cf\": rpc error: code = NotFound desc = could not find container \"84309d98e080905a78382edf3e3f1c7a2c1094d62678bd7e7b95c5d976a9c8cf\": container with ID starting with 84309d98e080905a78382edf3e3f1c7a2c1094d62678bd7e7b95c5d976a9c8cf not found: ID does not exist"
Apr 23 18:32:54.988731 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:32:54.988717 2566 scope.go:117] "RemoveContainer" containerID="3f2d6d231fe34e068040c0b8e9938f9a32cd30838102c80c0ab9956dd76be13a"
Apr 23 18:32:54.988919 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:32:54.988903 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f2d6d231fe34e068040c0b8e9938f9a32cd30838102c80c0ab9956dd76be13a\": container with ID starting with 3f2d6d231fe34e068040c0b8e9938f9a32cd30838102c80c0ab9956dd76be13a not found: ID does not exist" containerID="3f2d6d231fe34e068040c0b8e9938f9a32cd30838102c80c0ab9956dd76be13a"
Apr 23 18:32:54.988957 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:32:54.988926 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f2d6d231fe34e068040c0b8e9938f9a32cd30838102c80c0ab9956dd76be13a"} err="failed to get container status \"3f2d6d231fe34e068040c0b8e9938f9a32cd30838102c80c0ab9956dd76be13a\": rpc error: code = NotFound desc = could not find container \"3f2d6d231fe34e068040c0b8e9938f9a32cd30838102c80c0ab9956dd76be13a\": container with ID starting with 3f2d6d231fe34e068040c0b8e9938f9a32cd30838102c80c0ab9956dd76be13a not found: ID does not exist"
Apr 23 18:32:54.994443 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:32:54.994415 2566 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4"]
Apr 23 18:32:55.000524 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:32:55.000503 2566 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-7efef-predictor-86c486dbf-v2nw4"]
Apr 23 18:32:55.254948 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:32:55.254870 2566 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bc0f098-a86d-4d55-9c5e-1a36edbda04f" path="/var/lib/kubelet/pods/1bc0f098-a86d-4d55-9c5e-1a36edbda04f/volumes"
Apr 23 18:37:21.354287 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:37:21.354179 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-5kpdc_9334253b-6eff-4ad7-9cc7-5d96bdb994ad/console-operator/1.log"
Apr 23 18:37:21.360908 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:37:21.360886 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_7fc0473024b4c48d914a6628102ac7a2/kube-rbac-proxy-crio/4.log"
Apr 23 18:37:21.364972 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:37:21.364953 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-5kpdc_9334253b-6eff-4ad7-9cc7-5d96bdb994ad/console-operator/1.log"
Apr 23 18:37:21.371438 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:37:21.371418 2566 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-172.ec2.internal_7fc0473024b4c48d914a6628102ac7a2/kube-rbac-proxy-crio/4.log" Apr 23 18:40:35.248843 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:35.248758 2566 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt"] Apr 23 18:40:35.249441 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:35.249128 2566 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt" podUID="eb47bb72-d086-43d1-9dce-8f0a51cbcc01" containerName="kube-rbac-proxy" containerID="cri-o://e14d60e69bc1589a7cef04f0a150234bb6f4181a12af359eb8a76ec5fe0c3058" gracePeriod=30 Apr 23 18:40:35.249441 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:35.249145 2566 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt" podUID="eb47bb72-d086-43d1-9dce-8f0a51cbcc01" containerName="kserve-container" containerID="cri-o://5b69945e138ce20428459b6fa3a4f2e102276b48004a1bcc2275b3700fd88b7f" gracePeriod=30 Apr 23 18:40:35.564973 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:35.564946 2566 generic.go:358] "Generic (PLEG): container finished" podID="eb47bb72-d086-43d1-9dce-8f0a51cbcc01" containerID="e14d60e69bc1589a7cef04f0a150234bb6f4181a12af359eb8a76ec5fe0c3058" exitCode=2 Apr 23 18:40:35.565148 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:35.565000 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt" event={"ID":"eb47bb72-d086-43d1-9dce-8f0a51cbcc01","Type":"ContainerDied","Data":"e14d60e69bc1589a7cef04f0a150234bb6f4181a12af359eb8a76ec5fe0c3058"} Apr 23 18:40:36.182185 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:36.182151 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wwc5c/must-gather-jmckl"] Apr 23 18:40:36.182556 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:36.182543 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1bc0f098-a86d-4d55-9c5e-1a36edbda04f" containerName="kube-rbac-proxy" Apr 23 18:40:36.182617 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:36.182558 2566 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bc0f098-a86d-4d55-9c5e-1a36edbda04f" containerName="kube-rbac-proxy" Apr 23 18:40:36.182617 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:36.182585 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1bc0f098-a86d-4d55-9c5e-1a36edbda04f" containerName="kserve-container" Apr 23 18:40:36.182617 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:36.182590 2566 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bc0f098-a86d-4d55-9c5e-1a36edbda04f" containerName="kserve-container" Apr 23 18:40:36.182617 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:36.182597 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="645360c6-749d-41eb-9e30-9ba98e4a59c6" containerName="kube-rbac-proxy" Apr 23 18:40:36.182617 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:36.182603 2566 state_mem.go:107] "Deleted CPUSet assignment" podUID="645360c6-749d-41eb-9e30-9ba98e4a59c6" containerName="kube-rbac-proxy" Apr 23 18:40:36.182617 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:36.182615 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="645360c6-749d-41eb-9e30-9ba98e4a59c6" containerName="kserve-container" Apr 23 18:40:36.182617 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:36.182620 2566 state_mem.go:107] "Deleted CPUSet assignment" podUID="645360c6-749d-41eb-9e30-9ba98e4a59c6" containerName="kserve-container" Apr 23 18:40:36.182835 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:36.182673 2566 memory_manager.go:356] "RemoveStaleState removing state" podUID="645360c6-749d-41eb-9e30-9ba98e4a59c6" containerName="kube-rbac-proxy" Apr 23 18:40:36.182835 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:36.182683 2566 memory_manager.go:356] "RemoveStaleState removing state" podUID="1bc0f098-a86d-4d55-9c5e-1a36edbda04f" containerName="kube-rbac-proxy" Apr 23 18:40:36.182835 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:36.182690 2566 memory_manager.go:356] "RemoveStaleState removing state" podUID="645360c6-749d-41eb-9e30-9ba98e4a59c6" containerName="kserve-container" Apr 23 18:40:36.182835 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:36.182695 2566 memory_manager.go:356] "RemoveStaleState removing state" podUID="1bc0f098-a86d-4d55-9c5e-1a36edbda04f" containerName="kserve-container" Apr 23 18:40:36.185851 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:36.185825 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wwc5c/must-gather-jmckl" Apr 23 18:40:36.187918 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:36.187896 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-wwc5c\"/\"default-dockercfg-z5mv7\"" Apr 23 18:40:36.188485 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:36.188467 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-wwc5c\"/\"kube-root-ca.crt\"" Apr 23 18:40:36.188485 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:36.188478 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-wwc5c\"/\"openshift-service-ca.crt\"" Apr 23 18:40:36.203049 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:36.203027 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wwc5c/must-gather-jmckl"] Apr 23 18:40:36.244099 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:36.244072 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5140c85f-e11f-4baa-a0fa-fb5bd59d72d9-must-gather-output\") pod \"must-gather-jmckl\" (UID: \"5140c85f-e11f-4baa-a0fa-fb5bd59d72d9\") " pod="openshift-must-gather-wwc5c/must-gather-jmckl" Apr 23 18:40:36.244229 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:36.244124 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrjjl\" (UniqueName: \"kubernetes.io/projected/5140c85f-e11f-4baa-a0fa-fb5bd59d72d9-kube-api-access-wrjjl\") pod \"must-gather-jmckl\" (UID: \"5140c85f-e11f-4baa-a0fa-fb5bd59d72d9\") " pod="openshift-must-gather-wwc5c/must-gather-jmckl" Apr 23 18:40:36.345324 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:36.345270 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5140c85f-e11f-4baa-a0fa-fb5bd59d72d9-must-gather-output\") pod \"must-gather-jmckl\" (UID: \"5140c85f-e11f-4baa-a0fa-fb5bd59d72d9\") " pod="openshift-must-gather-wwc5c/must-gather-jmckl" Apr 23 18:40:36.345701 
ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:36.345365 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wrjjl\" (UniqueName: \"kubernetes.io/projected/5140c85f-e11f-4baa-a0fa-fb5bd59d72d9-kube-api-access-wrjjl\") pod \"must-gather-jmckl\" (UID: \"5140c85f-e11f-4baa-a0fa-fb5bd59d72d9\") " pod="openshift-must-gather-wwc5c/must-gather-jmckl" Apr 23 18:40:36.345701 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:36.345597 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5140c85f-e11f-4baa-a0fa-fb5bd59d72d9-must-gather-output\") pod \"must-gather-jmckl\" (UID: \"5140c85f-e11f-4baa-a0fa-fb5bd59d72d9\") " pod="openshift-must-gather-wwc5c/must-gather-jmckl" Apr 23 18:40:36.356378 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:36.356352 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrjjl\" (UniqueName: \"kubernetes.io/projected/5140c85f-e11f-4baa-a0fa-fb5bd59d72d9-kube-api-access-wrjjl\") pod \"must-gather-jmckl\" (UID: \"5140c85f-e11f-4baa-a0fa-fb5bd59d72d9\") " pod="openshift-must-gather-wwc5c/must-gather-jmckl" Apr 23 18:40:36.512519 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:36.512433 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wwc5c/must-gather-jmckl" Apr 23 18:40:36.635019 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:36.634993 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wwc5c/must-gather-jmckl"] Apr 23 18:40:36.637124 ip-10-0-136-172 kubenswrapper[2566]: W0423 18:40:36.637096 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5140c85f_e11f_4baa_a0fa_fb5bd59d72d9.slice/crio-77280ba8565ff6b0a7a59af2e03bcccced6ef1c8a9d9d1ff7ee297b5ecf567a2 WatchSource:0}: Error finding container 77280ba8565ff6b0a7a59af2e03bcccced6ef1c8a9d9d1ff7ee297b5ecf567a2: Status 404 returned error can't find the container with id 77280ba8565ff6b0a7a59af2e03bcccced6ef1c8a9d9d1ff7ee297b5ecf567a2 Apr 23 18:40:36.638893 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:36.638874 2566 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 23 18:40:37.575883 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:37.575826 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wwc5c/must-gather-jmckl" event={"ID":"5140c85f-e11f-4baa-a0fa-fb5bd59d72d9","Type":"ContainerStarted","Data":"77280ba8565ff6b0a7a59af2e03bcccced6ef1c8a9d9d1ff7ee297b5ecf567a2"} Apr 23 18:40:38.938690 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:38.938642 2566 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt" podUID="eb47bb72-d086-43d1-9dce-8f0a51cbcc01" containerName="kube-rbac-proxy" probeResult="failure" output="Get \"https://10.133.0.44:8643/healthz\": dial tcp 10.133.0.44:8643: connect: connection refused" Apr 23 18:40:40.544734 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:40.544705 2566 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt" Apr 23 18:40:40.591216 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:40.591176 2566 generic.go:358] "Generic (PLEG): container finished" podID="eb47bb72-d086-43d1-9dce-8f0a51cbcc01" containerID="5b69945e138ce20428459b6fa3a4f2e102276b48004a1bcc2275b3700fd88b7f" exitCode=0 Apr 23 18:40:40.591420 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:40.591270 2566 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt" Apr 23 18:40:40.591420 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:40.591266 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt" event={"ID":"eb47bb72-d086-43d1-9dce-8f0a51cbcc01","Type":"ContainerDied","Data":"5b69945e138ce20428459b6fa3a4f2e102276b48004a1bcc2275b3700fd88b7f"} Apr 23 18:40:40.591420 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:40.591358 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt" event={"ID":"eb47bb72-d086-43d1-9dce-8f0a51cbcc01","Type":"ContainerDied","Data":"a12e88eb4fd69fcd886a12fd843f044b5bb83238b47e7a917c1484841bb09908"} Apr 23 18:40:40.591420 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:40.591381 2566 scope.go:117] "RemoveContainer" containerID="e14d60e69bc1589a7cef04f0a150234bb6f4181a12af359eb8a76ec5fe0c3058" Apr 23 18:40:40.688874 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:40.688830 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"error-404-isvc-87aee-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/eb47bb72-d086-43d1-9dce-8f0a51cbcc01-error-404-isvc-87aee-kube-rbac-proxy-sar-config\") pod \"eb47bb72-d086-43d1-9dce-8f0a51cbcc01\" (UID: \"eb47bb72-d086-43d1-9dce-8f0a51cbcc01\") " Apr 23 18:40:40.689074 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:40.688900 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eb47bb72-d086-43d1-9dce-8f0a51cbcc01-proxy-tls\") pod \"eb47bb72-d086-43d1-9dce-8f0a51cbcc01\" (UID: \"eb47bb72-d086-43d1-9dce-8f0a51cbcc01\") " Apr 23 18:40:40.689074 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:40.688942 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtk84\" (UniqueName: \"kubernetes.io/projected/eb47bb72-d086-43d1-9dce-8f0a51cbcc01-kube-api-access-xtk84\") pod \"eb47bb72-d086-43d1-9dce-8f0a51cbcc01\" (UID: \"eb47bb72-d086-43d1-9dce-8f0a51cbcc01\") " Apr 23 18:40:40.689326 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:40.689281 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb47bb72-d086-43d1-9dce-8f0a51cbcc01-error-404-isvc-87aee-kube-rbac-proxy-sar-config" (OuterVolumeSpecName: "error-404-isvc-87aee-kube-rbac-proxy-sar-config") pod "eb47bb72-d086-43d1-9dce-8f0a51cbcc01" (UID: "eb47bb72-d086-43d1-9dce-8f0a51cbcc01"). InnerVolumeSpecName "error-404-isvc-87aee-kube-rbac-proxy-sar-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:40:40.691627 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:40.691592 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb47bb72-d086-43d1-9dce-8f0a51cbcc01-kube-api-access-xtk84" (OuterVolumeSpecName: "kube-api-access-xtk84") pod "eb47bb72-d086-43d1-9dce-8f0a51cbcc01" (UID: "eb47bb72-d086-43d1-9dce-8f0a51cbcc01"). InnerVolumeSpecName "kube-api-access-xtk84". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 18:40:40.691757 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:40.691647 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb47bb72-d086-43d1-9dce-8f0a51cbcc01-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "eb47bb72-d086-43d1-9dce-8f0a51cbcc01" (UID: "eb47bb72-d086-43d1-9dce-8f0a51cbcc01"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 18:40:40.755176 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:40.755019 2566 scope.go:117] "RemoveContainer" containerID="5b69945e138ce20428459b6fa3a4f2e102276b48004a1bcc2275b3700fd88b7f" Apr 23 18:40:40.762765 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:40.762748 2566 scope.go:117] "RemoveContainer" containerID="e14d60e69bc1589a7cef04f0a150234bb6f4181a12af359eb8a76ec5fe0c3058" Apr 23 18:40:40.763055 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:40:40.763033 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e14d60e69bc1589a7cef04f0a150234bb6f4181a12af359eb8a76ec5fe0c3058\": container with ID starting with e14d60e69bc1589a7cef04f0a150234bb6f4181a12af359eb8a76ec5fe0c3058 not found: ID does not exist" containerID="e14d60e69bc1589a7cef04f0a150234bb6f4181a12af359eb8a76ec5fe0c3058" Apr 23 18:40:40.763094 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:40.763065 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e14d60e69bc1589a7cef04f0a150234bb6f4181a12af359eb8a76ec5fe0c3058"} err="failed to get container status \"e14d60e69bc1589a7cef04f0a150234bb6f4181a12af359eb8a76ec5fe0c3058\": rpc error: code = NotFound desc = could not find container \"e14d60e69bc1589a7cef04f0a150234bb6f4181a12af359eb8a76ec5fe0c3058\": container with ID starting with e14d60e69bc1589a7cef04f0a150234bb6f4181a12af359eb8a76ec5fe0c3058 not found: ID does not exist" Apr 23 18:40:40.763094 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:40.763085 2566 scope.go:117] "RemoveContainer" containerID="5b69945e138ce20428459b6fa3a4f2e102276b48004a1bcc2275b3700fd88b7f" Apr 23 18:40:40.763360 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:40:40.763340 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b69945e138ce20428459b6fa3a4f2e102276b48004a1bcc2275b3700fd88b7f\": container with ID starting with 5b69945e138ce20428459b6fa3a4f2e102276b48004a1bcc2275b3700fd88b7f not found: ID does not exist" containerID="5b69945e138ce20428459b6fa3a4f2e102276b48004a1bcc2275b3700fd88b7f" Apr 23 18:40:40.763419 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:40.763367 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b69945e138ce20428459b6fa3a4f2e102276b48004a1bcc2275b3700fd88b7f"} err="failed to get container status \"5b69945e138ce20428459b6fa3a4f2e102276b48004a1bcc2275b3700fd88b7f\": rpc error: code = NotFound desc = could not 
find container \"5b69945e138ce20428459b6fa3a4f2e102276b48004a1bcc2275b3700fd88b7f\": container with ID starting with 5b69945e138ce20428459b6fa3a4f2e102276b48004a1bcc2275b3700fd88b7f not found: ID does not exist" Apr 23 18:40:40.790373 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:40.790347 2566 reconciler_common.go:299] "Volume detached for volume \"error-404-isvc-87aee-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/eb47bb72-d086-43d1-9dce-8f0a51cbcc01-error-404-isvc-87aee-kube-rbac-proxy-sar-config\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 18:40:40.790373 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:40.790376 2566 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eb47bb72-d086-43d1-9dce-8f0a51cbcc01-proxy-tls\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 18:40:40.790526 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:40.790386 2566 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xtk84\" (UniqueName: \"kubernetes.io/projected/eb47bb72-d086-43d1-9dce-8f0a51cbcc01-kube-api-access-xtk84\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 18:40:40.930629 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:40.930603 2566 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt"] Apr 23 18:40:40.933998 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:40.933975 2566 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-87aee-predictor-54b57bf5ff-2g5nt"] Apr 23 18:40:41.255688 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:41.255657 2566 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb47bb72-d086-43d1-9dce-8f0a51cbcc01" path="/var/lib/kubelet/pods/eb47bb72-d086-43d1-9dce-8f0a51cbcc01/volumes" Apr 23 18:40:41.604880 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:41.604837 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wwc5c/must-gather-jmckl" event={"ID":"5140c85f-e11f-4baa-a0fa-fb5bd59d72d9","Type":"ContainerStarted","Data":"c98643633bef8bbeaeccf19eaabcd1b64e83d1fcd76b1fc011cb98adee629d71"} Apr 23 18:40:41.604880 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:41.604880 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wwc5c/must-gather-jmckl" event={"ID":"5140c85f-e11f-4baa-a0fa-fb5bd59d72d9","Type":"ContainerStarted","Data":"22b3aeceffcea2d38f341b7b6bd0c5b9683f1f150f8e6ed52e24ae64f5707d64"} Apr 23 18:40:41.621117 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:40:41.621070 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wwc5c/must-gather-jmckl" podStartSLOduration=1.445143673 podStartE2EDuration="5.62105584s" podCreationTimestamp="2026-04-23 18:40:36 +0000 UTC" firstStartedPulling="2026-04-23 18:40:36.639038552 +0000 UTC m=+2895.934317127" lastFinishedPulling="2026-04-23 18:40:40.814950717 +0000 UTC m=+2900.110229294" observedRunningTime="2026-04-23 18:40:41.619917301 +0000 UTC m=+2900.915195899" watchObservedRunningTime="2026-04-23 18:40:41.62105584 +0000 UTC m=+2900.916334440" Apr 23 18:41:01.685624 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:01.685589 2566 generic.go:358] "Generic (PLEG): container finished" podID="5140c85f-e11f-4baa-a0fa-fb5bd59d72d9" containerID="22b3aeceffcea2d38f341b7b6bd0c5b9683f1f150f8e6ed52e24ae64f5707d64" exitCode=0 Apr 23 18:41:01.686044 ip-10-0-136-172 
kubenswrapper[2566]: I0423 18:41:01.685677 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wwc5c/must-gather-jmckl" event={"ID":"5140c85f-e11f-4baa-a0fa-fb5bd59d72d9","Type":"ContainerDied","Data":"22b3aeceffcea2d38f341b7b6bd0c5b9683f1f150f8e6ed52e24ae64f5707d64"} Apr 23 18:41:01.686094 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:01.686050 2566 scope.go:117] "RemoveContainer" containerID="22b3aeceffcea2d38f341b7b6bd0c5b9683f1f150f8e6ed52e24ae64f5707d64" Apr 23 18:41:02.545367 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:02.545336 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wwc5c_must-gather-jmckl_5140c85f-e11f-4baa-a0fa-fb5bd59d72d9/gather/0.log" Apr 23 18:41:05.997597 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:05.997566 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_global-pull-secret-syncer-jhvgn_bf59011d-e01e-49f9-b468-33af8f5a6489/global-pull-secret-syncer/0.log" Apr 23 18:41:06.146003 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:06.145970 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_konnectivity-agent-kjt2w_8f79dd76-5ae2-47b7-bd62-86d231ac80ff/konnectivity-agent/0.log" Apr 23 18:41:06.219514 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:06.219484 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_kube-apiserver-proxy-ip-10-0-136-172.ec2.internal_feca641f7e256521d5e07f060738f192/haproxy/0.log" Apr 23 18:41:08.034557 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:08.034518 2566 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wwc5c/must-gather-jmckl"] Apr 23 18:41:08.034972 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:08.034729 2566 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-must-gather-wwc5c/must-gather-jmckl" podUID="5140c85f-e11f-4baa-a0fa-fb5bd59d72d9" containerName="copy" containerID="cri-o://c98643633bef8bbeaeccf19eaabcd1b64e83d1fcd76b1fc011cb98adee629d71" gracePeriod=2 Apr 23 18:41:08.039134 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:08.038885 2566 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wwc5c/must-gather-jmckl"] Apr 23 18:41:08.267856 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:08.267834 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wwc5c_must-gather-jmckl_5140c85f-e11f-4baa-a0fa-fb5bd59d72d9/copy/0.log" Apr 23 18:41:08.268205 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:08.268189 2566 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wwc5c/must-gather-jmckl" Apr 23 18:41:08.269799 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:08.269775 2566 status_manager.go:895] "Failed to get status for pod" podUID="5140c85f-e11f-4baa-a0fa-fb5bd59d72d9" pod="openshift-must-gather-wwc5c/must-gather-jmckl" err="pods \"must-gather-jmckl\" is forbidden: User \"system:node:ip-10-0-136-172.ec2.internal\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-wwc5c\": no relationship found between node 'ip-10-0-136-172.ec2.internal' and this object" Apr 23 18:41:08.336004 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:08.335963 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5140c85f-e11f-4baa-a0fa-fb5bd59d72d9-must-gather-output\") pod \"5140c85f-e11f-4baa-a0fa-fb5bd59d72d9\" (UID: \"5140c85f-e11f-4baa-a0fa-fb5bd59d72d9\") " Apr 23 18:41:08.336195 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:08.336052 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrjjl\" (UniqueName: \"kubernetes.io/projected/5140c85f-e11f-4baa-a0fa-fb5bd59d72d9-kube-api-access-wrjjl\") pod \"5140c85f-e11f-4baa-a0fa-fb5bd59d72d9\" (UID: \"5140c85f-e11f-4baa-a0fa-fb5bd59d72d9\") " Apr 23 18:41:08.337513 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:08.337487 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5140c85f-e11f-4baa-a0fa-fb5bd59d72d9-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "5140c85f-e11f-4baa-a0fa-fb5bd59d72d9" (UID: "5140c85f-e11f-4baa-a0fa-fb5bd59d72d9"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 23 18:41:08.338397 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:08.338373 2566 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5140c85f-e11f-4baa-a0fa-fb5bd59d72d9-kube-api-access-wrjjl" (OuterVolumeSpecName: "kube-api-access-wrjjl") pod "5140c85f-e11f-4baa-a0fa-fb5bd59d72d9" (UID: "5140c85f-e11f-4baa-a0fa-fb5bd59d72d9"). InnerVolumeSpecName "kube-api-access-wrjjl". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 18:41:08.436516 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:08.436475 2566 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wrjjl\" (UniqueName: \"kubernetes.io/projected/5140c85f-e11f-4baa-a0fa-fb5bd59d72d9-kube-api-access-wrjjl\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 18:41:08.436516 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:08.436507 2566 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5140c85f-e11f-4baa-a0fa-fb5bd59d72d9-must-gather-output\") on node \"ip-10-0-136-172.ec2.internal\" DevicePath \"\"" Apr 23 18:41:08.715276 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:08.715192 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wwc5c_must-gather-jmckl_5140c85f-e11f-4baa-a0fa-fb5bd59d72d9/copy/0.log" Apr 23 18:41:08.715592 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:08.715568 2566 generic.go:358] "Generic (PLEG): container finished" podID="5140c85f-e11f-4baa-a0fa-fb5bd59d72d9" containerID="c98643633bef8bbeaeccf19eaabcd1b64e83d1fcd76b1fc011cb98adee629d71" exitCode=143 Apr 23 18:41:08.715659 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:08.715620 2566 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wwc5c/must-gather-jmckl" Apr 23 18:41:08.715714 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:08.715670 2566 scope.go:117] "RemoveContainer" containerID="c98643633bef8bbeaeccf19eaabcd1b64e83d1fcd76b1fc011cb98adee629d71" Apr 23 18:41:08.717415 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:08.717384 2566 status_manager.go:895] "Failed to get status for pod" podUID="5140c85f-e11f-4baa-a0fa-fb5bd59d72d9" pod="openshift-must-gather-wwc5c/must-gather-jmckl" err="pods \"must-gather-jmckl\" is forbidden: User \"system:node:ip-10-0-136-172.ec2.internal\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-wwc5c\": no relationship found between node 'ip-10-0-136-172.ec2.internal' and this object" Apr 23 18:41:08.724434 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:08.724418 2566 scope.go:117] "RemoveContainer" containerID="22b3aeceffcea2d38f341b7b6bd0c5b9683f1f150f8e6ed52e24ae64f5707d64" Apr 23 18:41:08.725894 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:08.725865 2566 status_manager.go:895] "Failed to get status for pod" podUID="5140c85f-e11f-4baa-a0fa-fb5bd59d72d9" pod="openshift-must-gather-wwc5c/must-gather-jmckl" err="pods \"must-gather-jmckl\" is forbidden: User \"system:node:ip-10-0-136-172.ec2.internal\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-wwc5c\": no relationship found between node 'ip-10-0-136-172.ec2.internal' and this object" Apr 23 18:41:08.736298 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:08.736278 2566 scope.go:117] "RemoveContainer" containerID="c98643633bef8bbeaeccf19eaabcd1b64e83d1fcd76b1fc011cb98adee629d71" Apr 23 18:41:08.736642 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:41:08.736621 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c98643633bef8bbeaeccf19eaabcd1b64e83d1fcd76b1fc011cb98adee629d71\": container with ID starting with c98643633bef8bbeaeccf19eaabcd1b64e83d1fcd76b1fc011cb98adee629d71 not found: ID does not exist" containerID="c98643633bef8bbeaeccf19eaabcd1b64e83d1fcd76b1fc011cb98adee629d71" Apr 23 
18:41:08.736694 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:08.736650 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c98643633bef8bbeaeccf19eaabcd1b64e83d1fcd76b1fc011cb98adee629d71"} err="failed to get container status \"c98643633bef8bbeaeccf19eaabcd1b64e83d1fcd76b1fc011cb98adee629d71\": rpc error: code = NotFound desc = could not find container \"c98643633bef8bbeaeccf19eaabcd1b64e83d1fcd76b1fc011cb98adee629d71\": container with ID starting with c98643633bef8bbeaeccf19eaabcd1b64e83d1fcd76b1fc011cb98adee629d71 not found: ID does not exist" Apr 23 18:41:08.736694 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:08.736669 2566 scope.go:117] "RemoveContainer" containerID="22b3aeceffcea2d38f341b7b6bd0c5b9683f1f150f8e6ed52e24ae64f5707d64" Apr 23 18:41:08.736897 ip-10-0-136-172 kubenswrapper[2566]: E0423 18:41:08.736877 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22b3aeceffcea2d38f341b7b6bd0c5b9683f1f150f8e6ed52e24ae64f5707d64\": container with ID starting with 22b3aeceffcea2d38f341b7b6bd0c5b9683f1f150f8e6ed52e24ae64f5707d64 not found: ID does not exist" containerID="22b3aeceffcea2d38f341b7b6bd0c5b9683f1f150f8e6ed52e24ae64f5707d64" Apr 23 18:41:08.736938 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:08.736906 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22b3aeceffcea2d38f341b7b6bd0c5b9683f1f150f8e6ed52e24ae64f5707d64"} err="failed to get container status \"22b3aeceffcea2d38f341b7b6bd0c5b9683f1f150f8e6ed52e24ae64f5707d64\": rpc error: code = NotFound desc = could not find container \"22b3aeceffcea2d38f341b7b6bd0c5b9683f1f150f8e6ed52e24ae64f5707d64\": container with ID starting with 22b3aeceffcea2d38f341b7b6bd0c5b9683f1f150f8e6ed52e24ae64f5707d64 not found: ID does not exist" Apr 23 18:41:09.256089 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:09.256056 2566 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5140c85f-e11f-4baa-a0fa-fb5bd59d72d9" path="/var/lib/kubelet/pods/5140c85f-e11f-4baa-a0fa-fb5bd59d72d9/volumes" Apr 23 18:41:09.636485 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:09.636450 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_ba35557c-7e83-4e76-966c-6bd98124864c/alertmanager/0.log" Apr 23 18:41:09.659151 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:09.659121 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_ba35557c-7e83-4e76-966c-6bd98124864c/config-reloader/0.log" Apr 23 18:41:09.682006 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:09.681976 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_ba35557c-7e83-4e76-966c-6bd98124864c/kube-rbac-proxy-web/0.log" Apr 23 18:41:09.705167 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:09.705144 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_ba35557c-7e83-4e76-966c-6bd98124864c/kube-rbac-proxy/0.log" Apr 23 18:41:09.727550 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:09.727515 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_ba35557c-7e83-4e76-966c-6bd98124864c/kube-rbac-proxy-metric/0.log" Apr 23 18:41:09.749869 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:09.749841 2566 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-monitoring_alertmanager-main-0_ba35557c-7e83-4e76-966c-6bd98124864c/prom-label-proxy/0.log" Apr 23 18:41:09.773973 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:09.773949 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_ba35557c-7e83-4e76-966c-6bd98124864c/init-config-reloader/0.log" Apr 23 18:41:09.811846 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:09.811818 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_cluster-monitoring-operator-75587bd455-bl4fl_1992d43a-7589-4ec9-b815-8a2c284b237c/cluster-monitoring-operator/0.log" Apr 23 18:41:09.837978 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:09.837953 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-69db897b98-5x2l6_d716c310-cac3-4f4a-9142-7e64ec9b5023/kube-state-metrics/0.log" Apr 23 18:41:09.869610 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:09.869549 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-69db897b98-5x2l6_d716c310-cac3-4f4a-9142-7e64ec9b5023/kube-rbac-proxy-main/0.log" Apr 23 18:41:09.896354 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:09.896265 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-69db897b98-5x2l6_d716c310-cac3-4f4a-9142-7e64ec9b5023/kube-rbac-proxy-self/0.log" Apr 23 18:41:10.199292 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:10.199223 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-q6dlk_0b66c287-b88d-4f3f-8d42-f4162338bc96/node-exporter/0.log" Apr 23 18:41:10.224202 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:10.224180 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-q6dlk_0b66c287-b88d-4f3f-8d42-f4162338bc96/kube-rbac-proxy/0.log" Apr 23 18:41:10.249715 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:10.249693 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-q6dlk_0b66c287-b88d-4f3f-8d42-f4162338bc96/init-textfile/0.log" Apr 23 18:41:10.282437 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:10.282409 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-9d44df66c-8jw2j_a2590fc7-19e5-4364-9e78-dd69392e0609/kube-rbac-proxy-main/0.log" Apr 23 18:41:10.307181 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:10.307153 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-9d44df66c-8jw2j_a2590fc7-19e5-4364-9e78-dd69392e0609/kube-rbac-proxy-self/0.log" Apr 23 18:41:10.335647 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:10.335623 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-9d44df66c-8jw2j_a2590fc7-19e5-4364-9e78-dd69392e0609/openshift-state-metrics/0.log" Apr 23 18:41:10.632037 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:10.632010 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-5676c8c784-z9wc5_9d01bf3e-4061-4f32-a69a-11d933d7b9bc/prometheus-operator/0.log" Apr 23 18:41:10.658910 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:10.658878 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-5676c8c784-z9wc5_9d01bf3e-4061-4f32-a69a-11d933d7b9bc/kube-rbac-proxy/0.log" Apr 23 
18:41:10.688146 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:10.688122 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-admission-webhook-57cf98b594-qqj6h_8d0b2147-611c-458a-9d92-eae8e9e49ad0/prometheus-operator-admission-webhook/0.log" Apr 23 18:41:10.803463 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:10.803438 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-59ffcb8856-jbbq9_c90f8891-3148-4c39-8562-85ceb05c9358/thanos-query/0.log" Apr 23 18:41:10.833473 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:10.833440 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-59ffcb8856-jbbq9_c90f8891-3148-4c39-8562-85ceb05c9358/kube-rbac-proxy-web/0.log" Apr 23 18:41:10.868976 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:10.868947 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-59ffcb8856-jbbq9_c90f8891-3148-4c39-8562-85ceb05c9358/kube-rbac-proxy/0.log" Apr 23 18:41:10.896551 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:10.896475 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-59ffcb8856-jbbq9_c90f8891-3148-4c39-8562-85ceb05c9358/prom-label-proxy/0.log" Apr 23 18:41:10.918281 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:10.918256 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-59ffcb8856-jbbq9_c90f8891-3148-4c39-8562-85ceb05c9358/kube-rbac-proxy-rules/0.log" Apr 23 18:41:10.950093 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:10.950072 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-59ffcb8856-jbbq9_c90f8891-3148-4c39-8562-85ceb05c9358/kube-rbac-proxy-metrics/0.log" Apr 23 18:41:12.012453 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:12.012426 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-console_networking-console-plugin-cb95c66f6-jwhmv_fc849c85-296b-4ebd-9bd4-27f9edfd3785/networking-console-plugin/0.log" Apr 23 18:41:12.454695 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:12.454662 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-5kpdc_9334253b-6eff-4ad7-9cc7-5d96bdb994ad/console-operator/1.log" Apr 23 18:41:12.459343 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:12.459322 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-5kpdc_9334253b-6eff-4ad7-9cc7-5d96bdb994ad/console-operator/2.log" Apr 23 18:41:12.900501 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:12.900470 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-6bcc868b7-8dvlx_f54a175e-d59b-46e9-b245-82f3b11123d9/download-server/0.log" Apr 23 18:41:12.968913 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:12.968881 2566 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hqhtf/perf-node-gather-daemonset-2wfrr"] Apr 23 18:41:12.969263 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:12.969251 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5140c85f-e11f-4baa-a0fa-fb5bd59d72d9" containerName="copy" Apr 23 18:41:12.969339 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:12.969264 2566 state_mem.go:107] "Deleted CPUSet assignment" podUID="5140c85f-e11f-4baa-a0fa-fb5bd59d72d9" 
containerName="copy" Apr 23 18:41:12.969339 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:12.969288 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="eb47bb72-d086-43d1-9dce-8f0a51cbcc01" containerName="kserve-container" Apr 23 18:41:12.969339 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:12.969294 2566 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb47bb72-d086-43d1-9dce-8f0a51cbcc01" containerName="kserve-container" Apr 23 18:41:12.969339 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:12.969325 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5140c85f-e11f-4baa-a0fa-fb5bd59d72d9" containerName="gather" Apr 23 18:41:12.969339 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:12.969334 2566 state_mem.go:107] "Deleted CPUSet assignment" podUID="5140c85f-e11f-4baa-a0fa-fb5bd59d72d9" containerName="gather" Apr 23 18:41:12.969516 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:12.969345 2566 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="eb47bb72-d086-43d1-9dce-8f0a51cbcc01" containerName="kube-rbac-proxy" Apr 23 18:41:12.969516 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:12.969350 2566 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb47bb72-d086-43d1-9dce-8f0a51cbcc01" containerName="kube-rbac-proxy" Apr 23 18:41:12.969516 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:12.969423 2566 memory_manager.go:356] "RemoveStaleState removing state" podUID="eb47bb72-d086-43d1-9dce-8f0a51cbcc01" containerName="kube-rbac-proxy" Apr 23 18:41:12.969516 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:12.969430 2566 memory_manager.go:356] "RemoveStaleState removing state" podUID="5140c85f-e11f-4baa-a0fa-fb5bd59d72d9" containerName="gather" Apr 23 18:41:12.969516 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:12.969439 2566 memory_manager.go:356] "RemoveStaleState removing state" podUID="eb47bb72-d086-43d1-9dce-8f0a51cbcc01" containerName="kserve-container" Apr 23 18:41:12.969516 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:12.969447 2566 memory_manager.go:356] "RemoveStaleState removing state" podUID="5140c85f-e11f-4baa-a0fa-fb5bd59d72d9" containerName="copy" Apr 23 18:41:12.974713 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:12.974692 2566 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-2wfrr" Apr 23 18:41:12.977071 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:12.977049 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-hqhtf\"/\"openshift-service-ca.crt\"" Apr 23 18:41:12.977199 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:12.977129 2566 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-hqhtf\"/\"default-dockercfg-7jg68\"" Apr 23 18:41:12.977552 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:12.977536 2566 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-hqhtf\"/\"kube-root-ca.crt\"" Apr 23 18:41:12.983191 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:12.983169 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hqhtf/perf-node-gather-daemonset-2wfrr"] Apr 23 18:41:13.070762 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:13.070725 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a78fd9f-c7b7-49c3-ae2f-4f5e2d8687c8-lib-modules\") pod \"perf-node-gather-daemonset-2wfrr\" (UID: \"5a78fd9f-c7b7-49c3-ae2f-4f5e2d8687c8\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-2wfrr" Apr 23 18:41:13.070762 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:13.070767 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5a78fd9f-c7b7-49c3-ae2f-4f5e2d8687c8-sys\") pod \"perf-node-gather-daemonset-2wfrr\" (UID: \"5a78fd9f-c7b7-49c3-ae2f-4f5e2d8687c8\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-2wfrr" Apr 23 18:41:13.071190 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:13.070795 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/5a78fd9f-c7b7-49c3-ae2f-4f5e2d8687c8-podres\") pod \"perf-node-gather-daemonset-2wfrr\" (UID: \"5a78fd9f-c7b7-49c3-ae2f-4f5e2d8687c8\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-2wfrr" Apr 23 18:41:13.071190 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:13.070874 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/5a78fd9f-c7b7-49c3-ae2f-4f5e2d8687c8-proc\") pod \"perf-node-gather-daemonset-2wfrr\" (UID: \"5a78fd9f-c7b7-49c3-ae2f-4f5e2d8687c8\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-2wfrr" Apr 23 18:41:13.071190 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:13.070908 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhgww\" (UniqueName: \"kubernetes.io/projected/5a78fd9f-c7b7-49c3-ae2f-4f5e2d8687c8-kube-api-access-mhgww\") pod \"perf-node-gather-daemonset-2wfrr\" (UID: \"5a78fd9f-c7b7-49c3-ae2f-4f5e2d8687c8\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-2wfrr" Apr 23 18:41:13.172374 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:13.172269 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5a78fd9f-c7b7-49c3-ae2f-4f5e2d8687c8-sys\") pod \"perf-node-gather-daemonset-2wfrr\" (UID: \"5a78fd9f-c7b7-49c3-ae2f-4f5e2d8687c8\") " 
pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-2wfrr" Apr 23 18:41:13.172374 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:13.172350 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/5a78fd9f-c7b7-49c3-ae2f-4f5e2d8687c8-podres\") pod \"perf-node-gather-daemonset-2wfrr\" (UID: \"5a78fd9f-c7b7-49c3-ae2f-4f5e2d8687c8\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-2wfrr" Apr 23 18:41:13.172374 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:13.172377 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/5a78fd9f-c7b7-49c3-ae2f-4f5e2d8687c8-proc\") pod \"perf-node-gather-daemonset-2wfrr\" (UID: \"5a78fd9f-c7b7-49c3-ae2f-4f5e2d8687c8\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-2wfrr" Apr 23 18:41:13.172567 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:13.172395 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mhgww\" (UniqueName: \"kubernetes.io/projected/5a78fd9f-c7b7-49c3-ae2f-4f5e2d8687c8-kube-api-access-mhgww\") pod \"perf-node-gather-daemonset-2wfrr\" (UID: \"5a78fd9f-c7b7-49c3-ae2f-4f5e2d8687c8\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-2wfrr" Apr 23 18:41:13.172567 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:13.172411 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5a78fd9f-c7b7-49c3-ae2f-4f5e2d8687c8-sys\") pod \"perf-node-gather-daemonset-2wfrr\" (UID: \"5a78fd9f-c7b7-49c3-ae2f-4f5e2d8687c8\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-2wfrr" Apr 23 18:41:13.172567 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:13.172500 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/5a78fd9f-c7b7-49c3-ae2f-4f5e2d8687c8-proc\") pod \"perf-node-gather-daemonset-2wfrr\" (UID: \"5a78fd9f-c7b7-49c3-ae2f-4f5e2d8687c8\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-2wfrr" Apr 23 18:41:13.172567 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:13.172508 2566 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a78fd9f-c7b7-49c3-ae2f-4f5e2d8687c8-lib-modules\") pod \"perf-node-gather-daemonset-2wfrr\" (UID: \"5a78fd9f-c7b7-49c3-ae2f-4f5e2d8687c8\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-2wfrr" Apr 23 18:41:13.172567 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:13.172527 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/5a78fd9f-c7b7-49c3-ae2f-4f5e2d8687c8-podres\") pod \"perf-node-gather-daemonset-2wfrr\" (UID: \"5a78fd9f-c7b7-49c3-ae2f-4f5e2d8687c8\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-2wfrr" Apr 23 18:41:13.172725 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:13.172602 2566 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a78fd9f-c7b7-49c3-ae2f-4f5e2d8687c8-lib-modules\") pod \"perf-node-gather-daemonset-2wfrr\" (UID: \"5a78fd9f-c7b7-49c3-ae2f-4f5e2d8687c8\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-2wfrr" Apr 23 18:41:13.180747 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:13.180716 2566 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mhgww\" (UniqueName: \"kubernetes.io/projected/5a78fd9f-c7b7-49c3-ae2f-4f5e2d8687c8-kube-api-access-mhgww\") pod \"perf-node-gather-daemonset-2wfrr\" (UID: \"5a78fd9f-c7b7-49c3-ae2f-4f5e2d8687c8\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-2wfrr" Apr 23 18:41:13.285655 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:13.285622 2566 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-2wfrr" Apr 23 18:41:13.353618 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:13.353529 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_volume-data-source-validator-7c6cbb6c87-g4wkz_549ece9d-4598-441f-a940-cecc154fbf7e/volume-data-source-validator/0.log" Apr 23 18:41:13.419846 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:13.419776 2566 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hqhtf/perf-node-gather-daemonset-2wfrr"] Apr 23 18:41:13.422134 ip-10-0-136-172 kubenswrapper[2566]: W0423 18:41:13.422106 2566 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5a78fd9f_c7b7_49c3_ae2f_4f5e2d8687c8.slice/crio-8fe737cf04d95cf30ffafd4ffa030db06cd8211430f4e0d173cb7a5b9da85c02 WatchSource:0}: Error finding container 8fe737cf04d95cf30ffafd4ffa030db06cd8211430f4e0d173cb7a5b9da85c02: Status 404 returned error can't find the container with id 8fe737cf04d95cf30ffafd4ffa030db06cd8211430f4e0d173cb7a5b9da85c02 Apr 23 18:41:13.736045 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:13.735938 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-2wfrr" event={"ID":"5a78fd9f-c7b7-49c3-ae2f-4f5e2d8687c8","Type":"ContainerStarted","Data":"0b379f555f30b0e9487e9adafe7fd5295386ebe5494b592fa091a665ba4ca9a1"} Apr 23 18:41:13.736045 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:13.735988 2566 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-2wfrr" event={"ID":"5a78fd9f-c7b7-49c3-ae2f-4f5e2d8687c8","Type":"ContainerStarted","Data":"8fe737cf04d95cf30ffafd4ffa030db06cd8211430f4e0d173cb7a5b9da85c02"} Apr 23 18:41:13.736045 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:13.736014 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-2wfrr" Apr 23 18:41:13.754217 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:13.754167 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-2wfrr" podStartSLOduration=1.7541537539999998 podStartE2EDuration="1.754153754s" podCreationTimestamp="2026-04-23 18:41:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:41:13.75213393 +0000 UTC m=+2933.047412524" watchObservedRunningTime="2026-04-23 18:41:13.754153754 +0000 UTC m=+2933.049432350" Apr 23 18:41:14.039833 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:14.039809 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-ptrbw_158cc267-e1dc-48e1-90d2-dba2495a9735/dns/0.log" Apr 23 18:41:14.060459 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:14.060434 2566 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-dns_dns-default-ptrbw_158cc267-e1dc-48e1-90d2-dba2495a9735/kube-rbac-proxy/0.log" Apr 23 18:41:14.207179 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:14.207123 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-msx9j_88671ae9-14c3-476e-98a0-61200eda94f5/dns-node-resolver/0.log" Apr 23 18:41:14.637222 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:14.637190 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-d4hwd_bd91136a-6313-4cae-bd06-a32a9ec8e0cb/node-ca/0.log" Apr 23 18:41:15.441555 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:15.441518 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-54ff9bfc64-gddsn_274b9ba8-597e-49dd-9ba0-e1243dc7b259/router/0.log" Apr 23 18:41:15.781333 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:15.781224 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-gm4kb_72010597-3b11-4326-ad5d-3af1af12b593/serve-healthcheck-canary/0.log" Apr 23 18:41:16.189085 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:16.189046 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-585dfdc468-kfcjl_dd76c0f6-b46d-43a0-a71f-55a695fd6d99/insights-operator/1.log" Apr 23 18:41:16.189257 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:16.189147 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-585dfdc468-kfcjl_dd76c0f6-b46d-43a0-a71f-55a695fd6d99/insights-operator/0.log" Apr 23 18:41:16.209573 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:16.209542 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-64p9d_e8184bdb-fe3d-45b0-9c77-72fa68eb4767/kube-rbac-proxy/0.log" Apr 23 18:41:16.231034 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:16.231002 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-64p9d_e8184bdb-fe3d-45b0-9c77-72fa68eb4767/exporter/0.log" Apr 23 18:41:16.251688 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:16.251662 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-64p9d_e8184bdb-fe3d-45b0-9c77-72fa68eb4767/extractor/0.log" Apr 23 18:41:18.766275 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:18.766240 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_odh-model-controller-696fc77849-sxpn7_4bb588be-ae32-4e65-a5f9-3ebc133a9691/manager/0.log" Apr 23 18:41:18.785633 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:18.785604 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_s3-init-pss6t_acda807c-12f0-4da8-9932-7882d0ba9f05/s3-init/0.log" Apr 23 18:41:18.811635 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:18.811601 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_seaweedfs-86cc847c5c-7jdzk_3662d547-b89a-4fd9-a546-64b76599844f/seaweedfs/0.log" Apr 23 18:41:19.748843 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:19.748811 2566 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-2wfrr" Apr 23 18:41:22.971505 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:22.971429 2566 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-storage-version-migrator_migrator-74bb7799d9-mgsjw_45d25647-0ba1-4d11-9101-913fb12b43ac/migrator/0.log" Apr 23 18:41:22.992825 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:22.992796 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator_migrator-74bb7799d9-mgsjw_45d25647-0ba1-4d11-9101-913fb12b43ac/graceful-termination/0.log" Apr 23 18:41:23.313434 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:23.313404 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-6769c5d45-bfm5m_df076eb4-c3f3-4cbf-8cee-a735d1572b5b/kube-storage-version-migrator-operator/1.log" Apr 23 18:41:23.314344 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:23.314291 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-6769c5d45-bfm5m_df076eb4-c3f3-4cbf-8cee-a735d1572b5b/kube-storage-version-migrator-operator/0.log" Apr 23 18:41:24.280246 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:24.280211 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-48gh2_2f0abbbd-0b22-4bf4-828e-8e3f05035c84/kube-multus/0.log" Apr 23 18:41:24.331284 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:24.331207 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-b4p7v_0eebe585-3752-4ef2-ba49-6f427a3ebdce/kube-multus-additional-cni-plugins/0.log" Apr 23 18:41:24.357819 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:24.357794 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-b4p7v_0eebe585-3752-4ef2-ba49-6f427a3ebdce/egress-router-binary-copy/0.log" Apr 23 18:41:24.383056 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:24.383032 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-b4p7v_0eebe585-3752-4ef2-ba49-6f427a3ebdce/cni-plugins/0.log" Apr 23 18:41:24.405173 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:24.405153 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-b4p7v_0eebe585-3752-4ef2-ba49-6f427a3ebdce/bond-cni-plugin/0.log" Apr 23 18:41:24.427030 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:24.427009 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-b4p7v_0eebe585-3752-4ef2-ba49-6f427a3ebdce/routeoverride-cni/0.log" Apr 23 18:41:24.450123 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:24.450102 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-b4p7v_0eebe585-3752-4ef2-ba49-6f427a3ebdce/whereabouts-cni-bincopy/0.log" Apr 23 18:41:24.472195 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:24.472169 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-b4p7v_0eebe585-3752-4ef2-ba49-6f427a3ebdce/whereabouts-cni/0.log" Apr 23 18:41:24.952650 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:24.952608 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-96rvc_ec0108e4-36f5-4959-99b0-8fe6326c7aaa/network-metrics-daemon/0.log" Apr 23 18:41:24.973231 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:24.973208 2566 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_network-metrics-daemon-96rvc_ec0108e4-36f5-4959-99b0-8fe6326c7aaa/kube-rbac-proxy/0.log" Apr 23 18:41:25.790689 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:25.790662 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7wdbp_ca2e53d1-74cd-4370-b1cd-1bb46d1f5076/ovn-controller/0.log" Apr 23 18:41:25.819926 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:25.819882 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7wdbp_ca2e53d1-74cd-4370-b1cd-1bb46d1f5076/ovn-acl-logging/0.log" Apr 23 18:41:25.839294 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:25.839255 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7wdbp_ca2e53d1-74cd-4370-b1cd-1bb46d1f5076/kube-rbac-proxy-node/0.log" Apr 23 18:41:25.862004 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:25.861955 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7wdbp_ca2e53d1-74cd-4370-b1cd-1bb46d1f5076/kube-rbac-proxy-ovn-metrics/0.log" Apr 23 18:41:25.881580 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:25.881557 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7wdbp_ca2e53d1-74cd-4370-b1cd-1bb46d1f5076/northd/0.log" Apr 23 18:41:25.905624 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:25.905602 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7wdbp_ca2e53d1-74cd-4370-b1cd-1bb46d1f5076/nbdb/0.log" Apr 23 18:41:25.928841 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:25.928811 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7wdbp_ca2e53d1-74cd-4370-b1cd-1bb46d1f5076/sbdb/0.log" Apr 23 18:41:26.035762 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:26.035732 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7wdbp_ca2e53d1-74cd-4370-b1cd-1bb46d1f5076/ovnkube-controller/0.log" Apr 23 18:41:27.640981 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:27.640951 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-8894fc9bd-pchtp_790bfe6f-76d8-43c6-a545-a921f86e66cd/check-endpoints/0.log" Apr 23 18:41:27.729064 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:27.729026 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-target-jd2kh_2b7df6cc-2be6-40b1-b7dd-9d8f310a72dc/network-check-target-container/0.log" Apr 23 18:41:28.709648 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:28.709622 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_iptables-alerter-slvjl_a8dcfc70-4d8f-4caa-a6df-98b824d34a78/iptables-alerter/0.log" Apr 23 18:41:29.353930 ip-10-0-136-172 kubenswrapper[2566]: I0423 18:41:29.353902 2566 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-6b8hr_bbd132ba-580f-4003-8b35-f82ad6b7ccf0/tuned/0.log"