Apr 23 17:52:16.061265 ip-10-0-131-177 systemd[1]: Starting Kubernetes Kubelet...
Apr 23 17:52:16.630248 ip-10-0-131-177 kubenswrapper[2565]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 23 17:52:16.630248 ip-10-0-131-177 kubenswrapper[2565]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Apr 23 17:52:16.630248 ip-10-0-131-177 kubenswrapper[2565]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 23 17:52:16.630248 ip-10-0-131-177 kubenswrapper[2565]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 23 17:52:16.630248 ip-10-0-131-177 kubenswrapper[2565]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 23 17:52:16.633232 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.632992 2565 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
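The five deprecation warnings above do not block startup: each flag still works, but upstream wants it moved into the KubeletConfiguration file, which this kubelet loads from /etc/kubernetes/kubelet.conf (see the --config entry in the flag dump below). A rough sketch of the config-file equivalents, using the values visible in this node's own flag dump; the evictionHard value is an illustrative placeholder, since the replacement suggested for --minimum-container-ttl-duration (--eviction-hard/--eviction-soft) has no direct equivalent value in this log:

    # Sketch only: config-file equivalents of the deprecated flags warned about above,
    # with values taken from the FLAG dump later in this log.
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: "/var/run/crio/crio.sock"             # was --container-runtime-endpoint
    volumePluginDir: "/etc/kubernetes/kubelet-plugins/volume/exec"  # was --volume-plugin-dir
    systemReserved:                                                 # was --system-reserved
      cpu: "500m"
      ephemeral-storage: "1Gi"
      memory: "1Gi"
    evictionHard:                     # placeholder value; the suggested replacement for
      memory.available: "100Mi"       # --minimum-container-ttl-duration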
Apr 23 17:52:16.640406 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640379 2565 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 23 17:52:16.640406 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640400 2565 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 23 17:52:16.640406 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640403 2565 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 23 17:52:16.640406 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640406 2565 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 23 17:52:16.640406 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640409 2565 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 23 17:52:16.640406 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640412 2565 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 23 17:52:16.640406 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640415 2565 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 23 17:52:16.640655 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640419 2565 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 23 17:52:16.640655 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640422 2565 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 23 17:52:16.640655 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640424 2565 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 23 17:52:16.640655 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640427 2565 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 23 17:52:16.640655 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640430 2565 feature_gate.go:328] unrecognized feature gate: Example2
Apr 23 17:52:16.640655 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640432 2565 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 23 17:52:16.640655 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640435 2565 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 23 17:52:16.640655 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640437 2565 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 23 17:52:16.640655 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640441 2565 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 23 17:52:16.640655 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640444 2565 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 23 17:52:16.640655 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640447 2565 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 23 17:52:16.640655 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640449 2565 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 23 17:52:16.640655 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640452 2565 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 23 17:52:16.640655 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640454 2565 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 23 17:52:16.640655 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640457 2565 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 23 17:52:16.640655 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640460 2565 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 23 17:52:16.640655 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640465 2565 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 23 17:52:16.640655 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640470 2565 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 23 17:52:16.640655 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640474 2565 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 23 17:52:16.641135 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640477 2565 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 23 17:52:16.641135 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640480 2565 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 23 17:52:16.641135 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640483 2565 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 23 17:52:16.641135 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640486 2565 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 23 17:52:16.641135 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640489 2565 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 23 17:52:16.641135 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640492 2565 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 23 17:52:16.641135 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640495 2565 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 23 17:52:16.641135 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640498 2565 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 23 17:52:16.641135 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640500 2565 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 23 17:52:16.641135 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640503 2565 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 23 17:52:16.641135 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640506 2565 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 23 17:52:16.641135 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640509 2565 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 23 17:52:16.641135 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640512 2565 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 23 17:52:16.641135 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640514 2565 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 23 17:52:16.641135 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640518 2565 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 23 17:52:16.641135 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640521 2565 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 23 17:52:16.641135 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640524 2565 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 23 17:52:16.641135 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640526 2565 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 23 17:52:16.641135 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640529 2565 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 23 17:52:16.641603 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640531 2565 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 23 17:52:16.641603 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640534 2565 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 23 17:52:16.641603 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640536 2565 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 23 17:52:16.641603 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640539 2565 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 23 17:52:16.641603 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640542 2565 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 23 17:52:16.641603 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640545 2565 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 23 17:52:16.641603 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640547 2565 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 23 17:52:16.641603 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640550 2565 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 23 17:52:16.641603 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640552 2565 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 23 17:52:16.641603 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640555 2565 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 23 17:52:16.641603 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640558 2565 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 23 17:52:16.641603 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640561 2565 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 23 17:52:16.641603 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640564 2565 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 23 17:52:16.641603 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640567 2565 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 23 17:52:16.641603 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640569 2565 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 23 17:52:16.641603 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640572 2565 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 23 17:52:16.641603 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640575 2565 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 23 17:52:16.641603 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640577 2565 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 23 17:52:16.641603 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640580 2565 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 23 17:52:16.642079 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640583 2565 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 23 17:52:16.642079 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640586 2565 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 23 17:52:16.642079 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640588 2565 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 23 17:52:16.642079 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640591 2565 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 23 17:52:16.642079 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640595 2565 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 23 17:52:16.642079 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640599 2565 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 23 17:52:16.642079 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640601 2565 feature_gate.go:328] unrecognized feature gate: Example
Apr 23 17:52:16.642079 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640604 2565 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 23 17:52:16.642079 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640607 2565 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 23 17:52:16.642079 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640610 2565 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 23 17:52:16.642079 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640612 2565 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 23 17:52:16.642079 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640616 2565 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 23 17:52:16.642079 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640618 2565 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 23 17:52:16.642079 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640621 2565 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 23 17:52:16.642079 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640624 2565 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 23 17:52:16.642079 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640627 2565 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 23 17:52:16.642079 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640630 2565 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 23 17:52:16.642079 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640633 2565 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 23 17:52:16.642079 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640636 2565 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 23 17:52:16.642079 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640639 2565 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 23 17:52:16.642597 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640642 2565 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 23 17:52:16.642597 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.640644 2565 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 23 17:52:16.642597 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641064 2565 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 23 17:52:16.642597 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641071 2565 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 23 17:52:16.642597 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641074 2565 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 23 17:52:16.642597 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641077 2565 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 23 17:52:16.642597 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641080 2565 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 23 17:52:16.642597 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641083 2565 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 23 17:52:16.642597 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641086 2565 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 23 17:52:16.642597 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641088 2565 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 23 17:52:16.642597 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641092 2565 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 23 17:52:16.642597 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641095 2565 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 23 17:52:16.642597 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641098 2565 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 23 17:52:16.642597 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641101 2565 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 23 17:52:16.642597 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641103 2565 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 23 17:52:16.642597 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641106 2565 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 23 17:52:16.642597 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641108 2565 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 23 17:52:16.642597 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641111 2565 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 23 17:52:16.642597 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641114 2565 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 23 17:52:16.642597 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641116 2565 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 23 17:52:16.643102 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641119 2565 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 23 17:52:16.643102 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641122 2565 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 23 17:52:16.643102 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641124 2565 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 23 17:52:16.643102 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641127 2565 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 23 17:52:16.643102 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641130 2565 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 23 17:52:16.643102 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641132 2565 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 23 17:52:16.643102 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641136 2565 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 23 17:52:16.643102 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641139 2565 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 23 17:52:16.643102 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641142 2565 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 23 17:52:16.643102 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641144 2565 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 23 17:52:16.643102 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641147 2565 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 23 17:52:16.643102 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641150 2565 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 23 17:52:16.643102 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641153 2565 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 23 17:52:16.643102 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641156 2565 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 23 17:52:16.643102 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641159 2565 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 23 17:52:16.643102 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641162 2565 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 23 17:52:16.643102 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641164 2565 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 23 17:52:16.643102 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641168 2565 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 23 17:52:16.643102 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641170 2565 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 23 17:52:16.643102 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641173 2565 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 23 17:52:16.643602 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641175 2565 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 23 17:52:16.643602 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641178 2565 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 23 17:52:16.643602 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641180 2565 feature_gate.go:328] unrecognized feature gate: Example2
Apr 23 17:52:16.643602 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641183 2565 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 23 17:52:16.643602 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641186 2565 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 23 17:52:16.643602 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641188 2565 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 23 17:52:16.643602 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641191 2565 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 23 17:52:16.643602 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641194 2565 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 23 17:52:16.643602 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641196 2565 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 23 17:52:16.643602 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641199 2565 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 23 17:52:16.643602 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641201 2565 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 23 17:52:16.643602 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641205 2565 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 23 17:52:16.643602 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641207 2565 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 23 17:52:16.643602 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641210 2565 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 23 17:52:16.643602 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641213 2565 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 23 17:52:16.643602 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641215 2565 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 23 17:52:16.643602 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641218 2565 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 23 17:52:16.643602 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641220 2565 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 23 17:52:16.643602 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641223 2565 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 23 17:52:16.643602 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641226 2565 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 23 17:52:16.644108 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641228 2565 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 23 17:52:16.644108 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641230 2565 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 23 17:52:16.644108 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641233 2565 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 23 17:52:16.644108 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641236 2565 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 23 17:52:16.644108 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641238 2565 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 23 17:52:16.644108 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641241 2565 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 23 17:52:16.644108 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641244 2565 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 23 17:52:16.644108 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641248 2565 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 23 17:52:16.644108 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641251 2565 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 23 17:52:16.644108 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641254 2565 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 23 17:52:16.644108 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641257 2565 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 23 17:52:16.644108 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641261 2565 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 23 17:52:16.644108 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641264 2565 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 23 17:52:16.644108 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641267 2565 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 23 17:52:16.644108 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641270 2565 feature_gate.go:328] unrecognized feature gate: Example
Apr 23 17:52:16.644108 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641272 2565 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 23 17:52:16.644108 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641275 2565 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 23 17:52:16.644108 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641279 2565 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 23 17:52:16.644108 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641281 2565 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 23 17:52:16.644572 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641284 2565 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 23 17:52:16.644572 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641286 2565 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 23 17:52:16.644572 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641289 2565 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 23 17:52:16.644572 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641291 2565 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 23 17:52:16.644572 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641294 2565 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 23 17:52:16.644572 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641298 2565 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 23 17:52:16.644572 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641300 2565 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 23 17:52:16.644572 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641303 2565 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 23 17:52:16.644572 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.641305 2565 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 23 17:52:16.644572 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641384 2565 flags.go:64] FLAG: --address="0.0.0.0"
Apr 23 17:52:16.644572 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641392 2565 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Apr 23 17:52:16.644572 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641399 2565 flags.go:64] FLAG: --anonymous-auth="true"
Apr 23 17:52:16.644572 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641410 2565 flags.go:64] FLAG: --application-metrics-count-limit="100"
Apr 23 17:52:16.644572 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641416 2565 flags.go:64] FLAG: --authentication-token-webhook="false"
Apr 23 17:52:16.644572 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641419 2565 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Apr 23 17:52:16.644572 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641424 2565 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Apr 23 17:52:16.644572 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641428 2565 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Apr 23 17:52:16.644572 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641431 2565 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Apr 23 17:52:16.644572 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641434 2565 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Apr 23 17:52:16.644572 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641438 2565 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Apr 23 17:52:16.644572 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641442 2565 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Apr 23 17:52:16.645107 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641445 2565 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Apr 23 17:52:16.645107 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641448 2565 flags.go:64] FLAG: --cgroup-root=""
Apr 23 17:52:16.645107 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641451 2565 flags.go:64] FLAG: --cgroups-per-qos="true"
Apr 23 17:52:16.645107 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641454 2565 flags.go:64] FLAG: --client-ca-file=""
Apr 23 17:52:16.645107 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641456 2565 flags.go:64] FLAG: --cloud-config=""
Apr 23 17:52:16.645107 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641459 2565 flags.go:64] FLAG: --cloud-provider="external"
Apr 23 17:52:16.645107 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641462 2565 flags.go:64] FLAG: --cluster-dns="[]"
Apr 23 17:52:16.645107 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641467 2565 flags.go:64] FLAG: --cluster-domain=""
Apr 23 17:52:16.645107 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641469 2565 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Apr 23 17:52:16.645107 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641473 2565 flags.go:64] FLAG: --config-dir=""
Apr 23 17:52:16.645107 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641476 2565 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Apr 23 17:52:16.645107 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641479 2565 flags.go:64] FLAG: --container-log-max-files="5"
Apr 23 17:52:16.645107 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641483 2565 flags.go:64] FLAG: --container-log-max-size="10Mi"
Apr 23 17:52:16.645107 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641486 2565 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Apr 23 17:52:16.645107 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641489 2565 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Apr 23 17:52:16.645107 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641492 2565 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Apr 23 17:52:16.645107 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641495 2565 flags.go:64] FLAG: --contention-profiling="false"
Apr 23 17:52:16.645107 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641498 2565 flags.go:64] FLAG: --cpu-cfs-quota="true"
Apr 23 17:52:16.645107 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641501 2565 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Apr 23 17:52:16.645107 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641504 2565 flags.go:64] FLAG: --cpu-manager-policy="none"
Apr 23 17:52:16.645107 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641507 2565 flags.go:64] FLAG: --cpu-manager-policy-options=""
Apr 23 17:52:16.645107 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641511 2565 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Apr 23 17:52:16.645107 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641514 2565 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Apr 23 17:52:16.645107 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641517 2565 flags.go:64] FLAG: --enable-debugging-handlers="true"
Apr 23 17:52:16.645107 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641520 2565 flags.go:64] FLAG: --enable-load-reader="false"
Apr 23 17:52:16.645739 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641523 2565 flags.go:64] FLAG: --enable-server="true"
Apr 23 17:52:16.645739 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641527 2565 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Apr 23 17:52:16.645739 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641531 2565 flags.go:64] FLAG: --event-burst="100"
Apr 23 17:52:16.645739 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641534 2565 flags.go:64] FLAG: --event-qps="50"
Apr 23 17:52:16.645739 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641537 2565 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Apr 23 17:52:16.645739 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641540 2565 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Apr 23 17:52:16.645739 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641544 2565 flags.go:64] FLAG: --eviction-hard=""
Apr 23 17:52:16.645739 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641548 2565 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Apr 23 17:52:16.645739 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641550 2565 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Apr 23 17:52:16.645739 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641553 2565 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Apr 23 17:52:16.645739 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641556 2565 flags.go:64] FLAG: --eviction-soft=""
Apr 23 17:52:16.645739 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641559 2565 flags.go:64] FLAG: --eviction-soft-grace-period=""
Apr 23 17:52:16.645739 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641562 2565 flags.go:64] FLAG: --exit-on-lock-contention="false"
Apr 23 17:52:16.645739 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641565 2565 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Apr 23 17:52:16.645739 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641569 2565 flags.go:64] FLAG: --experimental-mounter-path=""
Apr 23 17:52:16.645739 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641572 2565 flags.go:64] FLAG: --fail-cgroupv1="false"
Apr 23 17:52:16.645739 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641575 2565 flags.go:64] FLAG: --fail-swap-on="true"
Apr 23 17:52:16.645739 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641578 2565 flags.go:64] FLAG: --feature-gates=""
Apr 23 17:52:16.645739 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641582 2565 flags.go:64] FLAG: --file-check-frequency="20s"
Apr 23 17:52:16.645739 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641585 2565 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Apr 23 17:52:16.645739 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641588 2565 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Apr 23 17:52:16.645739 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641591 2565 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Apr 23 17:52:16.645739 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641594 2565 flags.go:64] FLAG: --healthz-port="10248"
Apr 23 17:52:16.645739 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641597 2565 flags.go:64] FLAG: --help="false"
Apr 23 17:52:16.645739 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641600 2565 flags.go:64] FLAG: --hostname-override="ip-10-0-131-177.ec2.internal"
Apr 23 17:52:16.646358 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641604 2565 flags.go:64] FLAG: --housekeeping-interval="10s"
Apr 23 17:52:16.646358 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641607 2565 flags.go:64] FLAG: --http-check-frequency="20s"
Apr 23 17:52:16.646358 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641610 2565 flags.go:64] FLAG: --image-credential-provider-bin-dir="/usr/libexec/kubelet-image-credential-provider-plugins"
Apr 23 17:52:16.646358 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641614 2565 flags.go:64] FLAG: --image-credential-provider-config="/etc/kubernetes/credential-providers/ecr-credential-provider.yaml"
Apr 23 17:52:16.646358 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641617 2565 flags.go:64] FLAG: --image-gc-high-threshold="85"
Apr 23 17:52:16.646358 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641620 2565 flags.go:64] FLAG: --image-gc-low-threshold="80"
Apr 23 17:52:16.646358 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641623 2565 flags.go:64] FLAG: --image-service-endpoint=""
Apr 23 17:52:16.646358 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641626 2565 flags.go:64] FLAG: --kernel-memcg-notification="false"
Apr 23 17:52:16.646358 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641630 2565 flags.go:64] FLAG: --kube-api-burst="100"
Apr 23 17:52:16.646358 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641633 2565 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Apr 23 17:52:16.646358 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641636 2565 flags.go:64] FLAG: --kube-api-qps="50"
Apr 23 17:52:16.646358 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641639 2565 flags.go:64] FLAG: --kube-reserved=""
Apr 23 17:52:16.646358 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641642 2565 flags.go:64] FLAG: --kube-reserved-cgroup=""
Apr 23 17:52:16.646358 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641644 2565 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Apr 23 17:52:16.646358 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641647 2565 flags.go:64] FLAG: --kubelet-cgroups=""
Apr 23 17:52:16.646358 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641650 2565 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Apr 23 17:52:16.646358 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641653 2565 flags.go:64] FLAG: --lock-file=""
Apr 23 17:52:16.646358 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641656 2565 flags.go:64] FLAG: --log-cadvisor-usage="false"
Apr 23 17:52:16.646358 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641658 2565 flags.go:64] FLAG: --log-flush-frequency="5s"
Apr 23 17:52:16.646358 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641661 2565 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Apr 23 17:52:16.646358 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641666 2565 flags.go:64] FLAG: --log-json-split-stream="false"
Apr 23 17:52:16.646358 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641669 2565 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Apr 23 17:52:16.646358 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641672 2565 flags.go:64] FLAG: --log-text-split-stream="false"
Apr 23 17:52:16.646358 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641675 2565 flags.go:64] FLAG: --logging-format="text"
Apr 23 17:52:16.647002 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641678 2565 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Apr 23 17:52:16.647002 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641681 2565 flags.go:64] FLAG: --make-iptables-util-chains="true"
Apr 23 17:52:16.647002 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641684 2565 flags.go:64] FLAG: --manifest-url=""
Apr 23 17:52:16.647002 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641687 2565 flags.go:64] FLAG: --manifest-url-header=""
Apr 23 17:52:16.647002 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641692 2565 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Apr 23 17:52:16.647002 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641695 2565 flags.go:64] FLAG: --max-open-files="1000000"
Apr 23 17:52:16.647002 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641699 2565 flags.go:64] FLAG: --max-pods="110"
Apr 23 17:52:16.647002 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641702 2565 flags.go:64] FLAG: --maximum-dead-containers="-1"
Apr 23 17:52:16.647002 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641705 2565 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Apr 23 17:52:16.647002 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641708 2565 flags.go:64] FLAG: --memory-manager-policy="None"
Apr 23 17:52:16.647002 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641711 2565 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Apr 23 17:52:16.647002 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641714 2565 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Apr 23 17:52:16.647002 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641717 2565 flags.go:64] FLAG: --node-ip="0.0.0.0"
Apr 23 17:52:16.647002 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641720 2565 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhel"
Apr 23 17:52:16.647002 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641728 2565 flags.go:64] FLAG: --node-status-max-images="50"
Apr 23 17:52:16.647002 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641731 2565 flags.go:64] FLAG: --node-status-update-frequency="10s"
Apr 23 17:52:16.647002 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641734 2565 flags.go:64] FLAG: --oom-score-adj="-999"
Apr 23 17:52:16.647002 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641738 2565 flags.go:64] FLAG: --pod-cidr=""
Apr 23 17:52:16.647002 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641741 2565 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8cfe89231412ff3ee8cb6207fa0be33cad0f08e88c9c0f1e9f7e8c6f14d6715"
Apr 23 17:52:16.647002 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641746 2565 flags.go:64] FLAG: --pod-manifest-path=""
Apr 23 17:52:16.647002 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641749 2565 flags.go:64] FLAG: --pod-max-pids="-1"
Apr 23 17:52:16.647002 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641752 2565 flags.go:64] FLAG: --pods-per-core="0"
Apr 23 17:52:16.647002 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641755 2565 flags.go:64] FLAG: --port="10250"
Apr 23 17:52:16.647002 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641758 2565 flags.go:64] FLAG: --protect-kernel-defaults="false"
Apr 23 17:52:16.647592 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641761 2565 flags.go:64] FLAG: --provider-id="aws:///us-east-1a/i-0fcf0c07b29a9c41a"
Apr 23 17:52:16.647592 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641765 2565 flags.go:64] FLAG: --qos-reserved=""
Apr 23 17:52:16.647592 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641768 2565 flags.go:64] FLAG: --read-only-port="10255"
Apr 23 17:52:16.647592 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641771 2565 flags.go:64] FLAG: --register-node="true"
Apr 23 17:52:16.647592 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641774 2565 flags.go:64] FLAG: --register-schedulable="true"
Apr 23 17:52:16.647592 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641776 2565 flags.go:64] FLAG: --register-with-taints=""
Apr 23 17:52:16.647592 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641780 2565 flags.go:64] FLAG: --registry-burst="10"
Apr 23 17:52:16.647592 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641783 2565 flags.go:64] FLAG: --registry-qps="5"
Apr 23 17:52:16.647592 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641786 2565 flags.go:64] FLAG: --reserved-cpus=""
Apr 23 17:52:16.647592 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641788 2565 flags.go:64] FLAG: --reserved-memory=""
Apr 23 17:52:16.647592 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641792 2565 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Apr 23 17:52:16.647592 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641795 2565 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Apr 23 17:52:16.647592 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641798 2565 flags.go:64] FLAG: --rotate-certificates="false"
Apr 23 17:52:16.647592 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641801 2565 flags.go:64] FLAG: --rotate-server-certificates="false"
Apr 23 17:52:16.647592 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641804 2565 flags.go:64] FLAG: --runonce="false"
Apr 23 17:52:16.647592 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641806 2565 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Apr 23 17:52:16.647592 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641809 2565 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Apr 23 17:52:16.647592 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641813 2565 flags.go:64] FLAG: --seccomp-default="false"
Apr 23 17:52:16.647592 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641816 2565 flags.go:64] FLAG: --serialize-image-pulls="true"
Apr 23 17:52:16.647592 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641818 2565 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Apr 23 17:52:16.647592 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641822 2565 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Apr 23 17:52:16.647592 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641825 2565 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Apr 23 17:52:16.647592 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641831 2565 flags.go:64] FLAG: --storage-driver-password="root"
Apr 23 17:52:16.647592 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641833 2565 flags.go:64] FLAG: --storage-driver-secure="false"
Apr 23 17:52:16.647592 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641836 2565 flags.go:64] FLAG: --storage-driver-table="stats"
Apr 23 17:52:16.647592 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641839 2565 flags.go:64] FLAG: --storage-driver-user="root"
Apr 23 17:52:16.648273 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641842 2565 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Apr 23 17:52:16.648273 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641845 2565 flags.go:64] FLAG: --sync-frequency="1m0s"
Apr 23 17:52:16.648273 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641848 2565 flags.go:64] FLAG: --system-cgroups=""
Apr 23 17:52:16.648273 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641852 2565 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Apr 23 17:52:16.648273 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641857 2565 flags.go:64] FLAG: --system-reserved-cgroup=""
Apr 23 17:52:16.648273 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641860 2565 flags.go:64] FLAG: --tls-cert-file=""
Apr 23 17:52:16.648273 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641863 2565 flags.go:64] FLAG: --tls-cipher-suites="[]"
Apr 23 17:52:16.648273 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641867 2565 flags.go:64] FLAG: --tls-min-version=""
Apr 23 17:52:16.648273 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641870 2565 flags.go:64] FLAG: --tls-private-key-file=""
Apr 23 17:52:16.648273 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641872 2565 flags.go:64] FLAG: --topology-manager-policy="none"
Apr 23 17:52:16.648273 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641875 2565 flags.go:64] FLAG: --topology-manager-policy-options=""
Apr 23 17:52:16.648273 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641878 2565 flags.go:64] FLAG: --topology-manager-scope="container"
Apr 23 17:52:16.648273 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641881 2565 flags.go:64] FLAG: --v="2"
Apr 23 17:52:16.648273 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641886 2565 flags.go:64] FLAG: --version="false"
Apr 23 17:52:16.648273 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641889 2565 flags.go:64] FLAG: --vmodule=""
Apr 23 17:52:16.648273 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641894 2565 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Apr 23 17:52:16.648273 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.641897 2565 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Apr 23 17:52:16.648273 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642001 2565 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 23 17:52:16.648273 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642005 2565 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 23 17:52:16.648273 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642008 2565 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 23 17:52:16.648273 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642011 2565 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 23 17:52:16.648273 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642014 2565 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 23 17:52:16.648273 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642016 2565 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 23 17:52:16.648838 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642022 2565 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 23 17:52:16.648838 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642026 2565 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 23 17:52:16.648838 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642029 2565 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 23 17:52:16.648838 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642032 2565 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 23 17:52:16.648838 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642035 2565 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 23 17:52:16.648838 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642039 2565 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 23 17:52:16.648838 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642042 2565 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 23 17:52:16.648838 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642045 2565 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 23 17:52:16.648838 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642048 2565 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 23 17:52:16.648838 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642050 2565 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 23 17:52:16.648838 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642054 2565 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 23 17:52:16.648838 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642057 2565 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 23 17:52:16.648838 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642059 2565 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 23 17:52:16.648838 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642062 2565 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 23 17:52:16.648838 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642065 2565 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 23 17:52:16.648838 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642067 2565 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 23 17:52:16.648838 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642070 2565 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 23 17:52:16.648838 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642072 2565 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 23 17:52:16.648838 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642075 2565 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 23 17:52:16.649400 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642077 2565 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 23 17:52:16.649400 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642080 2565 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 23 17:52:16.649400 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642083 2565 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 23 17:52:16.649400 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642086 2565 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 23 17:52:16.649400 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642089 2565 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 23 17:52:16.649400 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642092 2565 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 23 17:52:16.649400 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642094 2565 feature_gate.go:328] unrecognized feature gate: Example
Apr 23 17:52:16.649400 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642097 2565 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 23 17:52:16.649400 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642100 2565 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 23 17:52:16.649400 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642102 2565 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 23 17:52:16.649400 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642105 2565 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 23 17:52:16.649400 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642109 2565 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 23 17:52:16.649400 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642113 2565 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 23 17:52:16.649400 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642117 2565 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 23 17:52:16.649400 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642120 2565 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 23 17:52:16.649400 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642123 2565 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 23 17:52:16.649400 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642126 2565 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 23 17:52:16.649400 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642129 2565 feature_gate.go:328] unrecognized feature gate: Example2
Apr 23 17:52:16.649400 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642133 2565 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 23 17:52:16.649400 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642135 2565 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 23 17:52:16.649886 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642138 2565 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 23 17:52:16.649886 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642140 2565 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 23 17:52:16.649886 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642143 2565 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 23 17:52:16.649886 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642146 2565 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 23 17:52:16.649886 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642149 2565 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 23 17:52:16.649886 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642152 2565 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 23 17:52:16.649886 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642155 2565 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 23 17:52:16.649886 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642158 2565 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 23 17:52:16.649886 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642160 2565 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 23 17:52:16.649886 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642163 2565 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 23 17:52:16.649886 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642165 2565 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 23 17:52:16.649886 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642168 2565 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 23 17:52:16.649886 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642171 2565 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 23 17:52:16.649886 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642173 2565 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 23 17:52:16.649886 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642175 2565 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 23 17:52:16.649886 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642178 2565 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 23 17:52:16.649886 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642180 2565 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 23 17:52:16.649886 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642183 2565 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 23 17:52:16.649886 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642185 2565 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 23 17:52:16.650379 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642188 2565 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 23 17:52:16.650379 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642191 2565 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 23 17:52:16.650379 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642193 2565 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 23 17:52:16.650379 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642196 2565 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 23 17:52:16.650379 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642198 2565 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 23 17:52:16.650379 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642201 2565 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 23 17:52:16.650379 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642204 2565 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 23 17:52:16.650379 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642207 2565 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 23 17:52:16.650379 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642210 2565 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 23 17:52:16.650379 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642213 2565 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 23 17:52:16.650379 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642215 2565 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 23 17:52:16.650379 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642220 2565 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 23 17:52:16.650379 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642222 2565 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 23 17:52:16.650379 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642225 2565 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 23 17:52:16.650379 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642228 2565 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 23 17:52:16.650379 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642231 2565 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 23 17:52:16.650379 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642233 2565 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 23 17:52:16.650379 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642236 2565 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 23 17:52:16.650379 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642239 2565 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 23 17:52:16.650379 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642242 2565 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 23 17:52:16.650919 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642244 2565 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 23 17:52:16.650919 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.642247 2565 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 23 17:52:16.650919 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.643203 2565 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Apr 23 17:52:16.651817 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.651796 2565 server.go:530] "Kubelet version" kubeletVersion="v1.33.9"
Apr 23 17:52:16.651851 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.651818 2565 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 23 17:52:16.651881 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651868 2565 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 23 17:52:16.651881 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651874 2565 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 23 17:52:16.651881 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651878 2565 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 23 17:52:16.651881 ip-10-0-131-177 kubenswrapper[2565]: W0423
17:52:16.651881 2565 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 23 17:52:16.652008 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651884 2565 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 23 17:52:16.652008 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651887 2565 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 23 17:52:16.652008 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651890 2565 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 23 17:52:16.652008 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651893 2565 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 23 17:52:16.652008 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651896 2565 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 23 17:52:16.652008 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651899 2565 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 23 17:52:16.652008 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651902 2565 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 23 17:52:16.652008 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651905 2565 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 23 17:52:16.652008 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651908 2565 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 23 17:52:16.652008 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651911 2565 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 23 17:52:16.652008 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651914 2565 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 23 17:52:16.652008 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651916 2565 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 23 17:52:16.652008 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651919 2565 feature_gate.go:328] unrecognized feature gate: Example Apr 23 17:52:16.652008 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651921 2565 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 23 17:52:16.652008 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651924 2565 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 23 17:52:16.652008 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651927 2565 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 23 17:52:16.652008 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651929 2565 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 23 17:52:16.652008 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651932 2565 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 23 17:52:16.652008 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651935 2565 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 23 17:52:16.652008 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651937 2565 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 23 17:52:16.652522 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651941 2565 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
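The long runs of "unrecognized feature gate" warnings above and below appear to come from OpenShift cluster-level gates (AdminNetworkPolicy, PinnedImages, GatewayAPIController, and so on) being handed to the kubelet's Kubernetes feature-gate parser, which registers only Kubernetes gates; the parser warns and continues, so the lines are noisy but evidently non-fatal. The toy Go sketch below is a stand-in, not the real k8s.io/component-base/featuregate code, and only reproduces the three behaviors visible in this journal: unknown gates warn, deprecated gates warn, GA gates warn.

```go
// featuregates_sketch.go - toy illustration (NOT the actual
// k8s.io/component-base/featuregate implementation) of why a gate
// registry warns on names it does not know and on deprecated/GA gates.
package main

import "fmt"

type stability int

const (
	alpha stability = iota
	beta
	ga
	deprecated
)

// known simulates the kubelet's registry; OpenShift-only gates such as
// AdminNetworkPolicy are deliberately absent, so setting them warns.
var known = map[string]stability{
	"KMSv1":                          deprecated,
	"ServiceAccountTokenNodeBinding": ga,
	"NodeSwap":                       beta,
}

func set(name string, value bool) {
	st, ok := known[name]
	if !ok {
		fmt.Printf("W unrecognized feature gate: %s\n", name)
		return
	}
	switch st {
	case deprecated:
		fmt.Printf("W Setting deprecated feature gate %s=%v. It will be removed in a future release.\n", name, value)
	case ga:
		fmt.Printf("W Setting GA feature gate %s=%v. It will be removed in a future release.\n", name, value)
	}
	// a real registry would record the value here
}

func main() {
	set("AdminNetworkPolicy", true) // OpenShift gate: unrecognized
	set("KMSv1", true)              // deprecated: warns but still applies
	set("ServiceAccountTokenNodeBinding", true) // GA: warns, already on
}
```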
Apr 23 17:52:16.652522 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651946 2565 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 23 17:52:16.652522 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651948 2565 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 23 17:52:16.652522 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651951 2565 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 23 17:52:16.652522 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651969 2565 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 23 17:52:16.652522 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651972 2565 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 23 17:52:16.652522 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651975 2565 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 23 17:52:16.652522 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651979 2565 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 23 17:52:16.652522 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651982 2565 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 23 17:52:16.652522 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651985 2565 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 23 17:52:16.652522 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651988 2565 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 23 17:52:16.652522 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651991 2565 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 23 17:52:16.652522 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651994 2565 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 23 17:52:16.652522 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.651997 2565 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 23 17:52:16.652522 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652000 2565 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 23 17:52:16.652522 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652002 2565 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 23 17:52:16.652522 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652005 2565 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 23 17:52:16.652522 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652007 2565 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 23 17:52:16.652522 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652010 2565 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 23 17:52:16.653055 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652014 2565 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 23 17:52:16.653055 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652016 2565 feature_gate.go:328] unrecognized feature gate: Example2 Apr 23 17:52:16.653055 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652019 2565 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 23 17:52:16.653055 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652022 2565 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 23 17:52:16.653055 ip-10-0-131-177 
kubenswrapper[2565]: W0423 17:52:16.652025 2565 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 23 17:52:16.653055 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652028 2565 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 23 17:52:16.653055 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652030 2565 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 23 17:52:16.653055 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652033 2565 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 23 17:52:16.653055 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652035 2565 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 23 17:52:16.653055 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652038 2565 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 23 17:52:16.653055 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652041 2565 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 23 17:52:16.653055 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652043 2565 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 23 17:52:16.653055 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652046 2565 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 23 17:52:16.653055 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652049 2565 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 23 17:52:16.653055 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652051 2565 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 23 17:52:16.653055 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652054 2565 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 23 17:52:16.653055 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652056 2565 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 23 17:52:16.653055 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652059 2565 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 23 17:52:16.653055 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652061 2565 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 23 17:52:16.653055 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652064 2565 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 23 17:52:16.653532 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652067 2565 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 23 17:52:16.653532 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652070 2565 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 23 17:52:16.653532 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652073 2565 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 23 17:52:16.653532 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652075 2565 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 23 17:52:16.653532 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652078 2565 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 23 17:52:16.653532 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652081 2565 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 23 17:52:16.653532 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652084 2565 
feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 23 17:52:16.653532 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652087 2565 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 23 17:52:16.653532 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652089 2565 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 23 17:52:16.653532 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652092 2565 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 23 17:52:16.653532 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652094 2565 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 23 17:52:16.653532 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652098 2565 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 23 17:52:16.653532 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652100 2565 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 23 17:52:16.653532 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652102 2565 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 23 17:52:16.653532 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652105 2565 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 23 17:52:16.653532 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652108 2565 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 23 17:52:16.653532 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652111 2565 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 23 17:52:16.653532 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652114 2565 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 23 17:52:16.653532 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652116 2565 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 23 17:52:16.654002 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652119 2565 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 23 17:52:16.654002 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652123 2565 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
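The "Setting deprecated feature gate KMSv1=true" entry above is a gate that still takes effect but is scheduled for removal. Gates normally reach the kubelet through the featureGates stanza of its config file (apiVersion kubelet.config.k8s.io/v1beta1). A minimal sketch that emits such a stanza follows; it uses a hand-rolled struct as a stand-in for the real KubeletConfiguration type, so treat the field set as illustrative only.

```go
// kubeletconfig_sketch.go - emits a minimal kubelet config-file stanza
// showing where featureGates live. The struct here is hand-rolled, a
// stand-in for the real KubeletConfiguration type.
package main

import (
	"encoding/json"
	"fmt"
)

type kubeletConfig struct {
	APIVersion   string          `json:"apiVersion"`
	Kind         string          `json:"kind"`
	FeatureGates map[string]bool `json:"featureGates,omitempty"`
}

func main() {
	cfg := kubeletConfig{
		APIVersion: "kubelet.config.k8s.io/v1beta1",
		Kind:       "KubeletConfiguration",
		// mirrors two gates visible in the journal above
		FeatureGates: map[string]bool{"KMSv1": true, "NodeSwap": false},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}
```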
Apr 23 17:52:16.654002 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652127 2565 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 23 17:52:16.654002 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652130 2565 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 23 17:52:16.654002 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.652137 2565 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 23 17:52:16.654002 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652235 2565 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 23 17:52:16.654002 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652239 2565 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 23 17:52:16.654002 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652242 2565 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 23 17:52:16.654002 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652245 2565 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 23 17:52:16.654002 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652248 2565 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 23 17:52:16.654002 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652251 2565 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 23 17:52:16.654002 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652253 2565 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 23 17:52:16.654002 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652256 2565 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 23 17:52:16.654002 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652259 2565 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 23 17:52:16.654002 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652261 2565 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 23 17:52:16.654390 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652271 2565 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 23 17:52:16.654390 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652274 2565 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 23 17:52:16.654390 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652277 2565 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 23 17:52:16.654390 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652280 2565 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 23 17:52:16.654390 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652283 2565 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 23 17:52:16.654390 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652286 2565 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 23 17:52:16.654390 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652288 2565 feature_gate.go:328] unrecognized 
feature gate: SigstoreImageVerification Apr 23 17:52:16.654390 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652291 2565 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 23 17:52:16.654390 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652293 2565 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 23 17:52:16.654390 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652296 2565 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 23 17:52:16.654390 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652299 2565 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 23 17:52:16.654390 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652301 2565 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 23 17:52:16.654390 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652304 2565 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 23 17:52:16.654390 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652306 2565 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 23 17:52:16.654390 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652309 2565 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 23 17:52:16.654390 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652312 2565 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 23 17:52:16.654390 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652314 2565 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 23 17:52:16.654390 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652317 2565 feature_gate.go:328] unrecognized feature gate: Example2 Apr 23 17:52:16.654390 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652320 2565 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 23 17:52:16.654390 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652323 2565 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
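Each feature_gate.go:384 entry dumps the effective gate set using Go's default map formatting, which makes the journal easy to audit mechanically. Below is a small stdlib-only parser for that dump; it assumes the "map[Key:bool ...]" layout seen in this log holds.

```go
// gatedump_parse.go - parses the effective-gate dump the kubelet prints
// (feature_gate.go:384 "feature gates: {map[K:v ...]}") into a map,
// which is handy when grepping large journals. Assumes Go's default
// space-separated Key:bool map formatting.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func parseGateDump(line string) map[string]bool {
	gates := map[string]bool{}
	start := strings.Index(line, "map[")
	end := strings.LastIndex(line, "]")
	if start < 0 || end < start {
		return gates
	}
	for _, pair := range strings.Fields(line[start+len("map["):end]) {
		k, v, ok := strings.Cut(pair, ":")
		if !ok {
			continue
		}
		b, err := strconv.ParseBool(v)
		if err != nil {
			continue
		}
		gates[k] = b
	}
	return gates
}

func main() {
	dump := `feature gates: {map[KMSv1:true NodeSwap:false ServiceAccountTokenNodeBinding:true]}`
	fmt.Println(parseGateDump(dump)) // map[KMSv1:true NodeSwap:false ...]
}
```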
Apr 23 17:52:16.654900 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652327 2565 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 23 17:52:16.654900 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652330 2565 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 23 17:52:16.654900 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652333 2565 feature_gate.go:328] unrecognized feature gate: Example Apr 23 17:52:16.654900 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652336 2565 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 23 17:52:16.654900 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652338 2565 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 23 17:52:16.654900 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652341 2565 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 23 17:52:16.654900 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652344 2565 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 23 17:52:16.654900 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652346 2565 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 23 17:52:16.654900 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652349 2565 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 23 17:52:16.654900 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652351 2565 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 23 17:52:16.654900 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652354 2565 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 23 17:52:16.654900 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652356 2565 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 23 17:52:16.654900 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652359 2565 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 23 17:52:16.654900 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652362 2565 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 23 17:52:16.654900 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652365 2565 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 23 17:52:16.654900 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652368 2565 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 23 17:52:16.654900 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652371 2565 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 23 17:52:16.654900 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652373 2565 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 23 17:52:16.654900 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652376 2565 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 23 17:52:16.654900 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652378 2565 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 23 17:52:16.655412 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652382 2565 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Apr 23 17:52:16.655412 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652385 2565 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 23 17:52:16.655412 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652387 2565 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 23 17:52:16.655412 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652390 2565 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 23 17:52:16.655412 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652392 2565 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 23 17:52:16.655412 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652395 2565 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 23 17:52:16.655412 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652397 2565 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 23 17:52:16.655412 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652400 2565 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 23 17:52:16.655412 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652402 2565 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 23 17:52:16.655412 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652404 2565 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 23 17:52:16.655412 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652407 2565 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 23 17:52:16.655412 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652409 2565 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 23 17:52:16.655412 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652412 2565 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 23 17:52:16.655412 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652414 2565 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 23 17:52:16.655412 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652417 2565 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 23 17:52:16.655412 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652419 2565 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 23 17:52:16.655412 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652422 2565 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 23 17:52:16.655412 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652424 2565 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 23 17:52:16.655412 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652427 2565 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 23 17:52:16.655412 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652429 2565 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 23 17:52:16.655901 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652432 2565 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 23 17:52:16.655901 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652434 2565 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 23 17:52:16.655901 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652436 2565 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 23 17:52:16.655901 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652439 2565 
feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 23 17:52:16.655901 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652442 2565 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 23 17:52:16.655901 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652444 2565 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 23 17:52:16.655901 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652448 2565 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 23 17:52:16.655901 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652450 2565 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 23 17:52:16.655901 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652453 2565 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 23 17:52:16.655901 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652455 2565 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 23 17:52:16.655901 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652458 2565 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 23 17:52:16.655901 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652460 2565 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 23 17:52:16.655901 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652462 2565 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 23 17:52:16.655901 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652465 2565 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 23 17:52:16.655901 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652467 2565 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 23 17:52:16.655901 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:16.652470 2565 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 23 17:52:16.656311 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.652475 2565 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Apr 23 17:52:16.656311 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.653591 2565 server.go:962] "Client rotation is on, will bootstrap in background"
Apr 23 17:52:16.656457 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.656443 2565 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Apr 23 17:52:16.657401 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.657390 2565 server.go:1019] "Starting client certificate rotation"
Apr 23 17:52:16.657505 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.657488 2565 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet"
Apr 23 17:52:16.657542 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.657528 2565 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
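The bootstrap and rotation entries above mean the kubelet is requesting its client certificate through a CSR and will keep rotating it afterwards. Inspecting the certificate the rotation loop manages needs only the standard library; the sketch below assumes the kubelet's usual client-cert symlink path, which may differ per setup.

```go
// certcheck.go - prints validity of the kubelet client certificate
// managed by the rotation loop above. The path is the common kubelet
// default (/var/lib/kubelet/pki/kubelet-client-current.pem); adjust it
// for your node layout.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		log.Fatal(err)
	}
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue // the file typically also carries the private key
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("subject=%s notAfter=%s remaining=%s\n",
			cert.Subject, cert.NotAfter.Format(time.RFC3339),
			time.Until(cert.NotAfter).Round(time.Minute))
	}
}
```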
Apr 23 17:52:16.690561 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.690533 2565 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Apr 23 17:52:16.695254 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.695230 2565 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Apr 23 17:52:16.713434 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.713416 2565 log.go:25] "Validated CRI v1 runtime API"
Apr 23 17:52:16.719614 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.719599 2565 log.go:25] "Validated CRI v1 image API"
Apr 23 17:52:16.720900 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.720884 2565 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 23 17:52:16.727056 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.727033 2565 fs.go:135] Filesystem UUIDs: map[019cea6d-2704-4339-ae2c-4fe7553eb508:/dev/nvme0n1p3 1673be2f-374d-47d9-a118-b833371b5e03:/dev/nvme0n1p4 7B77-95E7:/dev/nvme0n1p2]
Apr 23 17:52:16.727151 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.727054 2565 fs.go:136] Filesystem partitions: map[/dev/nvme0n1p3:{mountpoint:/boot major:259 minor:3 fsType:ext4 blockSize:0} /dev/nvme0n1p4:{mountpoint:/var major:259 minor:4 fsType:xfs blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}]
Apr 23 17:52:16.729177 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.729153 2565 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
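The fs.go:136 "Filesystem partitions" entry above enumerates the node's mount table, the same information Linux exposes under /proc. A stdlib-only sketch that prints the analogous mountpoint and fsType pairs:

```go
// mounts_sketch.go - lists device/mountpoint/fstype tuples the way
// cadvisor's fs.go:136 "Filesystem partitions" line does, by reading
// /proc/self/mounts directly.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/self/mounts")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// format: device mountpoint fstype options dump pass
		fields := strings.Fields(sc.Text())
		if len(fields) < 3 {
			continue
		}
		fmt.Printf("%s -> %s (%s)\n", fields[0], fields[1], fields[2])
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}
```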
Apr 23 17:52:16.733535 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.733417 2565 manager.go:217] Machine: {Timestamp:2026-04-23 17:52:16.731010498 +0000 UTC m=+0.518749749 CPUVendorID:GenuineIntel NumCores:8 NumPhysicalCores:4 NumSockets:1 CpuFrequency:3086504 MemoryCapacity:33164496896 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ec26c83db0c28da7f1c330019e304b97 SystemUUID:ec26c83d-b0c2-8da7-f1c3-30019e304b97 BootID:0e3dc43d-51b8-4786-871f-68e90b6c5e2a Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16582246400 Type:vfs Inodes:4048400 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6632902656 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/nvme0n1p4 DeviceMajor:259 DeviceMinor:4 Capacity:128243970048 Type:vfs Inodes:62651840 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6103040 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16582250496 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/nvme0n1p3 DeviceMajor:259 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[259:0:{Name:nvme0n1 Major:259 Minor:0 Size:128849018880 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:02:3b:c3:a4:af:fd Speed:0 Mtu:9001} {Name:ens5 MacAddress:02:3b:c3:a4:af:fd Speed:0 Mtu:9001} {Name:ovs-system MacAddress:e6:63:8b:69:a2:2c Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33164496896 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 4] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:1 Threads:[1 5] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:2 Threads:[2 6] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:3 Threads:[3 7] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:}] Caches:[{Id:0 Size:37486592 Type:Unified Level:3}] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Apr 23 17:52:16.733535 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.733523 2565 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Apr 23 17:52:16.733712 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.733695 2565 manager.go:233] Version: {KernelVersion:5.14.0-570.107.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20260414-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Apr 23 17:52:16.735447 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.735419 2565 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 23 17:52:16.735622 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.735449 2565 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-10-0-131-177.ec2.internal","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
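The nodeConfig dump above pins down the node-allocatable arithmetic: per the documented node-allocatable model, allocatable = capacity - SystemReserved - KubeReserved - hard eviction threshold. A small sketch that plugs in the values from this log's Machine: and nodeConfig entries; the real computation lives in the kubelet's container manager, so this is only a back-of-the-envelope check.

```go
// allocatable_sketch.go - back-of-the-envelope reproduction of the
// allocatable arithmetic implied by the nodeConfig above. Values are
// taken from this journal, not queried live.
package main

import "fmt"

const (
	gi = int64(1) << 30
	mi = int64(1) << 20
)

func main() {
	memCapacity := int64(33164496896) // MemoryCapacity from the Machine: line
	memSystemReserved := 1 * gi       // "SystemReserved":{"memory":"1Gi"}
	memKubeReserved := int64(0)       // "KubeReserved":null
	memEvictionHard := 100 * mi       // memory.available < 100Mi

	memAllocatable := memCapacity - memSystemReserved - memKubeReserved - memEvictionHard
	fmt.Printf("allocatable memory: %d bytes (%.2f Gi)\n",
		memAllocatable, float64(memAllocatable)/float64(gi))

	cpuCapacityMilli := int64(8 * 1000) // NumCores:8
	cpuReservedMilli := int64(500)      // "SystemReserved":{"cpu":"500m"}
	fmt.Printf("allocatable cpu: %dm\n", cpuCapacityMilli-cpuReservedMilli)
}
```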
Apr 23 17:52:16.735708 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.735631 2565 topology_manager.go:138] "Creating topology manager with none policy"
Apr 23 17:52:16.735708 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.735643 2565 container_manager_linux.go:306] "Creating device plugin manager"
Apr 23 17:52:16.735708 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.735666 2565 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Apr 23 17:52:16.736415 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.736403 2565 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Apr 23 17:52:16.738271 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.738258 2565 state_mem.go:36] "Initialized new in-memory state store"
Apr 23 17:52:16.738402 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.738391 2565 server.go:1267] "Using root directory" path="/var/lib/kubelet"
Apr 23 17:52:16.740703 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.740692 2565 kubelet.go:491] "Attempting to sync node with API server"
Apr 23 17:52:16.740769 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.740710 2565 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 23 17:52:16.740769 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.740727 2565 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Apr 23 17:52:16.740769 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.740743 2565 kubelet.go:397] "Adding apiserver pod source"
Apr 23 17:52:16.740769 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.740760 2565 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 23 17:52:16.741878 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.741864 2565 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Apr 23 17:52:16.741973 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.741887 2565 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Apr 23 17:52:16.747863 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.747840 2565 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.10-2.rhaos4.20.gita4d0894.el9" apiVersion="v1"
Apr 23 17:52:16.750033 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.750011 2565 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 23 17:52:16.755593 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.755574 2565 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Apr 23 17:52:16.755593 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.755596 2565 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Apr 23 17:52:16.755736 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.755610 2565 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Apr 23 17:52:16.755736 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.755619 2565 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Apr 23 17:52:16.755736 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.755626 2565 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Apr 23 17:52:16.755736 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.755635 2565 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
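The "Validated CRI v1 runtime API" and "Container runtime initialized ... cri-o" entries reflect Version calls against the runtime's CRI socket. A sketch of the same call using the k8s.io/cri-api v1 client follows; the cri-o socket path is the usual default and an assumption here, as is the availability of the grpc and cri-api modules.

```go
// criversion.go - issues a CRI v1 Version call like the one behind
// "Validated CRI v1 runtime API" above. Assumes cri-o's default socket
// path; adjust for other runtimes.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := runtimev1.NewRuntimeServiceClient(conn).
		Version(ctx, &runtimev1.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("runtime=%s version=%s apiVersion=%s\n",
		resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}
```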
pluginName="kubernetes.io/iscsi" Apr 23 17:52:16.755736 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.755646 2565 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Apr 23 17:52:16.755736 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.755654 2565 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc" Apr 23 17:52:16.755736 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.755661 2565 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Apr 23 17:52:16.755736 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.755688 2565 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Apr 23 17:52:16.755736 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:16.755664 2565 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 17:52:16.755736 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.755698 2565 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Apr 23 17:52:16.755736 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:16.755712 2565 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 23 17:52:16.756738 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.756727 2565 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Apr 23 17:52:16.756738 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.756738 2565 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Apr 23 17:52:16.760451 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.760437 2565 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 23 17:52:16.760516 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.760480 2565 server.go:1295] "Started kubelet" Apr 23 17:52:16.760608 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.760568 2565 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 23 17:52:16.760682 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.760628 2565 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 23 17:52:16.760736 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.760688 2565 server_v1.go:47] "podresources" method="list" useActivePods=true Apr 23 17:52:16.760786 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.760732 2565 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-tfwx8" Apr 23 17:52:16.761259 ip-10-0-131-177 systemd[1]: Started Kubernetes Kubelet. 
Apr 23 17:52:16.762212 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.762196 2565 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 23 17:52:16.763175 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.763161 2565 server.go:317] "Adding debug handlers to kubelet server"
Apr 23 17:52:16.764035 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.764016 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:16.768768 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:16.768745 2565 kubelet.go:1618] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Apr 23 17:52:16.768860 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.768794 2565 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving"
Apr 23 17:52:16.769619 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.769607 2565 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 23 17:52:16.770570 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.770539 2565 factory.go:153] Registering CRI-O factory
Apr 23 17:52:16.770570 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.770569 2565 factory.go:223] Registration of the crio container factory successfully
Apr 23 17:52:16.770667 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.770634 2565 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Apr 23 17:52:16.770667 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.770643 2565 factory.go:55] Registering systemd factory
Apr 23 17:52:16.770667 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.770650 2565 factory.go:223] Registration of the systemd container factory successfully
Apr 23 17:52:16.770801 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.770674 2565 factory.go:103] Registering Raw factory
Apr 23 17:52:16.770801 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.770690 2565 manager.go:1196] Started watching for new ooms in manager
Apr 23 17:52:16.770801 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:16.770780 2565 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-131-177.ec2.internal\" not found"
Apr 23 17:52:16.771137 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.771123 2565 manager.go:319] Starting recovery of all containers
Apr 23 17:52:16.772884 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.772861 2565 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 23 17:52:16.772884 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.772872 2565 volume_manager.go:295] "The desired_state_of_world populator starts"
Apr 23 17:52:16.773045 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.772898 2565 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 23 17:52:16.773102 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.773059 2565 reconstruct.go:97] "Volume reconstruction finished"
Apr 23 17:52:16.773102 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.773067 2565 reconciler.go:26] "Reconciler: start to sync state"
Apr 23 17:52:16.773341 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:16.773321 2565 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Apr 23 17:52:16.773449 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:16.773430 2565 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 23 17:52:16.774151 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:16.773154 2565 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-131-177.ec2.internal.18a90dd5124aa9df default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-131-177.ec2.internal,UID:ip-10-0-131-177.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.760449503 +0000 UTC m=+0.548188753,LastTimestamp:2026-04-23 17:52:16.760449503 +0000 UTC m=+0.548188753,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:52:16.780104 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.780088 2565 manager.go:324] Recovery completed
Apr 23 17:52:16.783555 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:16.783531 2565 watcher.go:152] Failed to watch directory "/sys/fs/cgroup/system.slice/systemd-update-utmp-runlevel.service": inotify_add_watch /sys/fs/cgroup/system.slice/systemd-update-utmp-runlevel.service: no such file or directory
Apr 23 17:52:16.786682 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.786670 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:16.790606 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.790590 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:16.790666 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.790620 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:16.790666 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.790634 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:16.791160 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.791142 2565 cpu_manager.go:222] "Starting CPU manager" policy="none"
Apr 23 17:52:16.791160 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.791159 2565 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s"
Apr 23 17:52:16.791270 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.791178 2565 state_mem.go:36] "Initialized new in-memory state store"
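The controller.go:145 entry above shows the node heartbeat Lease in kube-node-lease failing for the same bootstrap reason; it should succeed once the client certificate is issued. A client-go sketch that reads that Lease for this node; the kubeconfig path is again a placeholder.

```go
// leasecheck.go - reads the per-node heartbeat Lease that
// controller.go:145 above failed to ensure during bootstrap.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.kubeconfig") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").
		Get(context.Background(), "ip-10-0-131-177.ec2.internal", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	holder := ""
	if lease.Spec.HolderIdentity != nil {
		holder = *lease.Spec.HolderIdentity
	}
	fmt.Printf("holder=%s renewed=%v\n", holder, lease.Spec.RenewTime)
}
```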
Apr 23 17:52:16.792386 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:16.792315 2565 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-131-177.ec2.internal.18a90dd51416d338 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-131-177.ec2.internal,UID:ip-10-0-131-177.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-131-177.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.790606648 +0000 UTC m=+0.578345899,LastTimestamp:2026-04-23 17:52:16.790606648 +0000 UTC m=+0.578345899,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:52:16.793583 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.793571 2565 policy_none.go:49] "None policy: Start"
Apr 23 17:52:16.793641 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.793590 2565 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 23 17:52:16.793641 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.793600 2565 state_mem.go:35] "Initializing new in-memory state store"
Apr 23 17:52:16.801152 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:16.801090 2565 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-131-177.ec2.internal.18a90dd514172640 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-131-177.ec2.internal,UID:ip-10-0-131-177.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-131-177.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.790627904 +0000 UTC m=+0.578367155,LastTimestamp:2026-04-23 17:52:16.790627904 +0000 UTC m=+0.578367155,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:52:16.809204 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:16.809145 2565 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-131-177.ec2.internal.18a90dd514174d7c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-131-177.ec2.internal,UID:ip-10-0-131-177.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-131-177.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.790637948 +0000 UTC m=+0.578377199,LastTimestamp:2026-04-23 17:52:16.790637948 +0000 UTC m=+0.578377199,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:52:16.839264 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.839252 2565 manager.go:341] "Starting Device Plugin manager"
Apr 23 17:52:16.839364 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:16.839290 2565 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 23 17:52:16.839364 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.839305 2565 server.go:85] "Starting device plugin registration server"
Apr 23 17:52:16.839577 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.839566 2565 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 23 17:52:16.839622 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.839581 2565 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 23 17:52:16.839699 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.839679 2565 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Apr 23 17:52:16.839813 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.839773 2565 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Apr 23 17:52:16.839813 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.839783 2565 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 23 17:52:16.840570 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:16.840551 2565 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\""
Apr 23 17:52:16.840645 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:16.840593 2565 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-131-177.ec2.internal\" not found"
Apr 23 17:52:16.860324 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:16.860250 2565 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-131-177.ec2.internal.18a90dd517b68816 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-131-177.ec2.internal,UID:ip-10-0-131-177.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.851404822 +0000 UTC m=+0.639144060,LastTimestamp:2026-04-23 17:52:16.851404822 +0000 UTC m=+0.639144060,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:52:16.904968 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.904878 2565 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 23 17:52:16.906152 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.906133 2565 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 23 17:52:16.906214 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.906166 2565 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 23 17:52:16.906214 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.906192 2565 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 23 17:52:16.906214 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.906202 2565 kubelet.go:2451] "Starting kubelet main sync loop"
Apr 23 17:52:16.906358 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:16.906242 2565 kubelet.go:2475] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Apr 23 17:52:16.914068 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:16.914045 2565 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 23 17:52:16.940312 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.940282 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:16.941411 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.941396 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:16.941487 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.941426 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:16.941487 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.941436 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:16.941487 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:16.941462 2565 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:52:16.952208 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:16.952125 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-131-177.ec2.internal.18a90dd51416d338\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-131-177.ec2.internal.18a90dd51416d338 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-131-177.ec2.internal,UID:ip-10-0-131-177.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-131-177.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.790606648 +0000 UTC m=+0.578345899,LastTimestamp:2026-04-23 17:52:16.941411558 +0000 UTC m=+0.729150809,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:52:16.958714 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:16.958689 2565 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:52:16.958853 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:16.958770 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-131-177.ec2.internal.18a90dd514172640\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-131-177.ec2.internal.18a90dd514172640 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-131-177.ec2.internal,UID:ip-10-0-131-177.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-131-177.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.790627904 +0000 UTC m=+0.578367155,LastTimestamp:2026-04-23 17:52:16.941430982 +0000 UTC m=+0.729170233,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:52:16.960519 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:16.960463 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-131-177.ec2.internal.18a90dd514174d7c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-131-177.ec2.internal.18a90dd514174d7c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-131-177.ec2.internal,UID:ip-10-0-131-177.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-131-177.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.790637948 +0000 UTC m=+0.578377199,LastTimestamp:2026-04-23 17:52:16.941440155 +0000 UTC m=+0.729179407,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:52:16.976041 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:16.976016 2565 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms"
Apr 23 17:52:17.007140 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.007114 2565 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["kube-system/kube-apiserver-proxy-ip-10-0-131-177.ec2.internal","openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal"]
Apr 23 17:52:17.007206 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.007187 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:17.008115 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.008100 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:17.008182 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.008131 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:17.008182 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.008145 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:17.009486 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.009473 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:17.009606 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.009590 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-131-177.ec2.internal"
Apr 23 17:52:17.009663 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.009629 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:17.011971 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.011942 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:17.012054 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.011989 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:17.012054 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.012004 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:17.012054 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.011942 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:17.012192 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.012063 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:17.012192 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.012075 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:17.013285 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.013270 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal"
Apr 23 17:52:17.013338 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.013302 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:17.013985 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.013970 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:17.014047 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.013999 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:17.014047 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.014009 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:17.015184 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:17.015119 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-131-177.ec2.internal.18a90dd51416d338\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-131-177.ec2.internal.18a90dd51416d338 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-131-177.ec2.internal,UID:ip-10-0-131-177.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-131-177.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.790606648 +0000 UTC m=+0.578345899,LastTimestamp:2026-04-23 17:52:17.008115355 +0000 UTC m=+0.795854606,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:52:17.023679 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:17.023612 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-131-177.ec2.internal.18a90dd514172640\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-131-177.ec2.internal.18a90dd514172640 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-131-177.ec2.internal,UID:ip-10-0-131-177.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-131-177.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.790627904 +0000 UTC m=+0.578367155,LastTimestamp:2026-04-23 17:52:17.008137747 +0000 UTC m=+0.795877000,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:52:17.030190 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:17.030126 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-131-177.ec2.internal.18a90dd514174d7c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-131-177.ec2.internal.18a90dd514174d7c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-131-177.ec2.internal,UID:ip-10-0-131-177.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-131-177.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.790637948 +0000 UTC m=+0.578377199,LastTimestamp:2026-04-23 17:52:17.008149575 +0000 UTC m=+0.795888825,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
event="&Event{ObjectMeta:{ip-10-0-131-177.ec2.internal.18a90dd514174d7c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-131-177.ec2.internal,UID:ip-10-0-131-177.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-131-177.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.790637948 +0000 UTC m=+0.578377199,LastTimestamp:2026-04-23 17:52:17.008149575 +0000 UTC m=+0.795888825,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}" Apr 23 17:52:17.032533 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:17.032517 2565 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-131-177.ec2.internal\" not found" node="ip-10-0-131-177.ec2.internal" Apr 23 17:52:17.037211 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:17.037196 2565 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-131-177.ec2.internal\" not found" node="ip-10-0-131-177.ec2.internal" Apr 23 17:52:17.038677 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:17.038618 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-131-177.ec2.internal.18a90dd51416d338\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-131-177.ec2.internal.18a90dd51416d338 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-131-177.ec2.internal,UID:ip-10-0-131-177.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-131-177.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.790606648 +0000 UTC m=+0.578345899,LastTimestamp:2026-04-23 17:52:17.011977206 +0000 UTC m=+0.799716462,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}" Apr 23 17:52:17.045948 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:17.045889 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-131-177.ec2.internal.18a90dd514172640\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-131-177.ec2.internal.18a90dd514172640 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-131-177.ec2.internal,UID:ip-10-0-131-177.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-131-177.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.790627904 +0000 UTC m=+0.578367155,LastTimestamp:2026-04-23 17:52:17.011996554 +0000 UTC m=+0.799735806,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
Apr 23 17:52:17.054191 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:17.054133 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-131-177.ec2.internal.18a90dd514174d7c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-131-177.ec2.internal.18a90dd514174d7c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-131-177.ec2.internal,UID:ip-10-0-131-177.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-131-177.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.790637948 +0000 UTC m=+0.578377199,LastTimestamp:2026-04-23 17:52:17.012009552 +0000 UTC m=+0.799748805,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:52:17.062256 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:17.062195 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-131-177.ec2.internal.18a90dd51416d338\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-131-177.ec2.internal.18a90dd51416d338 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-131-177.ec2.internal,UID:ip-10-0-131-177.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-131-177.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.790606648 +0000 UTC m=+0.578345899,LastTimestamp:2026-04-23 17:52:17.012049789 +0000 UTC m=+0.799789040,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:52:17.071017 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:17.070931 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-131-177.ec2.internal.18a90dd514172640\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-131-177.ec2.internal.18a90dd514172640 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-131-177.ec2.internal,UID:ip-10-0-131-177.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-131-177.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.790627904 +0000 UTC m=+0.578367155,LastTimestamp:2026-04-23 17:52:17.012069735 +0000 UTC m=+0.799808985,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:52:17.074543 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.074524 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7e3c596a27faede1f97b6bb0972592f6-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal\" (UID: \"7e3c596a27faede1f97b6bb0972592f6\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal"
Apr 23 17:52:17.074632 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.074556 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/50b2c37a28961f1c8aacb6ad5db58d22-config\") pod \"kube-apiserver-proxy-ip-10-0-131-177.ec2.internal\" (UID: \"50b2c37a28961f1c8aacb6ad5db58d22\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-131-177.ec2.internal"
Apr 23 17:52:17.074632 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.074573 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/7e3c596a27faede1f97b6bb0972592f6-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal\" (UID: \"7e3c596a27faede1f97b6bb0972592f6\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal"
Apr 23 17:52:17.077462 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:17.077405 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-131-177.ec2.internal.18a90dd514174d7c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-131-177.ec2.internal.18a90dd514174d7c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-131-177.ec2.internal,UID:ip-10-0-131-177.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-131-177.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.790637948 +0000 UTC m=+0.578377199,LastTimestamp:2026-04-23 17:52:17.012079339 +0000 UTC m=+0.799818590,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:52:17.086200 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:17.086142 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-131-177.ec2.internal.18a90dd51416d338\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-131-177.ec2.internal.18a90dd51416d338 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-131-177.ec2.internal,UID:ip-10-0-131-177.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-131-177.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.790606648 +0000 UTC m=+0.578345899,LastTimestamp:2026-04-23 17:52:17.01398446 +0000 UTC m=+0.801723716,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:52:17.094294 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:17.094239 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-131-177.ec2.internal.18a90dd514172640\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-131-177.ec2.internal.18a90dd514172640 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-131-177.ec2.internal,UID:ip-10-0-131-177.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-131-177.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.790627904 +0000 UTC m=+0.578367155,LastTimestamp:2026-04-23 17:52:17.014004462 +0000 UTC m=+0.801743713,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:52:17.102596 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:17.102528 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-131-177.ec2.internal.18a90dd514174d7c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-131-177.ec2.internal.18a90dd514174d7c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-131-177.ec2.internal,UID:ip-10-0-131-177.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-131-177.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.790637948 +0000 UTC m=+0.578377199,LastTimestamp:2026-04-23 17:52:17.014012752 +0000 UTC m=+0.801752004,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:52:17.159744 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.159664 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:17.160697 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.160678 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:17.160790 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.160716 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:17.160790 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.160732 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:17.160790 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.160770 2565 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:52:17.168408 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:17.168271 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-131-177.ec2.internal.18a90dd51416d338\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-131-177.ec2.internal.18a90dd51416d338 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-131-177.ec2.internal,UID:ip-10-0-131-177.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-131-177.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.790606648 +0000 UTC m=+0.578345899,LastTimestamp:2026-04-23 17:52:17.160697752 +0000 UTC m=+0.948437003,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
event="&Event{ObjectMeta:{ip-10-0-131-177.ec2.internal.18a90dd51416d338 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-131-177.ec2.internal,UID:ip-10-0-131-177.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-131-177.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.790606648 +0000 UTC m=+0.578345899,LastTimestamp:2026-04-23 17:52:17.160697752 +0000 UTC m=+0.948437003,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}" Apr 23 17:52:17.175507 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.175488 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/50b2c37a28961f1c8aacb6ad5db58d22-config\") pod \"kube-apiserver-proxy-ip-10-0-131-177.ec2.internal\" (UID: \"50b2c37a28961f1c8aacb6ad5db58d22\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-131-177.ec2.internal" Apr 23 17:52:17.175602 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.175521 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/7e3c596a27faede1f97b6bb0972592f6-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal\" (UID: \"7e3c596a27faede1f97b6bb0972592f6\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal" Apr 23 17:52:17.175602 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.175539 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7e3c596a27faede1f97b6bb0972592f6-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal\" (UID: \"7e3c596a27faede1f97b6bb0972592f6\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal" Apr 23 17:52:17.175699 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.175614 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/50b2c37a28961f1c8aacb6ad5db58d22-config\") pod \"kube-apiserver-proxy-ip-10-0-131-177.ec2.internal\" (UID: \"50b2c37a28961f1c8aacb6ad5db58d22\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-131-177.ec2.internal" Apr 23 17:52:17.175699 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.175655 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7e3c596a27faede1f97b6bb0972592f6-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal\" (UID: \"7e3c596a27faede1f97b6bb0972592f6\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal" Apr 23 17:52:17.175699 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.175696 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/7e3c596a27faede1f97b6bb0972592f6-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal\" (UID: \"7e3c596a27faede1f97b6bb0972592f6\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal" Apr 23 17:52:17.177624 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:17.177607 2565 
Apr 23 17:52:17.177727 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:17.177669 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-131-177.ec2.internal.18a90dd514172640\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-131-177.ec2.internal.18a90dd514172640 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-131-177.ec2.internal,UID:ip-10-0-131-177.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-131-177.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.790627904 +0000 UTC m=+0.578367155,LastTimestamp:2026-04-23 17:52:17.160724209 +0000 UTC m=+0.948463461,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:52:17.184498 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:17.184434 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-131-177.ec2.internal.18a90dd514174d7c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-131-177.ec2.internal.18a90dd514174d7c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-131-177.ec2.internal,UID:ip-10-0-131-177.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-131-177.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.790637948 +0000 UTC m=+0.578377199,LastTimestamp:2026-04-23 17:52:17.160737193 +0000 UTC m=+0.948476444,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:52:17.336283 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.336248 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-131-177.ec2.internal"
Apr 23 17:52:17.338357 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.338336 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal"
Apr 23 17:52:17.386517 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:17.386487 2565 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms"
Apr 23 17:52:17.578488 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.578456 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:17.579460 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.579443 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:17.579514 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.579477 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:17.579514 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.579490 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:17.579579 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.579518 2565 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:52:17.586795 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:17.586694 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-131-177.ec2.internal.18a90dd51416d338\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-131-177.ec2.internal.18a90dd51416d338 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-131-177.ec2.internal,UID:ip-10-0-131-177.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-131-177.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.790606648 +0000 UTC m=+0.578345899,LastTimestamp:2026-04-23 17:52:17.579461462 +0000 UTC m=+1.367200714,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:52:17.595811 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:17.595790 2565 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:52:17.595938 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:17.595875 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-131-177.ec2.internal.18a90dd514172640\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-131-177.ec2.internal.18a90dd514172640 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-131-177.ec2.internal,UID:ip-10-0-131-177.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-131-177.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:16.790627904 +0000 UTC m=+0.578367155,LastTimestamp:2026-04-23 17:52:17.579484734 +0000 UTC m=+1.367223986,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:52:17.772093 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.772065 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:17.868706 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:17.868496 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50b2c37a28961f1c8aacb6ad5db58d22.slice/crio-c22565413329b3cefb9c25e8b3b62fd19671f96b2b16c7ed241a5af2bcb20cfa WatchSource:0}: Error finding container c22565413329b3cefb9c25e8b3b62fd19671f96b2b16c7ed241a5af2bcb20cfa: Status 404 returned error can't find the container with id c22565413329b3cefb9c25e8b3b62fd19671f96b2b16c7ed241a5af2bcb20cfa
Apr 23 17:52:17.868940 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:52:17.868916 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e3c596a27faede1f97b6bb0972592f6.slice/crio-b9b97f27208f421e4fa829dcd0a9bdf47ce0bf9101c9a9a0466117773288d2e4 WatchSource:0}: Error finding container b9b97f27208f421e4fa829dcd0a9bdf47ce0bf9101c9a9a0466117773288d2e4: Status 404 returned error can't find the container with id b9b97f27208f421e4fa829dcd0a9bdf47ce0bf9101c9a9a0466117773288d2e4
Apr 23 17:52:17.872871 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.872856 2565 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 23 17:52:17.882254 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:17.882180 2565 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd5549c28f5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal,UID:7e3c596a27faede1f97b6bb0972592f6,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\",Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:17.873086709 +0000 UTC m=+1.660825947,LastTimestamp:2026-04-23 17:52:17.873086709 +0000 UTC m=+1.660825947,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:52:17.889139 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:17.889068 2565 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{kube-apiserver-proxy-ip-10-0-131-177.ec2.internal.18a90dd5549d1f59 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-proxy-ip-10-0-131-177.ec2.internal,UID:50b2c37a28961f1c8aacb6ad5db58d22,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{haproxy},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:294af5c64228434d1ed6ee8ea3ac802e3c999aa847223e3b2efa18425a9fe421\",Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:17.873149785 +0000 UTC m=+1.660889036,LastTimestamp:2026-04-23 17:52:17.873149785 +0000 UTC m=+1.660889036,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:52:17.908967 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.908904 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal" event={"ID":"7e3c596a27faede1f97b6bb0972592f6","Type":"ContainerStarted","Data":"b9b97f27208f421e4fa829dcd0a9bdf47ce0bf9101c9a9a0466117773288d2e4"}
Apr 23 17:52:17.909834 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:17.909815 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-131-177.ec2.internal" event={"ID":"50b2c37a28961f1c8aacb6ad5db58d22","Type":"ContainerStarted","Data":"c22565413329b3cefb9c25e8b3b62fd19671f96b2b16c7ed241a5af2bcb20cfa"}
Apr 23 17:52:18.026894 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:18.026861 2565 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 23 17:52:18.193811 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:18.193707 2565 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="1.6s"
Apr 23 17:52:18.265154 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:18.265118 2565 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 23 17:52:18.350314 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:18.350273 2565 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 17:52:18.396147 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:18.396073 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:18.397296 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:18.397274 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:18.397435 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:18.397309 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:18.397435 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:18.397324 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:18.397435 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:18.397357 2565 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-131-177.ec2.internal" Apr 23 17:52:18.413773 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:18.413741 2565 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-131-177.ec2.internal" Apr 23 17:52:18.479725 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:18.479623 2565 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 23 17:52:18.773180 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:18.773147 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:19.548458 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:19.548374 2565 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{kube-apiserver-proxy-ip-10-0-131-177.ec2.internal.18a90dd5b7dd1bfd kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-proxy-ip-10-0-131-177.ec2.internal,UID:50b2c37a28961f1c8aacb6ad5db58d22,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{haproxy},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:294af5c64228434d1ed6ee8ea3ac802e3c999aa847223e3b2efa18425a9fe421\" in 1.665s (1.665s including waiting). 
Apr 23 17:52:19.556375 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:19.556276 2565 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd5b7e296da openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal,UID:7e3c596a27faede1f97b6bb0972592f6,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\" in 1.665s (1.665s including waiting). Image size: 468435751 bytes.,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:19.538646746 +0000 UTC m=+3.326385994,LastTimestamp:2026-04-23 17:52:19.538646746 +0000 UTC m=+3.326385994,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:52:19.620007 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:19.619913 2565 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{kube-apiserver-proxy-ip-10-0-131-177.ec2.internal.18a90dd5bbd70309 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-proxy-ip-10-0-131-177.ec2.internal,UID:50b2c37a28961f1c8aacb6ad5db58d22,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{haproxy},},Reason:Created,Message:Created container: haproxy,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:19.604996873 +0000 UTC m=+3.392736127,LastTimestamp:2026-04-23 17:52:19.604996873 +0000 UTC m=+3.392736127,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:52:19.628606 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:19.628518 2565 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{kube-apiserver-proxy-ip-10-0-131-177.ec2.internal.18a90dd5bc37ac75 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-proxy-ip-10-0-131-177.ec2.internal,UID:50b2c37a28961f1c8aacb6ad5db58d22,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{haproxy},},Reason:Started,Message:Started container haproxy,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:19.611331701 +0000 UTC m=+3.399070952,LastTimestamp:2026-04-23 17:52:19.611331701 +0000 UTC m=+3.399070952,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:52:19.770661 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:19.770630 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:19.804788 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:19.804722 2565 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="3.2s"
Apr 23 17:52:19.918271 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:19.918237 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-131-177.ec2.internal" event={"ID":"50b2c37a28961f1c8aacb6ad5db58d22","Type":"ContainerStarted","Data":"ae73808d3b3567c4baeee490d59b80198510e9953dd30788793c3b2a520a0247"}
Apr 23 17:52:19.918419 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:19.918327 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:19.920328 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:19.920310 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:19.920430 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:19.920335 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:19.920430 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:19.920345 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:19.920548 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:19.920532 2565 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-131-177.ec2.internal\" not found" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:52:20.014212 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:20.014175 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:20.015109 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:20.015092 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:20.015175 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:20.015127 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:20.015175 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:20.015139 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:20.015175 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:20.015169 2565 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-131-177.ec2.internal"
node="ip-10-0-131-177.ec2.internal" Apr 23 17:52:20.032908 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:20.032878 2565 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-131-177.ec2.internal" Apr 23 17:52:20.149049 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:20.148974 2565 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd5dbdfb48d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal,UID:7e3c596a27faede1f97b6bb0972592f6,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:20.142437517 +0000 UTC m=+3.930176768,LastTimestamp:2026-04-23 17:52:20.142437517 +0000 UTC m=+3.930176768,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}" Apr 23 17:52:20.158645 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:20.158575 2565 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd5dc51a21f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal,UID:7e3c596a27faede1f97b6bb0972592f6,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:20.149903903 +0000 UTC m=+3.937643174,LastTimestamp:2026-04-23 17:52:20.149903903 +0000 UTC m=+3.937643174,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}" Apr 23 17:52:20.158748 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:20.158642 2565 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 23 17:52:20.449515 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:20.449431 2565 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 23 17:52:20.771332 
ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:20.771304 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:20.882035 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:20.882006 2565 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 17:52:20.920653 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:20.920604 2565 generic.go:358] "Generic (PLEG): container finished" podID="7e3c596a27faede1f97b6bb0972592f6" containerID="27142473f3986902daf3cb5f72d30635db4427a1e95ff38dbf1bcdaa60309cc7" exitCode=0 Apr 23 17:52:20.920784 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:20.920684 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:20.920784 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:20.920694 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal" event={"ID":"7e3c596a27faede1f97b6bb0972592f6","Type":"ContainerDied","Data":"27142473f3986902daf3cb5f72d30635db4427a1e95ff38dbf1bcdaa60309cc7"} Apr 23 17:52:20.920784 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:20.920714 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:20.921436 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:20.921416 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:20.921436 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:20.921424 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:20.921574 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:20.921448 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:20.921574 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:20.921458 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:20.921574 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:20.921448 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:20.921800 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:20.921781 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:20.922010 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:20.921990 2565 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-131-177.ec2.internal\" not found" node="ip-10-0-131-177.ec2.internal" Apr 23 17:52:20.923560 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:20.922529 2565 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-131-177.ec2.internal\" not found" 
node="ip-10-0-131-177.ec2.internal" Apr 23 17:52:20.932841 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:20.932773 2565 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd60a898fe0 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal,UID:7e3c596a27faede1f97b6bb0972592f6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\" already present on machine,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:20.925321184 +0000 UTC m=+4.713060439,LastTimestamp:2026-04-23 17:52:20.925321184 +0000 UTC m=+4.713060439,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}" Apr 23 17:52:21.042528 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:21.042444 2565 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd610f2ca73 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal,UID:7e3c596a27faede1f97b6bb0972592f6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.032880755 +0000 UTC m=+4.820620015,LastTimestamp:2026-04-23 17:52:21.032880755 +0000 UTC m=+4.820620015,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}" Apr 23 17:52:21.048860 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:21.048783 2565 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd61171c735 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal,UID:7e3c596a27faede1f97b6bb0972592f6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.041202997 +0000 UTC m=+4.828942259,LastTimestamp:2026-04-23 17:52:21.041202997 +0000 UTC 
m=+4.828942259,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}" Apr 23 17:52:21.567138 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:21.567102 2565 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 23 17:52:21.771931 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:21.771907 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:21.923507 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:21.923437 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_7e3c596a27faede1f97b6bb0972592f6/kube-rbac-proxy-crio/0.log" Apr 23 17:52:21.923817 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:21.923763 2565 generic.go:358] "Generic (PLEG): container finished" podID="7e3c596a27faede1f97b6bb0972592f6" containerID="61d83c9a750013317ec8e069f2bea5f405aba4f286050cf8bcb5318d5d06f85f" exitCode=1 Apr 23 17:52:21.923817 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:21.923794 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal" event={"ID":"7e3c596a27faede1f97b6bb0972592f6","Type":"ContainerDied","Data":"61d83c9a750013317ec8e069f2bea5f405aba4f286050cf8bcb5318d5d06f85f"} Apr 23 17:52:21.923892 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:21.923851 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:21.924911 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:21.924898 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:21.924979 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:21.924925 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:21.924979 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:21.924935 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:21.925115 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:21.925104 2565 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-131-177.ec2.internal\" not found" node="ip-10-0-131-177.ec2.internal" Apr 23 17:52:21.925157 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:21.925148 2565 scope.go:117] "RemoveContainer" containerID="61d83c9a750013317ec8e069f2bea5f405aba4f286050cf8bcb5318d5d06f85f" Apr 23 17:52:21.937152 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:21.937078 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd60a898fe0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd60a898fe0 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal,UID:7e3c596a27faede1f97b6bb0972592f6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\" already present on machine,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:20.925321184 +0000 UTC m=+4.713060439,LastTimestamp:2026-04-23 17:52:21.927044294 +0000 UTC m=+5.714783551,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}" Apr 23 17:52:22.029541 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:22.029461 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd610f2ca73\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd610f2ca73 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal,UID:7e3c596a27faede1f97b6bb0972592f6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.032880755 +0000 UTC m=+4.820620015,LastTimestamp:2026-04-23 17:52:22.022151858 +0000 UTC m=+5.809891108,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}" Apr 23 17:52:22.039261 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:22.039166 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd61171c735\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd61171c735 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal,UID:7e3c596a27faede1f97b6bb0972592f6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.041202997 +0000 UTC m=+4.828942259,LastTimestamp:2026-04-23 17:52:22.029572707 +0000 UTC m=+5.817311961,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}" 
Apr 23 17:52:22.772125 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:22.772095 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:22.926169 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:22.926140 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_7e3c596a27faede1f97b6bb0972592f6/kube-rbac-proxy-crio/1.log"
Apr 23 17:52:22.926582 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:22.926510 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_7e3c596a27faede1f97b6bb0972592f6/kube-rbac-proxy-crio/0.log"
Apr 23 17:52:22.926825 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:22.926805 2565 generic.go:358] "Generic (PLEG): container finished" podID="7e3c596a27faede1f97b6bb0972592f6" containerID="398db5e385589124003ee823dedae0b58c01ee856ed35955943c6d334e4bdc13" exitCode=1
Apr 23 17:52:22.926867 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:22.926839 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal" event={"ID":"7e3c596a27faede1f97b6bb0972592f6","Type":"ContainerDied","Data":"398db5e385589124003ee823dedae0b58c01ee856ed35955943c6d334e4bdc13"}
Apr 23 17:52:22.926899 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:22.926876 2565 scope.go:117] "RemoveContainer" containerID="61d83c9a750013317ec8e069f2bea5f405aba4f286050cf8bcb5318d5d06f85f"
Apr 23 17:52:22.926899 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:22.926891 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:22.928052 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:22.927916 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:22.928052 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:22.927972 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:22.928052 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:22.927989 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:22.928309 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:22.928295 2565 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-131-177.ec2.internal\" not found" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:52:22.928365 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:22.928355 2565 scope.go:117] "RemoveContainer" containerID="398db5e385589124003ee823dedae0b58c01ee856ed35955943c6d334e4bdc13"
Apr 23 17:52:22.928524 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:22.928506 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_openshift-machine-config-operator(7e3c596a27faede1f97b6bb0972592f6)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal" podUID="7e3c596a27faede1f97b6bb0972592f6"
Apr 23 17:52:22.938558 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:22.938486 2565 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd681ef1f9a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal,UID:7e3c596a27faede1f97b6bb0972592f6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_openshift-machine-config-operator(7e3c596a27faede1f97b6bb0972592f6),Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:22.928465818 +0000 UTC m=+6.716205072,LastTimestamp:2026-04-23 17:52:22.928465818 +0000 UTC m=+6.716205072,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:52:23.012986 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:23.012945 2565 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="6.4s"
Apr 23 17:52:23.233287 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:23.233216 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:23.234179 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:23.234163 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:23.234242 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:23.234198 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:23.234242 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:23.234210 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:23.234242 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:23.234237 2565 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:52:23.259392 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:23.259360 2565 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:52:23.773021 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:23.772995 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:23.802688 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:23.802659 2565 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 23 17:52:23.852292 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:23.852264 2565 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 23 17:52:23.929012 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:23.928994 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_7e3c596a27faede1f97b6bb0972592f6/kube-rbac-proxy-crio/1.log"
Apr 23 17:52:23.929383 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:23.929368 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:23.930066 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:23.930048 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:23.930147 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:23.930082 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:23.930147 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:23.930096 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:23.930345 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:23.930330 2565 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-131-177.ec2.internal\" not found" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:52:23.930402 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:23.930387 2565 scope.go:117] "RemoveContainer" containerID="398db5e385589124003ee823dedae0b58c01ee856ed35955943c6d334e4bdc13"
Apr 23 17:52:23.930547 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:23.930529 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_openshift-machine-config-operator(7e3c596a27faede1f97b6bb0972592f6)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal" podUID="7e3c596a27faede1f97b6bb0972592f6"
Apr 23 17:52:23.937893 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:23.937825 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd681ef1f9a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd681ef1f9a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal,UID:7e3c596a27faede1f97b6bb0972592f6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_openshift-machine-config-operator(7e3c596a27faede1f97b6bb0972592f6),Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:22.928465818 +0000 UTC m=+6.716205072,LastTimestamp:2026-04-23 17:52:23.930497231 +0000 UTC m=+7.718236485,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:52:24.773027 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:24.772996 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:25.773916 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:25.773886 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:26.773657 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:26.773624 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:26.840890 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:26.840862 2565 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-131-177.ec2.internal\" not found"
Apr 23 17:52:27.209751 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:27.209667 2565 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 23 17:52:27.221972 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:27.221925 2565 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 23 17:52:27.772511 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:27.772482 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:28.773638 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:28.773604 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:29.419945 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:29.419915 2565 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Apr 23 17:52:29.659672 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:29.659636 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:29.660944 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:29.660924 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:29.661031 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:29.660975 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:29.661031 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:29.660991 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:29.661031 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:29.661025 2565 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:52:29.678834 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:29.678776 2565 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:52:29.773226 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:29.773206 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:30.772189 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:30.772159 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:31.773248 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:31.773221 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:32.368787 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:32.368749 2565 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 23 17:52:32.774083 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:32.774054 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:33.671541 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:33.671507 2565 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 23 17:52:33.773020 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:33.772992 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:34.773120 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:34.773090 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:34.907119 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:34.907096 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:34.907980 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:34.907951 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:34.908050 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:34.907995 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:34.908050 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:34.908008 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:34.908277 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:34.908260 2565 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-131-177.ec2.internal\" not found" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:52:34.908337 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:34.908326 2565 scope.go:117] "RemoveContainer" containerID="398db5e385589124003ee823dedae0b58c01ee856ed35955943c6d334e4bdc13"
Apr 23 17:52:34.918808 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:34.918704 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd60a898fe0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd60a898fe0 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal,UID:7e3c596a27faede1f97b6bb0972592f6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\" already present on machine,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:20.925321184 +0000 UTC m=+4.713060439,LastTimestamp:2026-04-23 17:52:34.910376856 +0000 UTC m=+18.698116113,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:52:35.018809 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:35.018733 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd610f2ca73\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd610f2ca73 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal,UID:7e3c596a27faede1f97b6bb0972592f6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.032880755 +0000 UTC m=+4.820620015,LastTimestamp:2026-04-23 17:52:35.011158458 +0000 UTC m=+18.798897719,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:52:35.027717 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:35.027623 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd61171c735\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd61171c735 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal,UID:7e3c596a27faede1f97b6bb0972592f6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.041202997 +0000 UTC m=+4.828942259,LastTimestamp:2026-04-23 17:52:35.018179535 +0000 UTC m=+18.805918773,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:52:35.130843 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:35.130813 2565 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 23 17:52:35.774762 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:35.774729 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:35.945669 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:35.945644 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_7e3c596a27faede1f97b6bb0972592f6/kube-rbac-proxy-crio/2.log"
Apr 23 17:52:35.946031 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:35.946016 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_7e3c596a27faede1f97b6bb0972592f6/kube-rbac-proxy-crio/1.log"
Apr 23 17:52:35.946313 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:35.946292 2565 generic.go:358] "Generic (PLEG): container finished" podID="7e3c596a27faede1f97b6bb0972592f6" containerID="f89611c592259bcf2587a8cac7fbdd8496259d5085b8870855419fae2c4790dc" exitCode=1
Apr 23 17:52:35.946384 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:35.946326 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal" event={"ID":"7e3c596a27faede1f97b6bb0972592f6","Type":"ContainerDied","Data":"f89611c592259bcf2587a8cac7fbdd8496259d5085b8870855419fae2c4790dc"}
Apr 23 17:52:35.946384 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:35.946356 2565 scope.go:117] "RemoveContainer" containerID="398db5e385589124003ee823dedae0b58c01ee856ed35955943c6d334e4bdc13"
Apr 23 17:52:35.946455 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:35.946438 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:35.947533 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:35.947325 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:35.947533 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:35.947352 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:35.947533 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:35.947366 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:35.947688 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:35.947577 2565 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-131-177.ec2.internal\" not found" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:52:35.947688 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:35.947617 2565 scope.go:117] "RemoveContainer" containerID="f89611c592259bcf2587a8cac7fbdd8496259d5085b8870855419fae2c4790dc"
Apr 23 17:52:35.947775 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:35.947735 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_openshift-machine-config-operator(7e3c596a27faede1f97b6bb0972592f6)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal" podUID="7e3c596a27faede1f97b6bb0972592f6"
Apr 23 17:52:35.956295 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:35.956229 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd681ef1f9a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd681ef1f9a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal,UID:7e3c596a27faede1f97b6bb0972592f6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_openshift-machine-config-operator(7e3c596a27faede1f97b6bb0972592f6),Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:22.928465818 +0000 UTC m=+6.716205072,LastTimestamp:2026-04-23 17:52:35.947707349 +0000 UTC m=+19.735446601,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:52:36.430390 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:36.430362 2565 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Apr 23 17:52:36.679539 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:36.679513 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:36.680520 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:36.680483 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:36.680520 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:36.680515 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:36.680608 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:36.680525 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:36.680608 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:36.680553 2565 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:52:36.695112 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:36.695088 2565 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:52:36.771335 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:36.771311 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:36.841820 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:36.841787 2565 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-131-177.ec2.internal\" not found"
Apr 23 17:52:36.948629 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:36.948578 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_7e3c596a27faede1f97b6bb0972592f6/kube-rbac-proxy-crio/2.log"
Apr 23 17:52:37.254864 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:37.254836 2565 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 23 17:52:37.772351 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:37.772320 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:38.774251 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:38.774220 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:39.773969 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:39.773936 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:40.774328 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:40.774294 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:41.770820 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:41.770789 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:42.774166 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:42.774126 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:43.437654 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:43.437623 2565 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Apr 23 17:52:43.695392 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:43.695318 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:43.697051 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:43.697030 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:43.697157 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:43.697069 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:43.697157 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:43.697084 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:43.697157 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:43.697124 2565 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:52:43.711098 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:43.711073 2565 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:52:43.774108 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:43.774088 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:44.770918 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:44.770887 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:45.773771 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:45.773737 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:46.771701 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:46.771675 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:46.842787 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:46.842753 2565 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-131-177.ec2.internal\" not found"
Apr 23 17:52:47.773024 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:47.772993 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:48.770801 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:48.770772 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:49.271836 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:49.271803 2565 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 23 17:52:49.493586 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:49.493558 2565 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 23 17:52:49.773099 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:49.773069 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:49.906634 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:49.906600 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:49.907582 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:49.907537 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:49.907700 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:49.907599 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:49.907700 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:49.907614 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:49.907874 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:49.907859 2565 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-131-177.ec2.internal\" not found" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:52:49.907933 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:49.907922 2565 scope.go:117] "RemoveContainer" containerID="f89611c592259bcf2587a8cac7fbdd8496259d5085b8870855419fae2c4790dc"
Apr 23 17:52:49.908110 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:49.908091 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_openshift-machine-config-operator(7e3c596a27faede1f97b6bb0972592f6)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal" podUID="7e3c596a27faede1f97b6bb0972592f6"
Apr 23 17:52:49.915818 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:49.915713 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd681ef1f9a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd681ef1f9a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal,UID:7e3c596a27faede1f97b6bb0972592f6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_openshift-machine-config-operator(7e3c596a27faede1f97b6bb0972592f6),Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:22.928465818 +0000 UTC m=+6.716205072,LastTimestamp:2026-04-23 17:52:49.908059513 +0000 UTC m=+33.695798769,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:52:50.447131 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:50.447101 2565 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Apr 23 17:52:50.712037 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:50.711937 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:50.713081 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:50.713061 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:50.713182 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:50.713092 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:50.713182 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:50.713102 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:50.713182 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:50.713127 2565 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:52:50.731724 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:50.731697 2565 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:52:50.773412 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:50.773390 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:51.773292 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:51.773264 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:52.773579 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:52.773546 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:53.773360 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:53.773324 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:54.772578 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:54.772544 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:55.774222 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:55.774187 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:56.772351 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:56.772311 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:56.843924 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:56.843892 2565 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-131-177.ec2.internal\" not found"
Apr 23 17:52:57.456823 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:57.456786 2565 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Apr 23 17:52:57.732847 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:57.732769 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:57.733739 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:57.733719 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:57.733839 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:57.733750 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:57.733839 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:57.733761 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:57.733839 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:57.733791 2565 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:52:57.750740 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:57.750716 2565 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:52:57.772048 ip-10-0-131-177 kubenswrapper[2565]:
I0423 17:52:57.772025 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:58.053399 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:52:58.053368 2565 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 23 17:52:58.773178 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:58.773152 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:59.773770 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:52:59.773737 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:00.774008 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:00.773977 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:01.510693 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:01.510654 2565 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 17:53:01.771017 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:01.770925 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:02.773165 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:02.773132 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:03.770917 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:03.770883 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:04.465843 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:04.465807 2565 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" 
Apr 23 17:53:04.751717 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:04.751688 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:53:04.752835 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:04.752817 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:53:04.752937 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:04.752851 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:53:04.752937 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:04.752861 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:53:04.752937 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:04.752886 2565 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:53:04.769088 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:04.769063 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:04.769183 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:04.769101 2565 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:53:04.906439 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:04.906404 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:53:04.907301 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:04.907283 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:53:04.907382 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:04.907315 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:53:04.907382 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:04.907329 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:53:04.907619 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:04.907602 2565 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-131-177.ec2.internal\" not found" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:53:04.907678 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:04.907668 2565 scope.go:117] "RemoveContainer" containerID="f89611c592259bcf2587a8cac7fbdd8496259d5085b8870855419fae2c4790dc"
Apr 23 17:53:04.910263 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:04.910167 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd60a898fe0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd60a898fe0 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal,UID:7e3c596a27faede1f97b6bb0972592f6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\" already present on machine,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:20.925321184 +0000 UTC m=+4.713060439,LastTimestamp:2026-04-23 17:53:04.908369251 +0000 UTC m=+48.696108504,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:53:05.017253 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:05.017039 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd610f2ca73\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd610f2ca73 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal,UID:7e3c596a27faede1f97b6bb0972592f6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.032880755 +0000 UTC m=+4.820620015,LastTimestamp:2026-04-23 17:53:05.008116301 +0000 UTC m=+48.795855557,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:53:05.025734 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:05.023908 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd61171c735\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd61171c735 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal,UID:7e3c596a27faede1f97b6bb0972592f6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.041202997 +0000 UTC m=+4.828942259,LastTimestamp:2026-04-23 17:53:05.016293695 +0000 UTC m=+48.804032956,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:53:05.776224 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:05.776200 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:05.987643 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:05.987616 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_7e3c596a27faede1f97b6bb0972592f6/kube-rbac-proxy-crio/3.log"
Apr 23 17:53:05.988000 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:05.987986 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_7e3c596a27faede1f97b6bb0972592f6/kube-rbac-proxy-crio/2.log"
Apr 23 17:53:05.988302 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:05.988278 2565 generic.go:358] "Generic (PLEG): container finished" podID="7e3c596a27faede1f97b6bb0972592f6" containerID="a5dbe2adda68ab3becda920d5328850b56f99703e40f127c701ec98ea7ce5f9a" exitCode=1
Apr 23 17:53:05.988345 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:05.988318 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal" event={"ID":"7e3c596a27faede1f97b6bb0972592f6","Type":"ContainerDied","Data":"a5dbe2adda68ab3becda920d5328850b56f99703e40f127c701ec98ea7ce5f9a"}
Apr 23 17:53:05.988377 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:05.988349 2565 scope.go:117] "RemoveContainer" containerID="f89611c592259bcf2587a8cac7fbdd8496259d5085b8870855419fae2c4790dc"
Apr 23 17:53:05.988463 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:05.988448 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:53:05.989506 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:05.989216 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:53:05.989506 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:05.989246 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:53:05.989506 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:05.989257 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:53:05.989506 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:05.989469 2565 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-131-177.ec2.internal\" not found" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:53:05.989506 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:05.989510 2565 scope.go:117] "RemoveContainer" containerID="a5dbe2adda68ab3becda920d5328850b56f99703e40f127c701ec98ea7ce5f9a"
Apr 23 17:53:05.989747 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:05.989629 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_openshift-machine-config-operator(7e3c596a27faede1f97b6bb0972592f6)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal" podUID="7e3c596a27faede1f97b6bb0972592f6"
Apr 23 17:53:05.996591 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:05.996522 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd681ef1f9a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd681ef1f9a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal,UID:7e3c596a27faede1f97b6bb0972592f6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_openshift-machine-config-operator(7e3c596a27faede1f97b6bb0972592f6),Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:22.928465818 +0000 UTC m=+6.716205072,LastTimestamp:2026-04-23 17:53:05.989601497 +0000 UTC m=+49.777340754,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:53:06.773649 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:06.773620 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:06.845026 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:06.844983 2565 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-131-177.ec2.internal\" not found"
Apr 23 17:53:06.990483 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:06.990459 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_7e3c596a27faede1f97b6bb0972592f6/kube-rbac-proxy-crio/3.log"
Apr 23 17:53:07.771978 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:07.771933 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:08.773815 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:08.773788 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:09.773419 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:09.773391 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:10.773473 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:10.773447 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:11.472920 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:11.472892 2565 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Apr 23 17:53:11.769291 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:11.769270 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:53:11.770105 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:11.770087 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:53:11.770185 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:11.770120 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:53:11.770185 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:11.770130 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:53:11.770185 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:11.770155 2565 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:53:11.773796 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:11.773781 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:11.776676 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:11.776658 2565 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:53:12.766451 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:12.766426 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:13.771819 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:13.771796 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:14.773863 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:14.773836 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:15.774817 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:15.774790 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:16.775668 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:16.775643 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:16.845604 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:16.845567 2565 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-131-177.ec2.internal\" not found"
Apr 23 17:53:17.773637 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:17.773606 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:17.906583 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:17.906537 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:53:17.908240 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:17.908213 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:53:17.908349 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:17.908251 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:53:17.908349 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:17.908265 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:53:17.908530 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:17.908515 2565 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-131-177.ec2.internal\" not found" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:53:17.908592 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:17.908581 2565 scope.go:117] "RemoveContainer" containerID="a5dbe2adda68ab3becda920d5328850b56f99703e40f127c701ec98ea7ce5f9a"
Apr 23 17:53:17.908748 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:17.908733 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_openshift-machine-config-operator(7e3c596a27faede1f97b6bb0972592f6)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal" podUID="7e3c596a27faede1f97b6bb0972592f6"
Apr 23 17:53:17.917196 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:17.917112 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd681ef1f9a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd681ef1f9a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal,UID:7e3c596a27faede1f97b6bb0972592f6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_openshift-machine-config-operator(7e3c596a27faede1f97b6bb0972592f6),Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:22.928465818 +0000 UTC m=+6.716205072,LastTimestamp:2026-04-23 17:53:17.908698804 +0000 UTC m=+61.696438059,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:53:18.480583 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:18.480542 2565 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Apr 23 17:53:18.773806 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:18.773781 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:18.776871 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:18.776852 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:53:18.778259 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:18.778243 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:53:18.778330 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:18.778273 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:53:18.778330 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:18.778283 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:53:18.778330 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:18.778312 2565 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:53:18.794177 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:18.794145 2565 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:53:19.771867 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:19.771834 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:20.773737 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:20.773710 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:21.772550 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:21.772522 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:22.773887 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:22.773856 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:23.773434 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:23.773404 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:24.773610 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:24.773585 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:25.489489 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:25.489452 2565 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Apr 23 17:53:25.774463 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:25.774393 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:25.794627 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:25.794597 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:53:25.796550 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:25.796531 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:53:25.796653 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:25.796563 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:53:25.796653 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:25.796573 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:53:25.796653 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:25.796606 2565 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:53:25.813951 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:25.813925 2565 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:53:26.770711 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:26.770679 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:26.846391 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:26.846343 2565 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-131-177.ec2.internal\" not found"
Apr 23 17:53:27.773851 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:27.773736 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:28.771655 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:28.771626 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:28.906892 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:28.906849 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:53:28.907816 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:28.907796 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:53:28.907909 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:28.907826 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:53:28.907909 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:28.907836 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:53:28.908078 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:28.908065 2565 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-131-177.ec2.internal\" not found" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:53:28.908127 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:28.908117 2565 scope.go:117] "RemoveContainer" containerID="a5dbe2adda68ab3becda920d5328850b56f99703e40f127c701ec98ea7ce5f9a"
Apr 23 17:53:28.908254 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:28.908241 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_openshift-machine-config-operator(7e3c596a27faede1f97b6bb0972592f6)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal" podUID="7e3c596a27faede1f97b6bb0972592f6"
Apr 23 17:53:28.916672 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:28.916592 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd681ef1f9a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd681ef1f9a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal,UID:7e3c596a27faede1f97b6bb0972592f6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_openshift-machine-config-operator(7e3c596a27faede1f97b6bb0972592f6),Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:22.928465818 +0000 UTC m=+6.716205072,LastTimestamp:2026-04-23 17:53:28.908211866 +0000 UTC m=+72.695951116,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:53:29.772356 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:29.772321 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:30.202870 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:30.202785 2565 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 23 17:53:30.773637 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:30.773611 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:30.828334 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:30.828305 2565 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 23 17:53:31.542492 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:31.542455 2565 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 23 17:53:31.773877 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:31.773849 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:32.497137 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:32.497101 2565 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Apr 23 17:53:32.774457 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:32.774380 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:32.814886 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:32.814859 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:53:32.815797 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:32.815780 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:53:32.815900 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:32.815818 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:53:32.815900 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:32.815831 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:53:32.815900 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:32.815866 2565 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:53:32.833009 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:32.832979 2565 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:53:33.772510 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:33.772483 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:34.773589 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:34.773553 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:35.772614 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:35.772580 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:36.773186 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:36.773150 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:36.846864 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:36.846818 2565 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-131-177.ec2.internal\" not found"
Apr 23 17:53:37.770873 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:37.770843 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:38.773709 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:38.773682 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:39.505505 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:39.505476 2565 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Apr 23 17:53:39.773718 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:39.773671 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:39.834052 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:39.834021 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:53:39.834938 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:39.834924 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:53:39.835008 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:39.834969 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:53:39.835008 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:39.834980 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:53:39.835008 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:39.835008 2565 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:53:39.851415 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:39.851388 2565 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:53:40.772357 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:40.772330 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:40.906622 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:40.906581 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:53:40.907534 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:40.907516 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:53:40.907613 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:40.907548 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:53:40.907613 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:40.907562 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:53:40.907802 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:40.907790 2565 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-131-177.ec2.internal\" not found" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:53:40.907846 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:40.907838 2565 scope.go:117] "RemoveContainer" containerID="a5dbe2adda68ab3becda920d5328850b56f99703e40f127c701ec98ea7ce5f9a"
Apr 23 17:53:40.907988 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:40.907974 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_openshift-machine-config-operator(7e3c596a27faede1f97b6bb0972592f6)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal" podUID="7e3c596a27faede1f97b6bb0972592f6"
Apr 23 17:53:40.917393 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:40.917317 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd681ef1f9a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd681ef1f9a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal,UID:7e3c596a27faede1f97b6bb0972592f6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_openshift-machine-config-operator(7e3c596a27faede1f97b6bb0972592f6),Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:22.928465818 +0000 UTC m=+6.716205072,LastTimestamp:2026-04-23 17:53:40.907929147 +0000 UTC m=+84.695668398,Count:8,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}"
Apr 23 17:53:41.771658 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:41.771448 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:42.774051 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:42.774018 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:43.773142 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:43.773112 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:44.774411 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:44.774381 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:44.907355 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:44.907321 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:53:44.908469 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:44.908449 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:53:44.908583 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:44.908481 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:53:44.908583 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:44.908497 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:53:44.908743 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:44.908728 2565 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-131-177.ec2.internal\" not found" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:53:45.772576 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:45.772541 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:46.385279 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:46.385253 2565 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 23 17:53:46.513171 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:46.513136 2565 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Apr 23 17:53:46.774225 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:46.774198 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:46.846969 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:46.846935 2565 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-131-177.ec2.internal\" not found"
Apr 23 17:53:46.851847 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:46.851832 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:53:46.852683 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:46.852667 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:53:46.852782 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:46.852696 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:53:46.852782 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:46.852706 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:53:46.852782 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:46.852731 2565 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:53:46.871008 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:46.870986 2565 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:53:47.772226 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:47.772199 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:48.774349 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:48.774321 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:49.772601 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:49.772575 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:50.774730 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:50.774703 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:51.771913 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:51.771887 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:52.779518 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:52.779491 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:53.521291 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:53.521263 2565 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Apr 23 17:53:53.774448 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:53.774395 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:53.872092 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:53.872049 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:53:53.872990 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:53.872972 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:53:53.873064 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:53.873007 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:53:53.873064 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:53.873017 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:53:53.873064 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:53.873044 2565 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:53:53.888431 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:53.888408 2565 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:53:54.775090 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:54.775060 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:55.782175 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:55.782143 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:55.906488 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:55.906467 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:53:55.907320 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:55.907302 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:53:55.907396 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:55.907335 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:53:55.907396 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:55.907345 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:53:55.907563 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:55.907552 2565 
kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-131-177.ec2.internal\" not found" node="ip-10-0-131-177.ec2.internal" Apr 23 17:53:55.907607 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:55.907598 2565 scope.go:117] "RemoveContainer" containerID="a5dbe2adda68ab3becda920d5328850b56f99703e40f127c701ec98ea7ce5f9a" Apr 23 17:53:55.917796 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:55.917706 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd60a898fe0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd60a898fe0 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal,UID:7e3c596a27faede1f97b6bb0972592f6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\" already present on machine,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:20.925321184 +0000 UTC m=+4.713060439,LastTimestamp:2026-04-23 17:53:55.908294359 +0000 UTC m=+99.696033615,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}" Apr 23 17:53:56.025313 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:56.025233 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd610f2ca73\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd610f2ca73 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal,UID:7e3c596a27faede1f97b6bb0972592f6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.032880755 +0000 UTC m=+4.820620015,LastTimestamp:2026-04-23 17:53:56.007056581 +0000 UTC m=+99.794795844,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}" Apr 23 17:53:56.048505 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:56.048431 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd61171c735\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd61171c735 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal,UID:7e3c596a27faede1f97b6bb0972592f6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:21.041202997 +0000 UTC m=+4.828942259,LastTimestamp:2026-04-23 17:53:56.014800553 +0000 UTC m=+99.802539825,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}" Apr 23 17:53:56.061472 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:56.061453 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_7e3c596a27faede1f97b6bb0972592f6/kube-rbac-proxy-crio/4.log" Apr 23 17:53:56.061838 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:56.061825 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_7e3c596a27faede1f97b6bb0972592f6/kube-rbac-proxy-crio/3.log" Apr 23 17:53:56.062109 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:56.062091 2565 generic.go:358] "Generic (PLEG): container finished" podID="7e3c596a27faede1f97b6bb0972592f6" containerID="51fa454c818a950384c7dc0033163202b3f75ec15ab20602270aaab54e7ba3f3" exitCode=1 Apr 23 17:53:56.062173 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:56.062122 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal" event={"ID":"7e3c596a27faede1f97b6bb0972592f6","Type":"ContainerDied","Data":"51fa454c818a950384c7dc0033163202b3f75ec15ab20602270aaab54e7ba3f3"} Apr 23 17:53:56.062173 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:56.062147 2565 scope.go:117] "RemoveContainer" containerID="a5dbe2adda68ab3becda920d5328850b56f99703e40f127c701ec98ea7ce5f9a" Apr 23 17:53:56.062281 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:56.062265 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:56.063168 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:56.063149 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:56.063254 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:56.063180 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:56.063254 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:56.063190 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:56.063441 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:56.063429 2565 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-131-177.ec2.internal\" not found" node="ip-10-0-131-177.ec2.internal" Apr 23 17:53:56.063479 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:56.063475 2565 scope.go:117] "RemoveContainer" containerID="51fa454c818a950384c7dc0033163202b3f75ec15ab20602270aaab54e7ba3f3" Apr 23 17:53:56.063615 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:56.063599 2565 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_openshift-machine-config-operator(7e3c596a27faede1f97b6bb0972592f6)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal" podUID="7e3c596a27faede1f97b6bb0972592f6" Apr 23 17:53:56.080784 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:56.080702 2565 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd681ef1f9a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal.18a90dd681ef1f9a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal,UID:7e3c596a27faede1f97b6bb0972592f6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_openshift-machine-config-operator(7e3c596a27faede1f97b6bb0972592f6),Source:EventSource{Component:kubelet,Host:ip-10-0-131-177.ec2.internal,},FirstTimestamp:2026-04-23 17:52:22.928465818 +0000 UTC m=+6.716205072,LastTimestamp:2026-04-23 17:53:56.06357041 +0000 UTC m=+99.851309669,Count:9,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-131-177.ec2.internal,}" Apr 23 17:53:56.775212 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:56.775184 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:56.847780 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:53:56.847751 2565 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-131-177.ec2.internal\" not found" Apr 23 17:53:57.064949 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:57.064887 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_7e3c596a27faede1f97b6bb0972592f6/kube-rbac-proxy-crio/4.log" Apr 23 17:53:57.774507 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:57.774482 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:58.773070 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:58.773043 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:59.774147 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:53:59.774118 2565 csi_plugin.go:988] Failed to contact 
API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:54:00.528877 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:00.528836 2565 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:54:00.773713 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:00.773682 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:54:00.888823 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:00.888751 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:54:00.889744 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:00.889726 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:54:00.889842 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:00.889759 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:54:00.889842 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:00.889774 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:54:00.889842 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:00.889808 2565 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-131-177.ec2.internal" Apr 23 17:54:00.913260 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:00.913226 2565 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-131-177.ec2.internal" Apr 23 17:54:01.772282 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:01.772247 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:54:02.357167 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:02.357136 2565 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 17:54:02.772188 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:02.772164 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:54:03.774806 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:03.774778 2565 csi_plugin.go:988] Failed to contact API 
server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:54:04.771360 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:04.771329 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:54:05.777572 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:05.777537 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:54:06.773207 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:06.773174 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:54:06.848228 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:06.848202 2565 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-131-177.ec2.internal\" not found" Apr 23 17:54:07.539464 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:07.539425 2565 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:54:07.776327 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:07.776293 2565 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-131-177.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:54:07.913472 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:07.913396 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:54:07.914941 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:07.914922 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:54:07.915067 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:07.914976 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:54:07.915067 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:07.914991 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:54:07.915067 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:07.915022 2565 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-131-177.ec2.internal" Apr 23 17:54:07.937684 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:07.937657 2565 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-131-177.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at 
the cluster scope" node="ip-10-0-131-177.ec2.internal" Apr 23 17:54:08.135937 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:08.135906 2565 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-tfwx8" Apr 23 17:54:08.657659 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:08.657617 2565 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Apr 23 17:54:08.781883 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:08.781855 2565 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-131-177.ec2.internal" not found Apr 23 17:54:08.801061 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:08.801040 2565 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-131-177.ec2.internal" not found Apr 23 17:54:08.861709 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:08.861685 2565 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-131-177.ec2.internal" not found Apr 23 17:54:08.907420 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:08.907400 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:54:08.908686 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:08.908619 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:54:08.908686 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:08.908656 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:54:08.908686 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:08.908669 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:54:08.908893 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:08.908881 2565 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-131-177.ec2.internal\" not found" node="ip-10-0-131-177.ec2.internal" Apr 23 17:54:08.908940 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:08.908931 2565 scope.go:117] "RemoveContainer" containerID="51fa454c818a950384c7dc0033163202b3f75ec15ab20602270aaab54e7ba3f3" Apr 23 17:54:08.909085 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:08.909071 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_openshift-machine-config-operator(7e3c596a27faede1f97b6bb0972592f6)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal" podUID="7e3c596a27faede1f97b6bb0972592f6" Apr 23 17:54:09.137087 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:09.137050 2565 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2028-04-22 17:49:08 +0000 UTC" deadline="2027-12-08 20:18:36.348312433 +0000 UTC" Apr 23 17:54:09.137087 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:09.137081 2565 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="14258h24m27.211234632s" Apr 23 17:54:09.144945 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:09.144926 2565 nodeinfomanager.go:417] Failed to publish CSINode: 
nodes "ip-10-0-131-177.ec2.internal" not found Apr 23 17:54:09.145000 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:09.144949 2565 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-10-0-131-177.ec2.internal" not found Apr 23 17:54:09.195933 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:09.195865 2565 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-131-177.ec2.internal" not found Apr 23 17:54:09.222111 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:09.222093 2565 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-131-177.ec2.internal" not found Apr 23 17:54:09.281828 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:09.281808 2565 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-131-177.ec2.internal" not found Apr 23 17:54:09.552822 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:09.552797 2565 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-131-177.ec2.internal" not found Apr 23 17:54:09.552822 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:09.552822 2565 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-10-0-131-177.ec2.internal" not found Apr 23 17:54:09.808127 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:09.808072 2565 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-131-177.ec2.internal" not found Apr 23 17:54:09.825784 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:09.825764 2565 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-131-177.ec2.internal" not found Apr 23 17:54:09.885718 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:09.885700 2565 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-131-177.ec2.internal" not found Apr 23 17:54:10.164757 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:10.164687 2565 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-131-177.ec2.internal" not found Apr 23 17:54:10.164757 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:10.164712 2565 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-10-0-131-177.ec2.internal" not found Apr 23 17:54:12.292046 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:12.292013 2565 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-131-177.ec2.internal" not found Apr 23 17:54:12.311813 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:12.311792 2565 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-131-177.ec2.internal" not found Apr 23 17:54:12.375609 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:12.375581 2565 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-131-177.ec2.internal" not found Apr 23 17:54:12.632234 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:12.632153 2565 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-131-177.ec2.internal" not found Apr 23 17:54:12.632234 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:12.632178 2565 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-10-0-131-177.ec2.internal" not found Apr 23 17:54:14.545844 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:14.545812 2565 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-10-0-131-177.ec2.internal\" not 
found" node="ip-10-0-131-177.ec2.internal" Apr 23 17:54:14.938584 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:14.938492 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:54:14.939552 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:14.939534 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:54:14.939673 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:14.939565 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:54:14.939673 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:14.939575 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:54:14.939673 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:14.939600 2565 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-131-177.ec2.internal" Apr 23 17:54:14.947832 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:14.947814 2565 kubelet_node_status.go:81] "Successfully registered node" node="ip-10-0-131-177.ec2.internal" Apr 23 17:54:14.947937 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:14.947838 2565 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-131-177.ec2.internal\": node \"ip-10-0-131-177.ec2.internal\" not found" Apr 23 17:54:14.962770 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:14.962747 2565 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-131-177.ec2.internal\" not found" Apr 23 17:54:15.063243 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:15.063223 2565 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-131-177.ec2.internal\" not found" Apr 23 17:54:15.163654 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:15.163630 2565 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-131-177.ec2.internal\" not found" Apr 23 17:54:15.263906 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:15.263878 2565 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-131-177.ec2.internal\" not found" Apr 23 17:54:15.364449 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:15.364419 2565 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-131-177.ec2.internal\" not found" Apr 23 17:54:15.465044 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:15.465005 2565 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-131-177.ec2.internal\" not found" Apr 23 17:54:15.565638 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:15.565584 2565 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-131-177.ec2.internal\" not found" Apr 23 17:54:15.666209 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:15.666170 2565 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-131-177.ec2.internal\" not found" Apr 23 17:54:15.767002 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:15.766977 2565 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-131-177.ec2.internal\" not found" Apr 23 17:54:15.833656 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:15.833589 2565 certificate_manager.go:566] "Rotating 
certificates" logger="kubernetes.io/kubelet-serving" Apr 23 17:54:15.845991 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:15.845968 2565 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Apr 23 17:54:15.867170 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:15.867153 2565 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-131-177.ec2.internal\" not found" Apr 23 17:54:15.968010 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:15.967979 2565 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-131-177.ec2.internal\" not found" Apr 23 17:54:15.977124 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:15.977106 2565 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-6wd6x" Apr 23 17:54:15.986015 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:15.986000 2565 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-6wd6x" Apr 23 17:54:16.068503 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:16.068477 2565 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-131-177.ec2.internal\" not found" Apr 23 17:54:16.168973 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:16.168907 2565 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-131-177.ec2.internal\" not found" Apr 23 17:54:16.269648 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:16.269623 2565 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-131-177.ec2.internal\" not found" Apr 23 17:54:16.370168 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:16.370139 2565 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-131-177.ec2.internal\" not found" Apr 23 17:54:16.470741 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:16.470693 2565 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-131-177.ec2.internal\" not found" Apr 23 17:54:16.571426 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:16.571399 2565 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-131-177.ec2.internal\" not found" Apr 23 17:54:16.671568 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:16.671544 2565 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-131-177.ec2.internal\" not found" Apr 23 17:54:16.772417 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:16.772398 2565 kubelet_node_status.go:509] "Node not becoming ready in time after startup" Apr 23 17:54:16.849299 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:16.849273 2565 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-131-177.ec2.internal\" not found" Apr 23 17:54:16.863549 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:16.863511 2565 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Apr 23 17:54:16.986557 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:16.986524 2565 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-22 17:49:15 +0000 UTC" deadline="2027-11-04 21:51:22.735089056 +0000 UTC"
Apr 23 17:54:16.986557 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:16.986551 2565 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="13443h57m5.748541449s"
Apr 23 17:54:17.987591 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:17.987534 2565 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-22 17:49:15 +0000 UTC" deadline="2028-02-08 20:35:05.938349458 +0000 UTC"
Apr 23 17:54:17.987591 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:17.987580 2565 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="15746h40m47.95077265s"
Apr 23 17:54:19.907275 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:19.907236 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:54:19.909052 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:19.909035 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:54:19.909118 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:19.909065 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:54:19.909118 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:19.909075 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:54:19.909300 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:19.909288 2565 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-131-177.ec2.internal\" not found" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:54:19.909344 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:19.909335 2565 scope.go:117] "RemoveContainer" containerID="51fa454c818a950384c7dc0033163202b3f75ec15ab20602270aaab54e7ba3f3"
Apr 23 17:54:19.909467 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:19.909453 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_openshift-machine-config-operator(7e3c596a27faede1f97b6bb0972592f6)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal" podUID="7e3c596a27faede1f97b6bb0972592f6"
Apr 23 17:54:21.865024 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:21.864983 2565 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Apr 23 17:54:22.156524 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:22.156450 2565 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 23 17:54:24.971606 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:24.971563 2565 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-131-177.ec2.internal\": node \"ip-10-0-131-177.ec2.internal\" not found"
Apr 23 17:54:26.849715 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:26.849672 2565 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-131-177.ec2.internal\" not found"
Apr 23 17:54:26.865850 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:26.865819 2565 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Apr 23 17:54:28.336407 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:28.336377 2565 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 23 17:54:31.867040 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:31.867006 2565 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Apr 23 17:54:32.906760 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:32.906720 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:54:32.908286 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:32.908271 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:54:32.908342 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:32.908302 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:54:32.908342 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:32.908314 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:54:32.908521 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:32.908509 2565 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-131-177.ec2.internal\" not found" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:54:32.908561 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:32.908555 2565 scope.go:117] "RemoveContainer" containerID="51fa454c818a950384c7dc0033163202b3f75ec15ab20602270aaab54e7ba3f3"
Apr 23 17:54:32.908686 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:32.908672 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_openshift-machine-config-operator(7e3c596a27faede1f97b6bb0972592f6)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal" podUID="7e3c596a27faede1f97b6bb0972592f6"
Apr 23 17:54:35.353773 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:35.353738 2565 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-131-177.ec2.internal\": node \"ip-10-0-131-177.ec2.internal\" not found"
Apr 23 17:54:36.849824 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:36.849777 2565 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-131-177.ec2.internal\" not found"
Apr 23 17:54:36.868141 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:36.868110 2565 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Apr 23 17:54:40.553550 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:40.553525 2565 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 23 17:54:41.869441 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:41.869407 2565 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Apr 23 17:54:43.906805 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:43.906763 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:54:43.907862 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:43.907844 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:54:43.907966 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:43.907878 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:54:43.907966 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:43.907894 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:54:43.908195 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:43.908179 2565 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-131-177.ec2.internal\" not found" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:54:43.908255 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:43.908244 2565 scope.go:117] "RemoveContainer" containerID="51fa454c818a950384c7dc0033163202b3f75ec15ab20602270aaab54e7ba3f3"
Apr 23 17:54:43.908395 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:43.908378 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_openshift-machine-config-operator(7e3c596a27faede1f97b6bb0972592f6)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal" podUID="7e3c596a27faede1f97b6bb0972592f6"
Apr 23 17:54:45.712012 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:45.711945 2565 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-131-177.ec2.internal\": node \"ip-10-0-131-177.ec2.internal\" not found"
Apr 23 17:54:46.850647 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:46.850599 2565 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-131-177.ec2.internal\" not found"
Apr 23 17:54:46.870825 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:46.870789 2565 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Apr 23 17:54:51.871430 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:51.871393 2565 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Apr 23 17:54:55.960913 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:55.960869 2565 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-131-177.ec2.internal\": node \"ip-10-0-131-177.ec2.internal\" not found"
Apr 23 17:54:56.850888 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:56.850844 2565 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-131-177.ec2.internal\" not found"
Apr 23 17:54:56.872148 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:56.872122 2565 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Apr 23 17:54:56.906458 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:56.906432 2565 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:54:56.908267 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:56.908250 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:54:56.908353 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:56.908279 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:54:56.908353 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:56.908290 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:54:56.908524 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:56.908511 2565 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-131-177.ec2.internal\" not found" node="ip-10-0-131-177.ec2.internal"
Apr 23 17:54:56.908570 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:54:56.908561 2565 scope.go:117] "RemoveContainer" containerID="51fa454c818a950384c7dc0033163202b3f75ec15ab20602270aaab54e7ba3f3"
Apr 23 17:54:56.908690 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:54:56.908676 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_openshift-machine-config-operator(7e3c596a27faede1f97b6bb0972592f6)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal" podUID="7e3c596a27faede1f97b6bb0972592f6"
Apr 23 17:55:00.047388 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.047307 2565 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 23 17:55:00.070480 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.070448 2565 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-131-177.ec2.internal"
Apr 23 17:55:00.086374 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.086348 2565 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 23 17:55:00.087189 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.087176 2565 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal"
Apr 23 17:55:00.101984 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.101952 2565 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 23 17:55:00.833154 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.833101 2565 apiserver.go:52] "Watching apiserver"
Apr 23 17:55:00.842199 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.842163 2565 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Apr 23 17:55:00.842554 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.842526 2565 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kube-system/konnectivity-agent-brjd8","kube-system/kube-apiserver-proxy-ip-10-0-131-177.ec2.internal","openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-nxfcd","openshift-dns/node-resolver-gj4fr","openshift-image-registry/node-ca-q7b2g","openshift-multus/multus-additional-cni-plugins-gw5sh","openshift-multus/multus-rtfpq","openshift-network-diagnostics/network-check-target-7n2tw","openshift-cluster-node-tuning-operator/tuned-j46jf","openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal","openshift-multus/network-metrics-daemon-4fq2j","openshift-network-operator/iptables-alerter-nhffk","openshift-ovn-kubernetes/ovnkube-node-dghxj"]
Apr 23 17:55:00.846812 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.846788 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-q7b2g"
Apr 23 17:55:00.848882 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.848865 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4fq2j"
Apr 23 17:55:00.849020 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:00.848916 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4fq2j" podUID="ab8d91e1-def0-4ec9-93d5-476175cef3cd"
Apr 23 17:55:00.850859 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.850838 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7n2tw"
Apr 23 17:55:00.850977 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:00.850919 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7n2tw" podUID="133d891d-d4a9-44a1-ac6f-7a963f5616fe"
Apr 23 17:55:00.852607 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.852164 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\""
Apr 23 17:55:00.852607 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.852366 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\""
Apr 23 17:55:00.852607 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.852376 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\""
Apr 23 17:55:00.852607 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.852383 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-4x9xz\""
Apr 23 17:55:00.852763 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.852752 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-rtfpq"
Apr 23 17:55:00.852866 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.852852 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-gj4fr"
Apr 23 17:55:00.855007 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.854993 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gw5sh"
Apr 23 17:55:00.856832 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.856816 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\""
Apr 23 17:55:00.856902 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.856874 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\""
Apr 23 17:55:00.857097 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.857082 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-brjd8"
Apr 23 17:55:00.859191 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.859172 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-nxfcd"
Apr 23 17:55:00.861268 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.861247 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-j46jf"
Apr 23 17:55:00.863466 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.863446 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-nhffk"
Apr 23 17:55:00.865775 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.865745 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:00.865875 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.865787 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\""
Apr 23 17:55:00.866164 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.866149 2565 scope.go:117] "RemoveContainer" containerID="51fa454c818a950384c7dc0033163202b3f75ec15ab20602270aaab54e7ba3f3"
Apr 23 17:55:00.866302 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:00.866281 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_openshift-machine-config-operator(7e3c596a27faede1f97b6bb0972592f6)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal" podUID="7e3c596a27faede1f97b6bb0972592f6"
Apr 23 17:55:00.866395 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.866351 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\""
Apr 23 17:55:00.866601 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.866583 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-vlpfs\""
Apr 23 17:55:00.866678 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.866618 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"konnectivity-agent\""
Apr 23 17:55:00.866678 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.866629 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-dockercfg-rkhpr\""
Apr 23 17:55:00.868088 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.868019 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\""
Apr 23 17:55:00.868088 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.868048 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Apr 23 17:55:00.868245 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.868144 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\""
Apr 23 17:55:00.868245 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.868229 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"tuned-dockercfg-vs95g\""
Apr 23 17:55:00.868353 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.868318 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"openshift-service-ca.crt\""
Apr 23 17:55:00.868353 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.868318 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Apr 23 17:55:00.868445 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.868351 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-metrics-serving-cert\""
Apr 23 17:55:00.868445 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.868368 2565 reflector.go:430] "Caches
populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"kube-root-ca.crt\"" Apr 23 17:55:00.868445 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.868369 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Apr 23 17:55:00.873672 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.873652 2565 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 23 17:55:00.891147 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891116 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4fe60531-7273-4ea9-b33c-0e4c909f6075-tmp-dir\") pod \"node-resolver-gj4fr\" (UID: \"4fe60531-7273-4ea9-b33c-0e4c909f6075\") " pod="openshift-dns/node-resolver-gj4fr" Apr 23 17:55:00.891245 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891159 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e07a9b3c-f646-4dfc-bbb1-523478399c03-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gw5sh\" (UID: \"e07a9b3c-f646-4dfc-bbb1-523478399c03\") " pod="openshift-multus/multus-additional-cni-plugins-gw5sh" Apr 23 17:55:00.891245 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891185 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/e07a9b3c-f646-4dfc-bbb1-523478399c03-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-gw5sh\" (UID: \"e07a9b3c-f646-4dfc-bbb1-523478399c03\") " pod="openshift-multus/multus-additional-cni-plugins-gw5sh" Apr 23 17:55:00.891245 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891212 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-host-run-netns\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" Apr 23 17:55:00.891245 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891236 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-ovn-node-metrics-cert\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" Apr 23 17:55:00.891415 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891260 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-host-kubelet\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" Apr 23 17:55:00.891415 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891304 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-host-var-lib-cni-bin\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq" Apr 23 17:55:00.891415 
ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891326 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/31ce5a0e-448c-4e25-9118-102049e60bf2-etc-sysctl-d\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf" Apr 23 17:55:00.891415 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891344 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/31ce5a0e-448c-4e25-9118-102049e60bf2-var-lib-kubelet\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf" Apr 23 17:55:00.891415 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891361 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58bsm\" (UniqueName: \"kubernetes.io/projected/31ce5a0e-448c-4e25-9118-102049e60bf2-kube-api-access-58bsm\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf" Apr 23 17:55:00.891415 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891379 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/ae6fbd87-39e0-4719-affc-7aa3893a0f78-device-dir\") pod \"aws-ebs-csi-driver-node-nxfcd\" (UID: \"ae6fbd87-39e0-4719-affc-7aa3893a0f78\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-nxfcd" Apr 23 17:55:00.891415 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891394 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e07a9b3c-f646-4dfc-bbb1-523478399c03-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gw5sh\" (UID: \"e07a9b3c-f646-4dfc-bbb1-523478399c03\") " pod="openshift-multus/multus-additional-cni-plugins-gw5sh" Apr 23 17:55:00.891415 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891409 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-hostroot\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq" Apr 23 17:55:00.891788 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891429 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-host-run-multus-certs\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq" Apr 23 17:55:00.891788 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891445 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-node-log\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" Apr 23 17:55:00.891788 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891480 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" 
(UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-cnibin\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq" Apr 23 17:55:00.891788 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891512 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-host-run-netns\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq" Apr 23 17:55:00.891788 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891540 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/31ce5a0e-448c-4e25-9118-102049e60bf2-etc-sysconfig\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf" Apr 23 17:55:00.891788 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891563 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/31ce5a0e-448c-4e25-9118-102049e60bf2-run\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf" Apr 23 17:55:00.891788 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891589 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2cwt\" (UniqueName: \"kubernetes.io/projected/133d891d-d4a9-44a1-ac6f-7a963f5616fe-kube-api-access-p2cwt\") pod \"network-check-target-7n2tw\" (UID: \"133d891d-d4a9-44a1-ac6f-7a963f5616fe\") " pod="openshift-network-diagnostics/network-check-target-7n2tw" Apr 23 17:55:00.891788 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891621 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g99ht\" (UniqueName: \"kubernetes.io/projected/e07a9b3c-f646-4dfc-bbb1-523478399c03-kube-api-access-g99ht\") pod \"multus-additional-cni-plugins-gw5sh\" (UID: \"e07a9b3c-f646-4dfc-bbb1-523478399c03\") " pod="openshift-multus/multus-additional-cni-plugins-gw5sh" Apr 23 17:55:00.891788 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891647 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/31ce5a0e-448c-4e25-9118-102049e60bf2-etc-modprobe-d\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf" Apr 23 17:55:00.891788 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891668 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/31ce5a0e-448c-4e25-9118-102049e60bf2-sys\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf" Apr 23 17:55:00.891788 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891693 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4khpt\" (UniqueName: \"kubernetes.io/projected/0174e24b-fc3a-4f0a-8388-9b31e5a92647-kube-api-access-4khpt\") pod \"iptables-alerter-nhffk\" (UID: \"0174e24b-fc3a-4f0a-8388-9b31e5a92647\") " 
pod="openshift-network-operator/iptables-alerter-nhffk" Apr 23 17:55:00.891788 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891719 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-host-cni-bin\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" Apr 23 17:55:00.891788 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891742 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcxbg\" (UniqueName: \"kubernetes.io/projected/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-kube-api-access-jcxbg\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" Apr 23 17:55:00.891788 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891769 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-system-cni-dir\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq" Apr 23 17:55:00.891788 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891784 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/31ce5a0e-448c-4e25-9118-102049e60bf2-etc-sysctl-conf\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf" Apr 23 17:55:00.892385 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891797 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/31ce5a0e-448c-4e25-9118-102049e60bf2-etc-tuned\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf" Apr 23 17:55:00.892385 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891813 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bv8l\" (UniqueName: \"kubernetes.io/projected/ae6fbd87-39e0-4719-affc-7aa3893a0f78-kube-api-access-2bv8l\") pod \"aws-ebs-csi-driver-node-nxfcd\" (UID: \"ae6fbd87-39e0-4719-affc-7aa3893a0f78\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-nxfcd" Apr 23 17:55:00.892385 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891827 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-run-ovn\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" Apr 23 17:55:00.892385 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891858 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-log-socket\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" Apr 23 17:55:00.892385 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891887 2565 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/31ce5a0e-448c-4e25-9118-102049e60bf2-etc-systemd\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf" Apr 23 17:55:00.892385 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891918 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab8d91e1-def0-4ec9-93d5-476175cef3cd-metrics-certs\") pod \"network-metrics-daemon-4fq2j\" (UID: \"ab8d91e1-def0-4ec9-93d5-476175cef3cd\") " pod="openshift-multus/network-metrics-daemon-4fq2j" Apr 23 17:55:00.892385 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891951 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ae6fbd87-39e0-4719-affc-7aa3893a0f78-socket-dir\") pod \"aws-ebs-csi-driver-node-nxfcd\" (UID: \"ae6fbd87-39e0-4719-affc-7aa3893a0f78\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-nxfcd" Apr 23 17:55:00.892385 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.891996 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-ovnkube-config\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" Apr 23 17:55:00.892385 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892018 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/31ce5a0e-448c-4e25-9118-102049e60bf2-etc-kubernetes\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf" Apr 23 17:55:00.892385 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892043 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e07a9b3c-f646-4dfc-bbb1-523478399c03-cnibin\") pod \"multus-additional-cni-plugins-gw5sh\" (UID: \"e07a9b3c-f646-4dfc-bbb1-523478399c03\") " pod="openshift-multus/multus-additional-cni-plugins-gw5sh" Apr 23 17:55:00.892385 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892067 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/2a5d03ae-4e40-4fa1-a3a7-8974e5fbabbb-konnectivity-ca\") pod \"konnectivity-agent-brjd8\" (UID: \"2a5d03ae-4e40-4fa1-a3a7-8974e5fbabbb\") " pod="kube-system/konnectivity-agent-brjd8" Apr 23 17:55:00.892385 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892092 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-etc-openvswitch\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" Apr 23 17:55:00.892385 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892117 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/31e0ba61-35be-422e-9b2b-b9c49a736615-cni-binary-copy\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq" Apr 23 17:55:00.892385 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892141 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-host-var-lib-kubelet\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq" Apr 23 17:55:00.892385 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892167 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqrdf\" (UniqueName: \"kubernetes.io/projected/4fe60531-7273-4ea9-b33c-0e4c909f6075-kube-api-access-dqrdf\") pod \"node-resolver-gj4fr\" (UID: \"4fe60531-7273-4ea9-b33c-0e4c909f6075\") " pod="openshift-dns/node-resolver-gj4fr" Apr 23 17:55:00.892385 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892192 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e07a9b3c-f646-4dfc-bbb1-523478399c03-cni-binary-copy\") pod \"multus-additional-cni-plugins-gw5sh\" (UID: \"e07a9b3c-f646-4dfc-bbb1-523478399c03\") " pod="openshift-multus/multus-additional-cni-plugins-gw5sh" Apr 23 17:55:00.893003 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892216 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-host-slash\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" Apr 23 17:55:00.893003 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892239 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-env-overrides\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" Apr 23 17:55:00.893003 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892280 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-multus-cni-dir\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq" Apr 23 17:55:00.893003 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892309 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/31ce5a0e-448c-4e25-9118-102049e60bf2-tmp\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf" Apr 23 17:55:00.893003 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892336 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/0174e24b-fc3a-4f0a-8388-9b31e5a92647-iptables-alerter-script\") pod \"iptables-alerter-nhffk\" (UID: \"0174e24b-fc3a-4f0a-8388-9b31e5a92647\") " 
pod="openshift-network-operator/iptables-alerter-nhffk" Apr 23 17:55:00.893003 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892363 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae6fbd87-39e0-4719-affc-7aa3893a0f78-kubelet-dir\") pod \"aws-ebs-csi-driver-node-nxfcd\" (UID: \"ae6fbd87-39e0-4719-affc-7aa3893a0f78\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-nxfcd" Apr 23 17:55:00.893003 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892386 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-host-run-ovn-kubernetes\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" Apr 23 17:55:00.893003 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892411 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-os-release\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq" Apr 23 17:55:00.893003 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892440 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-host-var-lib-cni-multus\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq" Apr 23 17:55:00.893003 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892464 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31ce5a0e-448c-4e25-9118-102049e60bf2-lib-modules\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf" Apr 23 17:55:00.893003 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892481 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/31ce5a0e-448c-4e25-9118-102049e60bf2-host\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf" Apr 23 17:55:00.893003 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892517 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppjf9\" (UniqueName: \"kubernetes.io/projected/ab8d91e1-def0-4ec9-93d5-476175cef3cd-kube-api-access-ppjf9\") pod \"network-metrics-daemon-4fq2j\" (UID: \"ab8d91e1-def0-4ec9-93d5-476175cef3cd\") " pod="openshift-multus/network-metrics-daemon-4fq2j" Apr 23 17:55:00.893003 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892550 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e07a9b3c-f646-4dfc-bbb1-523478399c03-system-cni-dir\") pod \"multus-additional-cni-plugins-gw5sh\" (UID: \"e07a9b3c-f646-4dfc-bbb1-523478399c03\") " pod="openshift-multus/multus-additional-cni-plugins-gw5sh" Apr 23 17:55:00.893003 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892581 2565 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/31e0ba61-35be-422e-9b2b-b9c49a736615-multus-daemon-config\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq" Apr 23 17:55:00.893003 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892616 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdzrx\" (UniqueName: \"kubernetes.io/projected/31e0ba61-35be-422e-9b2b-b9c49a736615-kube-api-access-jdzrx\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq" Apr 23 17:55:00.893003 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892644 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0174e24b-fc3a-4f0a-8388-9b31e5a92647-host-slash\") pod \"iptables-alerter-nhffk\" (UID: \"0174e24b-fc3a-4f0a-8388-9b31e5a92647\") " pod="openshift-network-operator/iptables-alerter-nhffk" Apr 23 17:55:00.893629 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892667 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ff40def5-69f9-4dad-aa0e-540f9bc631f0-host\") pod \"node-ca-q7b2g\" (UID: \"ff40def5-69f9-4dad-aa0e-540f9bc631f0\") " pod="openshift-image-registry/node-ca-q7b2g" Apr 23 17:55:00.893629 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892703 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ae6fbd87-39e0-4719-affc-7aa3893a0f78-registration-dir\") pod \"aws-ebs-csi-driver-node-nxfcd\" (UID: \"ae6fbd87-39e0-4719-affc-7aa3893a0f78\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-nxfcd" Apr 23 17:55:00.893629 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892730 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-etc-kubernetes\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq" Apr 23 17:55:00.893629 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892769 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxvlf\" (UniqueName: \"kubernetes.io/projected/ff40def5-69f9-4dad-aa0e-540f9bc631f0-kube-api-access-dxvlf\") pod \"node-ca-q7b2g\" (UID: \"ff40def5-69f9-4dad-aa0e-540f9bc631f0\") " pod="openshift-image-registry/node-ca-q7b2g" Apr 23 17:55:00.893629 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892799 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/ae6fbd87-39e0-4719-affc-7aa3893a0f78-etc-selinux\") pod \"aws-ebs-csi-driver-node-nxfcd\" (UID: \"ae6fbd87-39e0-4719-affc-7aa3893a0f78\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-nxfcd" Apr 23 17:55:00.893629 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892831 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-var-lib-openvswitch\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" Apr 23 17:55:00.893629 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892857 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-run-openvswitch\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" Apr 23 17:55:00.893629 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892883 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-host-run-k8s-cni-cncf-io\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq" Apr 23 17:55:00.893629 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892908 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4fe60531-7273-4ea9-b33c-0e4c909f6075-hosts-file\") pod \"node-resolver-gj4fr\" (UID: \"4fe60531-7273-4ea9-b33c-0e4c909f6075\") " pod="openshift-dns/node-resolver-gj4fr" Apr 23 17:55:00.893629 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892932 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ff40def5-69f9-4dad-aa0e-540f9bc631f0-serviceca\") pod \"node-ca-q7b2g\" (UID: \"ff40def5-69f9-4dad-aa0e-540f9bc631f0\") " pod="openshift-image-registry/node-ca-q7b2g" Apr 23 17:55:00.893629 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.892977 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e07a9b3c-f646-4dfc-bbb1-523478399c03-os-release\") pod \"multus-additional-cni-plugins-gw5sh\" (UID: \"e07a9b3c-f646-4dfc-bbb1-523478399c03\") " pod="openshift-multus/multus-additional-cni-plugins-gw5sh" Apr 23 17:55:00.893629 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.893002 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/2a5d03ae-4e40-4fa1-a3a7-8974e5fbabbb-agent-certs\") pod \"konnectivity-agent-brjd8\" (UID: \"2a5d03ae-4e40-4fa1-a3a7-8974e5fbabbb\") " pod="kube-system/konnectivity-agent-brjd8" Apr 23 17:55:00.893629 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.893026 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/ae6fbd87-39e0-4719-affc-7aa3893a0f78-sys-fs\") pod \"aws-ebs-csi-driver-node-nxfcd\" (UID: \"ae6fbd87-39e0-4719-affc-7aa3893a0f78\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-nxfcd" Apr 23 17:55:00.893629 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.893049 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-run-systemd\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" Apr 23 17:55:00.893629 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.893066 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-host-cni-netd\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" Apr 23 17:55:00.893629 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.893080 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-multus-socket-dir-parent\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq" Apr 23 17:55:00.894121 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.893101 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-multus-conf-dir\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq" Apr 23 17:55:00.894121 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.893122 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-systemd-units\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" Apr 23 17:55:00.894121 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.893152 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" Apr 23 17:55:00.894121 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.893179 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-ovnkube-script-lib\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" Apr 23 17:55:00.894606 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.894447 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-sa-dockercfg-2224b\"" Apr 23 17:55:00.894606 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.894475 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-97zm5\"" Apr 23 17:55:00.894606 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.894455 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Apr 23 17:55:00.894606 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.894510 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"openshift-service-ca.crt\"" Apr 23 17:55:00.894804 
ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.894729 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-9wk7z\"" Apr 23 17:55:00.894850 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.894824 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"kube-root-ca.crt\"" Apr 23 17:55:00.895218 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.895194 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Apr 23 17:55:00.895320 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.895266 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kube-system\"/\"konnectivity-ca-bundle\"" Apr 23 17:55:00.895635 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.895619 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Apr 23 17:55:00.895757 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.895691 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Apr 23 17:55:00.895757 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.895747 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Apr 23 17:55:00.895902 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.895786 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Apr 23 17:55:00.895902 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.895886 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"default-dockercfg-wkbd6\"" Apr 23 17:55:00.896509 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.896476 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Apr 23 17:55:00.896579 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.896529 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-72dm2\"" Apr 23 17:55:00.904269 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.904239 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Apr 23 17:55:00.993993 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.993938 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/31ce5a0e-448c-4e25-9118-102049e60bf2-etc-kubernetes\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf" Apr 23 17:55:00.993993 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.993990 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e07a9b3c-f646-4dfc-bbb1-523478399c03-cnibin\") pod \"multus-additional-cni-plugins-gw5sh\" (UID: \"e07a9b3c-f646-4dfc-bbb1-523478399c03\") " pod="openshift-multus/multus-additional-cni-plugins-gw5sh" Apr 23 17:55:00.994198 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994015 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/2a5d03ae-4e40-4fa1-a3a7-8974e5fbabbb-konnectivity-ca\") pod \"konnectivity-agent-brjd8\" (UID: \"2a5d03ae-4e40-4fa1-a3a7-8974e5fbabbb\") " pod="kube-system/konnectivity-agent-brjd8" Apr 23 17:55:00.994198 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994040 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-etc-openvswitch\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" Apr 23 17:55:00.994198 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994064 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/31e0ba61-35be-422e-9b2b-b9c49a736615-cni-binary-copy\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq" Apr 23 17:55:00.994198 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994069 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e07a9b3c-f646-4dfc-bbb1-523478399c03-cnibin\") pod \"multus-additional-cni-plugins-gw5sh\" (UID: \"e07a9b3c-f646-4dfc-bbb1-523478399c03\") " pod="openshift-multus/multus-additional-cni-plugins-gw5sh" Apr 23 17:55:00.994198 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994088 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-host-var-lib-kubelet\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq" Apr 23 17:55:00.994198 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994103 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-etc-openvswitch\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" Apr 23 17:55:00.994198 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994122 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dqrdf\" (UniqueName: \"kubernetes.io/projected/4fe60531-7273-4ea9-b33c-0e4c909f6075-kube-api-access-dqrdf\") pod \"node-resolver-gj4fr\" (UID: \"4fe60531-7273-4ea9-b33c-0e4c909f6075\") " pod="openshift-dns/node-resolver-gj4fr" Apr 23 17:55:00.994198 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994128 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-host-var-lib-kubelet\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq" Apr 23 17:55:00.994198 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994073 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/31ce5a0e-448c-4e25-9118-102049e60bf2-etc-kubernetes\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf" Apr 23 17:55:00.994198 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994152 2565 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e07a9b3c-f646-4dfc-bbb1-523478399c03-cni-binary-copy\") pod \"multus-additional-cni-plugins-gw5sh\" (UID: \"e07a9b3c-f646-4dfc-bbb1-523478399c03\") " pod="openshift-multus/multus-additional-cni-plugins-gw5sh" Apr 23 17:55:00.994198 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994174 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-host-slash\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" Apr 23 17:55:00.994198 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994189 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-env-overrides\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" Apr 23 17:55:00.994198 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994204 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-multus-cni-dir\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq" Apr 23 17:55:00.994881 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994229 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/31ce5a0e-448c-4e25-9118-102049e60bf2-tmp\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf" Apr 23 17:55:00.994881 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994251 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-host-slash\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" Apr 23 17:55:00.994881 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994258 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/0174e24b-fc3a-4f0a-8388-9b31e5a92647-iptables-alerter-script\") pod \"iptables-alerter-nhffk\" (UID: \"0174e24b-fc3a-4f0a-8388-9b31e5a92647\") " pod="openshift-network-operator/iptables-alerter-nhffk" Apr 23 17:55:00.994881 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994319 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-multus-cni-dir\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq" Apr 23 17:55:00.994881 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994369 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae6fbd87-39e0-4719-affc-7aa3893a0f78-kubelet-dir\") pod \"aws-ebs-csi-driver-node-nxfcd\" (UID: \"ae6fbd87-39e0-4719-affc-7aa3893a0f78\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-nxfcd" Apr 23 17:55:00.994881 ip-10-0-131-177 kubenswrapper[2565]: 
I0423 17:55:00.994400 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-host-run-ovn-kubernetes\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" Apr 23 17:55:00.994881 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994424 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-os-release\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq" Apr 23 17:55:00.994881 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994478 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae6fbd87-39e0-4719-affc-7aa3893a0f78-kubelet-dir\") pod \"aws-ebs-csi-driver-node-nxfcd\" (UID: \"ae6fbd87-39e0-4719-affc-7aa3893a0f78\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-nxfcd" Apr 23 17:55:00.994881 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994506 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-os-release\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq" Apr 23 17:55:00.994881 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994529 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-host-var-lib-cni-multus\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq" Apr 23 17:55:00.994881 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994552 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-host-run-ovn-kubernetes\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" Apr 23 17:55:00.994881 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994557 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31ce5a0e-448c-4e25-9118-102049e60bf2-lib-modules\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf" Apr 23 17:55:00.994881 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994582 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/31ce5a0e-448c-4e25-9118-102049e60bf2-host\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf" Apr 23 17:55:00.994881 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994588 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-host-var-lib-cni-multus\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq" Apr 23 17:55:00.994881 ip-10-0-131-177 
kubenswrapper[2565]: I0423 17:55:00.994700 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31ce5a0e-448c-4e25-9118-102049e60bf2-lib-modules\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf"
Apr 23 17:55:00.994881 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994712 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/31ce5a0e-448c-4e25-9118-102049e60bf2-host\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf"
Apr 23 17:55:00.994881 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994732 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ppjf9\" (UniqueName: \"kubernetes.io/projected/ab8d91e1-def0-4ec9-93d5-476175cef3cd-kube-api-access-ppjf9\") pod \"network-metrics-daemon-4fq2j\" (UID: \"ab8d91e1-def0-4ec9-93d5-476175cef3cd\") " pod="openshift-multus/network-metrics-daemon-4fq2j"
Apr 23 17:55:00.994881 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994760 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e07a9b3c-f646-4dfc-bbb1-523478399c03-system-cni-dir\") pod \"multus-additional-cni-plugins-gw5sh\" (UID: \"e07a9b3c-f646-4dfc-bbb1-523478399c03\") " pod="openshift-multus/multus-additional-cni-plugins-gw5sh"
Apr 23 17:55:00.995726 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994787 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/31e0ba61-35be-422e-9b2b-b9c49a736615-multus-daemon-config\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq"
Apr 23 17:55:00.995726 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994796 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/31e0ba61-35be-422e-9b2b-b9c49a736615-cni-binary-copy\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq"
Apr 23 17:55:00.995726 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994811 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jdzrx\" (UniqueName: \"kubernetes.io/projected/31e0ba61-35be-422e-9b2b-b9c49a736615-kube-api-access-jdzrx\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq"
Apr 23 17:55:00.995726 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994812 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/0174e24b-fc3a-4f0a-8388-9b31e5a92647-iptables-alerter-script\") pod \"iptables-alerter-nhffk\" (UID: \"0174e24b-fc3a-4f0a-8388-9b31e5a92647\") " pod="openshift-network-operator/iptables-alerter-nhffk"
Apr 23 17:55:00.995726 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994819 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e07a9b3c-f646-4dfc-bbb1-523478399c03-cni-binary-copy\") pod \"multus-additional-cni-plugins-gw5sh\" (UID: \"e07a9b3c-f646-4dfc-bbb1-523478399c03\") " pod="openshift-multus/multus-additional-cni-plugins-gw5sh"
Apr 23 17:55:00.995726 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994833 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0174e24b-fc3a-4f0a-8388-9b31e5a92647-host-slash\") pod \"iptables-alerter-nhffk\" (UID: \"0174e24b-fc3a-4f0a-8388-9b31e5a92647\") " pod="openshift-network-operator/iptables-alerter-nhffk"
Apr 23 17:55:00.995726 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994873 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0174e24b-fc3a-4f0a-8388-9b31e5a92647-host-slash\") pod \"iptables-alerter-nhffk\" (UID: \"0174e24b-fc3a-4f0a-8388-9b31e5a92647\") " pod="openshift-network-operator/iptables-alerter-nhffk"
Apr 23 17:55:00.995726 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994870 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e07a9b3c-f646-4dfc-bbb1-523478399c03-system-cni-dir\") pod \"multus-additional-cni-plugins-gw5sh\" (UID: \"e07a9b3c-f646-4dfc-bbb1-523478399c03\") " pod="openshift-multus/multus-additional-cni-plugins-gw5sh"
Apr 23 17:55:00.995726 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994893 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ff40def5-69f9-4dad-aa0e-540f9bc631f0-host\") pod \"node-ca-q7b2g\" (UID: \"ff40def5-69f9-4dad-aa0e-540f9bc631f0\") " pod="openshift-image-registry/node-ca-q7b2g"
Apr 23 17:55:00.995726 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994919 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ae6fbd87-39e0-4719-affc-7aa3893a0f78-registration-dir\") pod \"aws-ebs-csi-driver-node-nxfcd\" (UID: \"ae6fbd87-39e0-4719-affc-7aa3893a0f78\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-nxfcd"
Apr 23 17:55:00.995726 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994935 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-etc-kubernetes\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq"
Apr 23 17:55:00.995726 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994950 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dxvlf\" (UniqueName: \"kubernetes.io/projected/ff40def5-69f9-4dad-aa0e-540f9bc631f0-kube-api-access-dxvlf\") pod \"node-ca-q7b2g\" (UID: \"ff40def5-69f9-4dad-aa0e-540f9bc631f0\") " pod="openshift-image-registry/node-ca-q7b2g"
Apr 23 17:55:00.995726 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.994999 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/ae6fbd87-39e0-4719-affc-7aa3893a0f78-etc-selinux\") pod \"aws-ebs-csi-driver-node-nxfcd\" (UID: \"ae6fbd87-39e0-4719-affc-7aa3893a0f78\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-nxfcd"
Apr 23 17:55:00.995726 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995029 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-var-lib-openvswitch\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:00.995726 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995030 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-etc-kubernetes\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq"
Apr 23 17:55:00.995726 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995038 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-env-overrides\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:00.995726 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995048 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-run-openvswitch\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:00.996518 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995070 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-host-run-k8s-cni-cncf-io\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq"
Apr 23 17:55:00.996518 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995084 2565 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Apr 23 17:55:00.996518 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995095 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ae6fbd87-39e0-4719-affc-7aa3893a0f78-registration-dir\") pod \"aws-ebs-csi-driver-node-nxfcd\" (UID: \"ae6fbd87-39e0-4719-affc-7aa3893a0f78\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-nxfcd"
Apr 23 17:55:00.996518 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995101 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4fe60531-7273-4ea9-b33c-0e4c909f6075-hosts-file\") pod \"node-resolver-gj4fr\" (UID: \"4fe60531-7273-4ea9-b33c-0e4c909f6075\") " pod="openshift-dns/node-resolver-gj4fr"
Apr 23 17:55:00.996518 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995139 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-run-openvswitch\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:00.996518 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995115 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ff40def5-69f9-4dad-aa0e-540f9bc631f0-host\") pod \"node-ca-q7b2g\" (UID: \"ff40def5-69f9-4dad-aa0e-540f9bc631f0\") " pod="openshift-image-registry/node-ca-q7b2g"
Apr 23 17:55:00.996518 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995154 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ff40def5-69f9-4dad-aa0e-540f9bc631f0-serviceca\") pod \"node-ca-q7b2g\" (UID: \"ff40def5-69f9-4dad-aa0e-540f9bc631f0\") " pod="openshift-image-registry/node-ca-q7b2g"
Apr 23 17:55:00.996518 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995114 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/ae6fbd87-39e0-4719-affc-7aa3893a0f78-etc-selinux\") pod \"aws-ebs-csi-driver-node-nxfcd\" (UID: \"ae6fbd87-39e0-4719-affc-7aa3893a0f78\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-nxfcd"
Apr 23 17:55:00.996518 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995160 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-var-lib-openvswitch\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:00.996518 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995155 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-host-run-k8s-cni-cncf-io\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq"
Apr 23 17:55:00.996518 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995180 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e07a9b3c-f646-4dfc-bbb1-523478399c03-os-release\") pod \"multus-additional-cni-plugins-gw5sh\" (UID: \"e07a9b3c-f646-4dfc-bbb1-523478399c03\") " pod="openshift-multus/multus-additional-cni-plugins-gw5sh"
Apr 23 17:55:00.996518 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995192 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4fe60531-7273-4ea9-b33c-0e4c909f6075-hosts-file\") pod \"node-resolver-gj4fr\" (UID: \"4fe60531-7273-4ea9-b33c-0e4c909f6075\") " pod="openshift-dns/node-resolver-gj4fr"
Apr 23 17:55:00.996518 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995205 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/2a5d03ae-4e40-4fa1-a3a7-8974e5fbabbb-agent-certs\") pod \"konnectivity-agent-brjd8\" (UID: \"2a5d03ae-4e40-4fa1-a3a7-8974e5fbabbb\") " pod="kube-system/konnectivity-agent-brjd8"
Apr 23 17:55:00.996518 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995232 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/ae6fbd87-39e0-4719-affc-7aa3893a0f78-sys-fs\") pod \"aws-ebs-csi-driver-node-nxfcd\" (UID: \"ae6fbd87-39e0-4719-affc-7aa3893a0f78\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-nxfcd"
Apr 23 17:55:00.996518 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995254 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-run-systemd\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:00.996518 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995276 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-host-cni-netd\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:00.996518 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995292 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e07a9b3c-f646-4dfc-bbb1-523478399c03-os-release\") pod \"multus-additional-cni-plugins-gw5sh\" (UID: \"e07a9b3c-f646-4dfc-bbb1-523478399c03\") " pod="openshift-multus/multus-additional-cni-plugins-gw5sh"
Apr 23 17:55:00.996518 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995296 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/ae6fbd87-39e0-4719-affc-7aa3893a0f78-sys-fs\") pod \"aws-ebs-csi-driver-node-nxfcd\" (UID: \"ae6fbd87-39e0-4719-affc-7aa3893a0f78\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-nxfcd"
Apr 23 17:55:00.997298 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995307 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-multus-socket-dir-parent\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq"
Apr 23 17:55:00.997298 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995321 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/31e0ba61-35be-422e-9b2b-b9c49a736615-multus-daemon-config\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq"
Apr 23 17:55:00.997298 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995331 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-run-systemd\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:00.997298 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995331 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-multus-conf-dir\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq"
Apr 23 17:55:00.997298 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995337 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-host-cni-netd\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:00.997298 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995363 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-systemd-units\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:00.997298 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995377 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-multus-socket-dir-parent\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq"
Apr 23 17:55:00.997298 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995389 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:00.997298 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995416 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-ovnkube-script-lib\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:00.997298 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995442 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4fe60531-7273-4ea9-b33c-0e4c909f6075-tmp-dir\") pod \"node-resolver-gj4fr\" (UID: \"4fe60531-7273-4ea9-b33c-0e4c909f6075\") " pod="openshift-dns/node-resolver-gj4fr"
Apr 23 17:55:00.997298 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995448 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-systemd-units\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:00.997298 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995417 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-multus-conf-dir\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq"
Apr 23 17:55:00.997298 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995467 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e07a9b3c-f646-4dfc-bbb1-523478399c03-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gw5sh\" (UID: \"e07a9b3c-f646-4dfc-bbb1-523478399c03\") " pod="openshift-multus/multus-additional-cni-plugins-gw5sh"
Apr 23 17:55:00.997298 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995469 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/2a5d03ae-4e40-4fa1-a3a7-8974e5fbabbb-konnectivity-ca\") pod \"konnectivity-agent-brjd8\" (UID: \"2a5d03ae-4e40-4fa1-a3a7-8974e5fbabbb\") " pod="kube-system/konnectivity-agent-brjd8"
Apr 23 17:55:00.997298 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995492 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/e07a9b3c-f646-4dfc-bbb1-523478399c03-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-gw5sh\" (UID: \"e07a9b3c-f646-4dfc-bbb1-523478399c03\") " pod="openshift-multus/multus-additional-cni-plugins-gw5sh"
Apr 23 17:55:00.997298 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995518 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-host-run-netns\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:00.997298 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995538 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ff40def5-69f9-4dad-aa0e-540f9bc631f0-serviceca\") pod \"node-ca-q7b2g\" (UID: \"ff40def5-69f9-4dad-aa0e-540f9bc631f0\") " pod="openshift-image-registry/node-ca-q7b2g"
Apr 23 17:55:00.998133 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995546 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-ovn-node-metrics-cert\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:00.998133 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995571 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-host-kubelet\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:00.998133 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995595 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-host-var-lib-cni-bin\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq"
Apr 23 17:55:00.998133 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995616 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/31ce5a0e-448c-4e25-9118-102049e60bf2-etc-sysctl-d\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf"
Apr 23 17:55:00.998133 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995641 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/31ce5a0e-448c-4e25-9118-102049e60bf2-var-lib-kubelet\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf"
Apr 23 17:55:00.998133 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995667 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-58bsm\" (UniqueName: \"kubernetes.io/projected/31ce5a0e-448c-4e25-9118-102049e60bf2-kube-api-access-58bsm\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf"
Apr 23 17:55:00.998133 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995695 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/ae6fbd87-39e0-4719-affc-7aa3893a0f78-device-dir\") pod \"aws-ebs-csi-driver-node-nxfcd\" (UID: \"ae6fbd87-39e0-4719-affc-7aa3893a0f78\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-nxfcd"
Apr 23 17:55:00.998133 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995721 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e07a9b3c-f646-4dfc-bbb1-523478399c03-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gw5sh\" (UID: \"e07a9b3c-f646-4dfc-bbb1-523478399c03\") " pod="openshift-multus/multus-additional-cni-plugins-gw5sh"
Apr 23 17:55:00.998133 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995744 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-hostroot\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq"
Apr 23 17:55:00.998133 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995769 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-host-run-multus-certs\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq"
Apr 23 17:55:00.998133 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995805 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-node-log\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:00.998133 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995830 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-cnibin\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq"
Apr 23 17:55:00.998133 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995858 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-host-run-netns\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq"
Apr 23 17:55:00.998133 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995885 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/31ce5a0e-448c-4e25-9118-102049e60bf2-etc-sysconfig\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf"
Apr 23 17:55:00.998133 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995911 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/31ce5a0e-448c-4e25-9118-102049e60bf2-run\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf"
Apr 23 17:55:00.998133 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995935 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p2cwt\" (UniqueName: \"kubernetes.io/projected/133d891d-d4a9-44a1-ac6f-7a963f5616fe-kube-api-access-p2cwt\") pod \"network-check-target-7n2tw\" (UID: \"133d891d-d4a9-44a1-ac6f-7a963f5616fe\") " pod="openshift-network-diagnostics/network-check-target-7n2tw"
Apr 23 17:55:00.998133 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995974 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e07a9b3c-f646-4dfc-bbb1-523478399c03-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gw5sh\" (UID: \"e07a9b3c-f646-4dfc-bbb1-523478399c03\") " pod="openshift-multus/multus-additional-cni-plugins-gw5sh"
Apr 23 17:55:00.998912 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995983 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g99ht\" (UniqueName: \"kubernetes.io/projected/e07a9b3c-f646-4dfc-bbb1-523478399c03-kube-api-access-g99ht\") pod \"multus-additional-cni-plugins-gw5sh\" (UID: \"e07a9b3c-f646-4dfc-bbb1-523478399c03\") " pod="openshift-multus/multus-additional-cni-plugins-gw5sh"
Apr 23 17:55:00.998912 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995769 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-host-var-lib-cni-bin\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq"
Apr 23 17:55:00.998912 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996012 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/31ce5a0e-448c-4e25-9118-102049e60bf2-etc-modprobe-d\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf"
Apr 23 17:55:00.998912 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996039 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/31ce5a0e-448c-4e25-9118-102049e60bf2-sys\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf"
Apr 23 17:55:00.998912 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996042 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-ovnkube-script-lib\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:00.998912 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995463 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:00.998912 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996061 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/31ce5a0e-448c-4e25-9118-102049e60bf2-etc-sysconfig\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf"
Apr 23 17:55:00.998912 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996078 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4khpt\" (UniqueName: \"kubernetes.io/projected/0174e24b-fc3a-4f0a-8388-9b31e5a92647-kube-api-access-4khpt\") pod \"iptables-alerter-nhffk\" (UID: \"0174e24b-fc3a-4f0a-8388-9b31e5a92647\") " pod="openshift-network-operator/iptables-alerter-nhffk"
Apr 23 17:55:00.998912 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996095 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/31ce5a0e-448c-4e25-9118-102049e60bf2-etc-sysctl-d\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf"
Apr 23 17:55:00.998912 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996105 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/31ce5a0e-448c-4e25-9118-102049e60bf2-run\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf"
Apr 23 17:55:00.998912 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996108 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-host-cni-bin\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:00.998912 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996138 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/31ce5a0e-448c-4e25-9118-102049e60bf2-var-lib-kubelet\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf"
Apr 23 17:55:00.998912 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996139 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jcxbg\" (UniqueName: \"kubernetes.io/projected/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-kube-api-access-jcxbg\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:00.998912 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996174 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-system-cni-dir\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq"
Apr 23 17:55:00.998912 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996209 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4fe60531-7273-4ea9-b33c-0e4c909f6075-tmp-dir\") pod \"node-resolver-gj4fr\" (UID: \"4fe60531-7273-4ea9-b33c-0e4c909f6075\") " pod="openshift-dns/node-resolver-gj4fr"
Apr 23 17:55:00.998912 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996219 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/31ce5a0e-448c-4e25-9118-102049e60bf2-etc-sysctl-conf\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf"
Apr 23 17:55:00.998912 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996241 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/31ce5a0e-448c-4e25-9118-102049e60bf2-etc-tuned\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf"
Apr 23 17:55:00.999477 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996263 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2bv8l\" (UniqueName: \"kubernetes.io/projected/ae6fbd87-39e0-4719-affc-7aa3893a0f78-kube-api-access-2bv8l\") pod \"aws-ebs-csi-driver-node-nxfcd\" (UID: \"ae6fbd87-39e0-4719-affc-7aa3893a0f78\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-nxfcd"
Apr 23 17:55:00.999477 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996269 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-node-log\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:00.999477 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996282 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-run-ovn\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:00.999477 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996304 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-log-socket\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:00.999477 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996307 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-host-run-netns\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq"
Apr 23 17:55:00.999477 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.995802 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-host-run-netns\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:00.999477 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996326 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/31ce5a0e-448c-4e25-9118-102049e60bf2-etc-systemd\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf"
Apr 23 17:55:00.999477 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996348 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab8d91e1-def0-4ec9-93d5-476175cef3cd-metrics-certs\") pod \"network-metrics-daemon-4fq2j\" (UID: \"ab8d91e1-def0-4ec9-93d5-476175cef3cd\") " pod="openshift-multus/network-metrics-daemon-4fq2j"
Apr 23 17:55:00.999477 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996350 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-host-kubelet\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:00.999477 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996376 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ae6fbd87-39e0-4719-affc-7aa3893a0f78-socket-dir\") pod \"aws-ebs-csi-driver-node-nxfcd\" (UID: \"ae6fbd87-39e0-4719-affc-7aa3893a0f78\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-nxfcd"
Apr 23 17:55:00.999477 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996382 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/31ce5a0e-448c-4e25-9118-102049e60bf2-etc-modprobe-d\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf"
Apr 23 17:55:00.999477 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996011 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/e07a9b3c-f646-4dfc-bbb1-523478399c03-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-gw5sh\" (UID: \"e07a9b3c-f646-4dfc-bbb1-523478399c03\") " pod="openshift-multus/multus-additional-cni-plugins-gw5sh"
Apr 23 17:55:00.999477 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996400 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-ovnkube-config\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:00.999477 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:00.996430 2565 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:55:00.999477 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996482 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/31ce5a0e-448c-4e25-9118-102049e60bf2-sys\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf"
Apr 23 17:55:00.999477 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:00.996497 2565 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab8d91e1-def0-4ec9-93d5-476175cef3cd-metrics-certs podName:ab8d91e1-def0-4ec9-93d5-476175cef3cd nodeName:}" failed. No retries permitted until 2026-04-23 17:55:01.496479896 +0000 UTC m=+165.284219162 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab8d91e1-def0-4ec9-93d5-476175cef3cd-metrics-certs") pod "network-metrics-daemon-4fq2j" (UID: "ab8d91e1-def0-4ec9-93d5-476175cef3cd") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:55:00.999477 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996511 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ae6fbd87-39e0-4719-affc-7aa3893a0f78-socket-dir\") pod \"aws-ebs-csi-driver-node-nxfcd\" (UID: \"ae6fbd87-39e0-4719-affc-7aa3893a0f78\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-nxfcd"
Apr 23 17:55:00.999996 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996647 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-run-ovn\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:00.999996 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996687 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-log-socket\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:00.999996 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996304 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-cnibin\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq"
Apr 23 17:55:00.999996 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996750 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/31ce5a0e-448c-4e25-9118-102049e60bf2-etc-systemd\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf"
Apr 23 17:55:00.999996 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996794 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-hostroot\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq"
Apr 23 17:55:00.999996 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996819 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e07a9b3c-f646-4dfc-bbb1-523478399c03-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gw5sh\" (UID: \"e07a9b3c-f646-4dfc-bbb1-523478399c03\") " pod="openshift-multus/multus-additional-cni-plugins-gw5sh"
Apr 23 17:55:00.999996 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996875 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/ae6fbd87-39e0-4719-affc-7aa3893a0f78-device-dir\") pod \"aws-ebs-csi-driver-node-nxfcd\" (UID: \"ae6fbd87-39e0-4719-affc-7aa3893a0f78\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-nxfcd"
Apr 23 17:55:00.999996 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996923 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-system-cni-dir\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq"
Apr 23 17:55:00.999996 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996977 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/31e0ba61-35be-422e-9b2b-b9c49a736615-host-run-multus-certs\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq"
Apr 23 17:55:00.999996 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.996994 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-ovnkube-config\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:00.999996 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.997033 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/31ce5a0e-448c-4e25-9118-102049e60bf2-etc-sysctl-conf\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf"
Apr 23 17:55:00.999996 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.997050 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-host-cni-bin\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:00.999996 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.999060 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/31ce5a0e-448c-4e25-9118-102049e60bf2-tmp\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf"
Apr 23 17:55:00.999996 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.999116 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/31ce5a0e-448c-4e25-9118-102049e60bf2-etc-tuned\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf"
Apr 23 17:55:00.999996 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.999280 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-ovn-node-metrics-cert\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:00.999996 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:00.999315 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/2a5d03ae-4e40-4fa1-a3a7-8974e5fbabbb-agent-certs\") pod \"konnectivity-agent-brjd8\" (UID: \"2a5d03ae-4e40-4fa1-a3a7-8974e5fbabbb\") " pod="kube-system/konnectivity-agent-brjd8"
Apr 23 17:55:01.005644 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:01.005616 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqrdf\" (UniqueName: \"kubernetes.io/projected/4fe60531-7273-4ea9-b33c-0e4c909f6075-kube-api-access-dqrdf\") pod \"node-resolver-gj4fr\" (UID: \"4fe60531-7273-4ea9-b33c-0e4c909f6075\") " pod="openshift-dns/node-resolver-gj4fr"
Apr 23 17:55:01.005877 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:01.005860 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdzrx\" (UniqueName: \"kubernetes.io/projected/31e0ba61-35be-422e-9b2b-b9c49a736615-kube-api-access-jdzrx\") pod \"multus-rtfpq\" (UID: \"31e0ba61-35be-422e-9b2b-b9c49a736615\") " pod="openshift-multus/multus-rtfpq"
Apr 23 17:55:01.009174 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:01.009150 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxvlf\" (UniqueName: \"kubernetes.io/projected/ff40def5-69f9-4dad-aa0e-540f9bc631f0-kube-api-access-dxvlf\") pod \"node-ca-q7b2g\" (UID: \"ff40def5-69f9-4dad-aa0e-540f9bc631f0\") " pod="openshift-image-registry/node-ca-q7b2g"
Apr 23 17:55:01.009943 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:01.009912 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppjf9\" (UniqueName: \"kubernetes.io/projected/ab8d91e1-def0-4ec9-93d5-476175cef3cd-kube-api-access-ppjf9\") pod \"network-metrics-daemon-4fq2j\" (UID: \"ab8d91e1-def0-4ec9-93d5-476175cef3cd\") " pod="openshift-multus/network-metrics-daemon-4fq2j"
Apr 23 17:55:01.017677 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:01.017650 2565 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 23 17:55:01.017677 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:01.017670 2565 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 23 17:55:01.017677 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:01.017680 2565 projected.go:194] Error preparing data for projected volume kube-api-access-p2cwt for pod openshift-network-diagnostics/network-check-target-7n2tw: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:55:01.017918 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:01.017730 2565 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/133d891d-d4a9-44a1-ac6f-7a963f5616fe-kube-api-access-p2cwt podName:133d891d-d4a9-44a1-ac6f-7a963f5616fe nodeName:}" failed. No retries permitted until 2026-04-23 17:55:01.517713931 +0000 UTC m=+165.305453172 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-p2cwt" (UniqueName: "kubernetes.io/projected/133d891d-d4a9-44a1-ac6f-7a963f5616fe-kube-api-access-p2cwt") pod "network-check-target-7n2tw" (UID: "133d891d-d4a9-44a1-ac6f-7a963f5616fe") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:55:01.019469 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:01.019447 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bv8l\" (UniqueName: \"kubernetes.io/projected/ae6fbd87-39e0-4719-affc-7aa3893a0f78-kube-api-access-2bv8l\") pod \"aws-ebs-csi-driver-node-nxfcd\" (UID: \"ae6fbd87-39e0-4719-affc-7aa3893a0f78\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-nxfcd"
Apr 23 17:55:01.019708 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:01.019687 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-58bsm\" (UniqueName: \"kubernetes.io/projected/31ce5a0e-448c-4e25-9118-102049e60bf2-kube-api-access-58bsm\") pod \"tuned-j46jf\" (UID: \"31ce5a0e-448c-4e25-9118-102049e60bf2\") " pod="openshift-cluster-node-tuning-operator/tuned-j46jf"
Apr 23 17:55:01.019834 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:01.019802 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcxbg\" (UniqueName: \"kubernetes.io/projected/f6bc6d36-30b4-4f0e-8a4e-46b934b798a5-kube-api-access-jcxbg\") pod \"ovnkube-node-dghxj\" (UID: \"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:01.022863 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:01.022843 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4khpt\" (UniqueName: \"kubernetes.io/projected/0174e24b-fc3a-4f0a-8388-9b31e5a92647-kube-api-access-4khpt\") pod \"iptables-alerter-nhffk\" (UID: \"0174e24b-fc3a-4f0a-8388-9b31e5a92647\") " pod="openshift-network-operator/iptables-alerter-nhffk"
Apr 23 17:55:01.028028 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:01.028008 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g99ht\" (UniqueName: \"kubernetes.io/projected/e07a9b3c-f646-4dfc-bbb1-523478399c03-kube-api-access-g99ht\") pod \"multus-additional-cni-plugins-gw5sh\" (UID: \"e07a9b3c-f646-4dfc-bbb1-523478399c03\") " pod="openshift-multus/multus-additional-cni-plugins-gw5sh"
Apr 23 17:55:01.105193 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:01.105099 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-proxy-ip-10-0-131-177.ec2.internal" podStartSLOduration=1.105086807 podStartE2EDuration="1.105086807s" podCreationTimestamp="2026-04-23 17:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:55:01.104877333 +0000 UTC m=+164.892616593" watchObservedRunningTime="2026-04-23 17:55:01.105086807 +0000 UTC m=+164.892826044"
Apr 23 17:55:01.154383 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:01.154356 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-q7b2g"
Apr 23 17:55:01.161605 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:01.161577 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-rtfpq"
Apr 23 17:55:01.161974 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:55:01.161923 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff40def5_69f9_4dad_aa0e_540f9bc631f0.slice/crio-5a35f04c7ae8d3d4a8eea8d1c003645226fa7d24ba174cf278463bc997c549d4 WatchSource:0}: Error finding container 5a35f04c7ae8d3d4a8eea8d1c003645226fa7d24ba174cf278463bc997c549d4: Status 404 returned error can't find the container with id 5a35f04c7ae8d3d4a8eea8d1c003645226fa7d24ba174cf278463bc997c549d4
Apr 23 17:55:01.167263 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:01.167121 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-gj4fr"
Apr 23 17:55:01.167868 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:55:01.167730 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31e0ba61_35be_422e_9b2b_b9c49a736615.slice/crio-18cf610f1f0176ba56a6c694e96d621938ed6df1aa1501d502fc40a8fdd03eca WatchSource:0}: Error finding container 18cf610f1f0176ba56a6c694e96d621938ed6df1aa1501d502fc40a8fdd03eca: Status 404 returned error can't find the container with id 18cf610f1f0176ba56a6c694e96d621938ed6df1aa1501d502fc40a8fdd03eca
Apr 23 17:55:01.172445 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:01.172423 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gw5sh"
Apr 23 17:55:01.172879 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:55:01.172814 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fe60531_7273_4ea9_b33c_0e4c909f6075.slice/crio-e1632f61ff375006f6dfd7bb7832ebcc5ab9852ebaab6e07ce0e5d3afe761268 WatchSource:0}: Error finding container e1632f61ff375006f6dfd7bb7832ebcc5ab9852ebaab6e07ce0e5d3afe761268: Status 404 returned error can't find the container with id e1632f61ff375006f6dfd7bb7832ebcc5ab9852ebaab6e07ce0e5d3afe761268
Apr 23 17:55:01.177438 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:01.177421 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-brjd8"
Apr 23 17:55:01.178583 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:55:01.178560 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode07a9b3c_f646_4dfc_bbb1_523478399c03.slice/crio-7d5bef2fdc6e23e61bca04b2499c22ac3eb0f91de90b70ec718709adf0426869 WatchSource:0}: Error finding container 7d5bef2fdc6e23e61bca04b2499c22ac3eb0f91de90b70ec718709adf0426869: Status 404 returned error can't find the container with id 7d5bef2fdc6e23e61bca04b2499c22ac3eb0f91de90b70ec718709adf0426869
Apr 23 17:55:01.183682 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:01.183662 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-nxfcd"
Apr 23 17:55:01.184141 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:55:01.184061 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a5d03ae_4e40_4fa1_a3a7_8974e5fbabbb.slice/crio-362b65d5a4c897692cef5a4b9ab7bb52a0ef9e456a07db921acfd086936a9b28 WatchSource:0}: Error finding container 362b65d5a4c897692cef5a4b9ab7bb52a0ef9e456a07db921acfd086936a9b28: Status 404 returned error can't find the container with id 362b65d5a4c897692cef5a4b9ab7bb52a0ef9e456a07db921acfd086936a9b28
Apr 23 17:55:01.187817 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:01.187801 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-j46jf"
Apr 23 17:55:01.189453 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:55:01.189431 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae6fbd87_39e0_4719_affc_7aa3893a0f78.slice/crio-8562e2ec6926acfcd09d5c43ec2f5b1184be8e63d55381af470bd061aa96fd77 WatchSource:0}: Error finding container 8562e2ec6926acfcd09d5c43ec2f5b1184be8e63d55381af470bd061aa96fd77: Status 404 returned error can't find the container with id 8562e2ec6926acfcd09d5c43ec2f5b1184be8e63d55381af470bd061aa96fd77
Apr 23 17:55:01.193733 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:01.193716 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-nhffk"
Apr 23 17:55:01.194320 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:55:01.194302 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31ce5a0e_448c_4e25_9118_102049e60bf2.slice/crio-ea9b0005e74c29ced06459bc2c2b787d24ebf09c4f7bb6ce48af7a1d0333300c WatchSource:0}: Error finding container ea9b0005e74c29ced06459bc2c2b787d24ebf09c4f7bb6ce48af7a1d0333300c: Status 404 returned error can't find the container with id ea9b0005e74c29ced06459bc2c2b787d24ebf09c4f7bb6ce48af7a1d0333300c
Apr 23 17:55:01.200637 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:01.200619 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:01.201165 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:55:01.201144 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0174e24b_fc3a_4f0a_8388_9b31e5a92647.slice/crio-75c12eaa00038d0fac9facb4ec1dbfc5027eb35cc7949596d26c4ce40b5f1472 WatchSource:0}: Error finding container 75c12eaa00038d0fac9facb4ec1dbfc5027eb35cc7949596d26c4ce40b5f1472: Status 404 returned error can't find the container with id 75c12eaa00038d0fac9facb4ec1dbfc5027eb35cc7949596d26c4ce40b5f1472
Apr 23 17:55:01.206854 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:55:01.206833 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6bc6d36_30b4_4f0e_8a4e_46b934b798a5.slice/crio-50463c19c21a6d7933173feea7c23eda5ed6f236cbd748068b7b49319019f7be WatchSource:0}: Error finding container 50463c19c21a6d7933173feea7c23eda5ed6f236cbd748068b7b49319019f7be: Status 404 returned error can't find the container with id 50463c19c21a6d7933173feea7c23eda5ed6f236cbd748068b7b49319019f7be
Apr 23 17:55:01.498837 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:01.498798 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab8d91e1-def0-4ec9-93d5-476175cef3cd-metrics-certs\") pod \"network-metrics-daemon-4fq2j\" (UID: \"ab8d91e1-def0-4ec9-93d5-476175cef3cd\") " pod="openshift-multus/network-metrics-daemon-4fq2j"
Apr 23 17:55:01.499001 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:01.498947 2565 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:55:01.499047 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:01.499037 2565 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab8d91e1-def0-4ec9-93d5-476175cef3cd-metrics-certs podName:ab8d91e1-def0-4ec9-93d5-476175cef3cd nodeName:}" failed. No retries permitted until 2026-04-23 17:55:02.499018508 +0000 UTC m=+166.286757750 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab8d91e1-def0-4ec9-93d5-476175cef3cd-metrics-certs") pod "network-metrics-daemon-4fq2j" (UID: "ab8d91e1-def0-4ec9-93d5-476175cef3cd") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:55:01.599420 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:01.599376 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p2cwt\" (UniqueName: \"kubernetes.io/projected/133d891d-d4a9-44a1-ac6f-7a963f5616fe-kube-api-access-p2cwt\") pod \"network-check-target-7n2tw\" (UID: \"133d891d-d4a9-44a1-ac6f-7a963f5616fe\") " pod="openshift-network-diagnostics/network-check-target-7n2tw"
Apr 23 17:55:01.599603 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:01.599556 2565 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 23 17:55:01.599603 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:01.599585 2565 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 23 17:55:01.599603 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:01.599601 2565 projected.go:194] Error preparing data for projected volume kube-api-access-p2cwt for pod openshift-network-diagnostics/network-check-target-7n2tw: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:55:01.599771 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:01.599666 2565 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/133d891d-d4a9-44a1-ac6f-7a963f5616fe-kube-api-access-p2cwt podName:133d891d-d4a9-44a1-ac6f-7a963f5616fe nodeName:}" failed. No retries permitted until 2026-04-23 17:55:02.599647437 +0000 UTC m=+166.387386688 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-p2cwt" (UniqueName: "kubernetes.io/projected/133d891d-d4a9-44a1-ac6f-7a963f5616fe-kube-api-access-p2cwt") pod "network-check-target-7n2tw" (UID: "133d891d-d4a9-44a1-ac6f-7a963f5616fe") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:55:01.873606 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:01.873506 2565 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Apr 23 17:55:02.153299 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:02.153218 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-nhffk" event={"ID":"0174e24b-fc3a-4f0a-8388-9b31e5a92647","Type":"ContainerStarted","Data":"75c12eaa00038d0fac9facb4ec1dbfc5027eb35cc7949596d26c4ce40b5f1472"}
Apr 23 17:55:02.162779 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:02.162742 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-brjd8" event={"ID":"2a5d03ae-4e40-4fa1-a3a7-8974e5fbabbb","Type":"ContainerStarted","Data":"362b65d5a4c897692cef5a4b9ab7bb52a0ef9e456a07db921acfd086936a9b28"}
Apr 23 17:55:02.168821 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:02.168792 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" event={"ID":"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5","Type":"ContainerStarted","Data":"50463c19c21a6d7933173feea7c23eda5ed6f236cbd748068b7b49319019f7be"}
Apr 23 17:55:02.171350 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:02.171290 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-j46jf" event={"ID":"31ce5a0e-448c-4e25-9118-102049e60bf2","Type":"ContainerStarted","Data":"ea9b0005e74c29ced06459bc2c2b787d24ebf09c4f7bb6ce48af7a1d0333300c"}
Apr 23 17:55:02.173549 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:02.173495 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-nxfcd" event={"ID":"ae6fbd87-39e0-4719-affc-7aa3893a0f78","Type":"ContainerStarted","Data":"8562e2ec6926acfcd09d5c43ec2f5b1184be8e63d55381af470bd061aa96fd77"}
Apr 23 17:55:02.176458 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:02.176397 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gw5sh" event={"ID":"e07a9b3c-f646-4dfc-bbb1-523478399c03","Type":"ContainerStarted","Data":"7d5bef2fdc6e23e61bca04b2499c22ac3eb0f91de90b70ec718709adf0426869"}
Apr 23 17:55:02.183429 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:02.183383 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-gj4fr" event={"ID":"4fe60531-7273-4ea9-b33c-0e4c909f6075","Type":"ContainerStarted","Data":"e1632f61ff375006f6dfd7bb7832ebcc5ab9852ebaab6e07ce0e5d3afe761268"}
Apr 23 17:55:02.189438 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:02.189396 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rtfpq" event={"ID":"31e0ba61-35be-422e-9b2b-b9c49a736615","Type":"ContainerStarted","Data":"18cf610f1f0176ba56a6c694e96d621938ed6df1aa1501d502fc40a8fdd03eca"}
Apr 23 17:55:02.191446 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:02.191393 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-q7b2g" event={"ID":"ff40def5-69f9-4dad-aa0e-540f9bc631f0","Type":"ContainerStarted","Data":"5a35f04c7ae8d3d4a8eea8d1c003645226fa7d24ba174cf278463bc997c549d4"}
Apr 23 17:55:02.508354 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:02.508312 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab8d91e1-def0-4ec9-93d5-476175cef3cd-metrics-certs\") pod \"network-metrics-daemon-4fq2j\" (UID: \"ab8d91e1-def0-4ec9-93d5-476175cef3cd\") " pod="openshift-multus/network-metrics-daemon-4fq2j"
Apr 23 17:55:02.508540 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:02.508488 2565 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:55:02.508609 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:02.508554 2565 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab8d91e1-def0-4ec9-93d5-476175cef3cd-metrics-certs podName:ab8d91e1-def0-4ec9-93d5-476175cef3cd nodeName:}" failed. No retries permitted until 2026-04-23 17:55:04.508534026 +0000 UTC m=+168.296273285 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab8d91e1-def0-4ec9-93d5-476175cef3cd-metrics-certs") pod "network-metrics-daemon-4fq2j" (UID: "ab8d91e1-def0-4ec9-93d5-476175cef3cd") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:55:02.610035 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:02.609332 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p2cwt\" (UniqueName: \"kubernetes.io/projected/133d891d-d4a9-44a1-ac6f-7a963f5616fe-kube-api-access-p2cwt\") pod \"network-check-target-7n2tw\" (UID: \"133d891d-d4a9-44a1-ac6f-7a963f5616fe\") " pod="openshift-network-diagnostics/network-check-target-7n2tw"
Apr 23 17:55:02.610035 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:02.609535 2565 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 23 17:55:02.610035 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:02.609554 2565 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 23 17:55:02.610035 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:02.609568 2565 projected.go:194] Error preparing data for projected volume kube-api-access-p2cwt for pod openshift-network-diagnostics/network-check-target-7n2tw: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:55:02.610035 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:02.609636 2565 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/133d891d-d4a9-44a1-ac6f-7a963f5616fe-kube-api-access-p2cwt podName:133d891d-d4a9-44a1-ac6f-7a963f5616fe nodeName:}" failed. No retries permitted until 2026-04-23 17:55:04.609618443 +0000 UTC m=+168.397357688 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-p2cwt" (UniqueName: "kubernetes.io/projected/133d891d-d4a9-44a1-ac6f-7a963f5616fe-kube-api-access-p2cwt") pod "network-check-target-7n2tw" (UID: "133d891d-d4a9-44a1-ac6f-7a963f5616fe") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:55:02.911613 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:02.911215 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7n2tw"
Apr 23 17:55:02.911613 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:02.911337 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7n2tw" podUID="133d891d-d4a9-44a1-ac6f-7a963f5616fe"
Apr 23 17:55:02.912319 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:02.912018 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4fq2j"
Apr 23 17:55:02.912319 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:02.912181 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4fq2j" podUID="ab8d91e1-def0-4ec9-93d5-476175cef3cd"
Apr 23 17:55:04.526258 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:04.526181 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab8d91e1-def0-4ec9-93d5-476175cef3cd-metrics-certs\") pod \"network-metrics-daemon-4fq2j\" (UID: \"ab8d91e1-def0-4ec9-93d5-476175cef3cd\") " pod="openshift-multus/network-metrics-daemon-4fq2j"
Apr 23 17:55:04.526721 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:04.526384 2565 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:55:04.526721 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:04.526452 2565 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab8d91e1-def0-4ec9-93d5-476175cef3cd-metrics-certs podName:ab8d91e1-def0-4ec9-93d5-476175cef3cd nodeName:}" failed. No retries permitted until 2026-04-23 17:55:08.526433062 +0000 UTC m=+172.314172313 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab8d91e1-def0-4ec9-93d5-476175cef3cd-metrics-certs") pod "network-metrics-daemon-4fq2j" (UID: "ab8d91e1-def0-4ec9-93d5-476175cef3cd") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:55:04.626833 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:04.626793 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p2cwt\" (UniqueName: \"kubernetes.io/projected/133d891d-d4a9-44a1-ac6f-7a963f5616fe-kube-api-access-p2cwt\") pod \"network-check-target-7n2tw\" (UID: \"133d891d-d4a9-44a1-ac6f-7a963f5616fe\") " pod="openshift-network-diagnostics/network-check-target-7n2tw"
Apr 23 17:55:04.627074 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:04.626972 2565 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 23 17:55:04.627074 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:04.627001 2565 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 23 17:55:04.627074 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:04.627016 2565 projected.go:194] Error preparing data for projected volume kube-api-access-p2cwt for pod openshift-network-diagnostics/network-check-target-7n2tw: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:55:04.627243 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:04.627084 2565 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/133d891d-d4a9-44a1-ac6f-7a963f5616fe-kube-api-access-p2cwt podName:133d891d-d4a9-44a1-ac6f-7a963f5616fe nodeName:}" failed. No retries permitted until 2026-04-23 17:55:08.62706291 +0000 UTC m=+172.414802153 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-p2cwt" (UniqueName: "kubernetes.io/projected/133d891d-d4a9-44a1-ac6f-7a963f5616fe-kube-api-access-p2cwt") pod "network-check-target-7n2tw" (UID: "133d891d-d4a9-44a1-ac6f-7a963f5616fe") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:55:04.908723 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:04.908099 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7n2tw"
Apr 23 17:55:04.908723 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:04.908248 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7n2tw" podUID="133d891d-d4a9-44a1-ac6f-7a963f5616fe"
Apr 23 17:55:04.909207 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:04.909045 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4fq2j"
Apr 23 17:55:04.909207 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:04.909167 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4fq2j" podUID="ab8d91e1-def0-4ec9-93d5-476175cef3cd"
Apr 23 17:55:06.874832 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:06.874790 2565 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Apr 23 17:55:06.908729 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:06.908034 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4fq2j"
Apr 23 17:55:06.908729 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:06.908166 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4fq2j" podUID="ab8d91e1-def0-4ec9-93d5-476175cef3cd"
Apr 23 17:55:06.908729 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:06.908586 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7n2tw"
Apr 23 17:55:06.908729 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:06.908689 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7n2tw" podUID="133d891d-d4a9-44a1-ac6f-7a963f5616fe"
Apr 23 17:55:08.559499 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:08.559390 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab8d91e1-def0-4ec9-93d5-476175cef3cd-metrics-certs\") pod \"network-metrics-daemon-4fq2j\" (UID: \"ab8d91e1-def0-4ec9-93d5-476175cef3cd\") " pod="openshift-multus/network-metrics-daemon-4fq2j"
Apr 23 17:55:08.559987 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:08.559572 2565 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:55:08.559987 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:08.559652 2565 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab8d91e1-def0-4ec9-93d5-476175cef3cd-metrics-certs podName:ab8d91e1-def0-4ec9-93d5-476175cef3cd nodeName:}" failed. No retries permitted until 2026-04-23 17:55:16.559631705 +0000 UTC m=+180.347370958 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab8d91e1-def0-4ec9-93d5-476175cef3cd-metrics-certs") pod "network-metrics-daemon-4fq2j" (UID: "ab8d91e1-def0-4ec9-93d5-476175cef3cd") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:55:08.660139 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:08.660038 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p2cwt\" (UniqueName: \"kubernetes.io/projected/133d891d-d4a9-44a1-ac6f-7a963f5616fe-kube-api-access-p2cwt\") pod \"network-check-target-7n2tw\" (UID: \"133d891d-d4a9-44a1-ac6f-7a963f5616fe\") " pod="openshift-network-diagnostics/network-check-target-7n2tw"
Apr 23 17:55:08.660314 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:08.660282 2565 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 23 17:55:08.660314 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:08.660311 2565 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 23 17:55:08.660431 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:08.660325 2565 projected.go:194] Error preparing data for projected volume kube-api-access-p2cwt for pod openshift-network-diagnostics/network-check-target-7n2tw: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:55:08.660431 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:08.660384 2565 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/133d891d-d4a9-44a1-ac6f-7a963f5616fe-kube-api-access-p2cwt podName:133d891d-d4a9-44a1-ac6f-7a963f5616fe nodeName:}" failed. No retries permitted until 2026-04-23 17:55:16.660365071 +0000 UTC m=+180.448104322 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-p2cwt" (UniqueName: "kubernetes.io/projected/133d891d-d4a9-44a1-ac6f-7a963f5616fe-kube-api-access-p2cwt") pod "network-check-target-7n2tw" (UID: "133d891d-d4a9-44a1-ac6f-7a963f5616fe") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:55:08.907249 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:08.907154 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7n2tw"
Apr 23 17:55:08.907249 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:08.907154 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4fq2j"
Apr 23 17:55:08.907503 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:08.907301 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7n2tw" podUID="133d891d-d4a9-44a1-ac6f-7a963f5616fe"
Apr 23 17:55:08.907503 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:08.907364 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4fq2j" podUID="ab8d91e1-def0-4ec9-93d5-476175cef3cd"
Apr 23 17:55:10.907280 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:10.907245 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7n2tw"
Apr 23 17:55:10.907736 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:10.907245 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4fq2j"
Apr 23 17:55:10.907736 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:10.907377 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7n2tw" podUID="133d891d-d4a9-44a1-ac6f-7a963f5616fe"
Apr 23 17:55:10.907736 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:10.907480 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4fq2j" podUID="ab8d91e1-def0-4ec9-93d5-476175cef3cd"
Apr 23 17:55:11.876101 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:11.876065 2565 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Apr 23 17:55:12.907296 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:12.907261 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4fq2j"
Apr 23 17:55:12.907738 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:12.907426 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4fq2j" podUID="ab8d91e1-def0-4ec9-93d5-476175cef3cd"
Apr 23 17:55:12.907738 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:12.907474 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7n2tw"
Apr 23 17:55:12.907738 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:12.907600 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7n2tw" podUID="133d891d-d4a9-44a1-ac6f-7a963f5616fe"
Apr 23 17:55:14.907394 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:14.907359 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7n2tw"
Apr 23 17:55:14.907839 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:14.907362 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4fq2j"
Apr 23 17:55:14.907839 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:14.907494 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7n2tw" podUID="133d891d-d4a9-44a1-ac6f-7a963f5616fe"
Apr 23 17:55:14.907839 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:14.907561 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4fq2j" podUID="ab8d91e1-def0-4ec9-93d5-476175cef3cd"
Apr 23 17:55:14.908034 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:14.907901 2565 scope.go:117] "RemoveContainer" containerID="51fa454c818a950384c7dc0033163202b3f75ec15ab20602270aaab54e7ba3f3"
Apr 23 17:55:14.908159 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:14.908129 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_openshift-machine-config-operator(7e3c596a27faede1f97b6bb0972592f6)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal" podUID="7e3c596a27faede1f97b6bb0972592f6"
Apr 23 17:55:16.614759 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:16.614705 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab8d91e1-def0-4ec9-93d5-476175cef3cd-metrics-certs\") pod \"network-metrics-daemon-4fq2j\" (UID: \"ab8d91e1-def0-4ec9-93d5-476175cef3cd\") " pod="openshift-multus/network-metrics-daemon-4fq2j"
Apr 23 17:55:16.615171 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:16.614830 2565 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:55:16.615171 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:16.614896 2565 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab8d91e1-def0-4ec9-93d5-476175cef3cd-metrics-certs podName:ab8d91e1-def0-4ec9-93d5-476175cef3cd nodeName:}" failed. No retries permitted until 2026-04-23 17:55:32.61487861 +0000 UTC m=+196.402617851 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab8d91e1-def0-4ec9-93d5-476175cef3cd-metrics-certs") pod "network-metrics-daemon-4fq2j" (UID: "ab8d91e1-def0-4ec9-93d5-476175cef3cd") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:55:16.715840 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:16.715796 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p2cwt\" (UniqueName: \"kubernetes.io/projected/133d891d-d4a9-44a1-ac6f-7a963f5616fe-kube-api-access-p2cwt\") pod \"network-check-target-7n2tw\" (UID: \"133d891d-d4a9-44a1-ac6f-7a963f5616fe\") " pod="openshift-network-diagnostics/network-check-target-7n2tw"
Apr 23 17:55:16.716039 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:16.716009 2565 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 23 17:55:16.716039 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:16.716029 2565 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 23 17:55:16.716039 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:16.716040 2565 projected.go:194] Error preparing data for projected volume kube-api-access-p2cwt for pod openshift-network-diagnostics/network-check-target-7n2tw: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:55:16.716186 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:16.716087 2565 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/133d891d-d4a9-44a1-ac6f-7a963f5616fe-kube-api-access-p2cwt podName:133d891d-d4a9-44a1-ac6f-7a963f5616fe nodeName:}" failed. No retries permitted until 2026-04-23 17:55:32.716074171 +0000 UTC m=+196.503813408 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-p2cwt" (UniqueName: "kubernetes.io/projected/133d891d-d4a9-44a1-ac6f-7a963f5616fe-kube-api-access-p2cwt") pod "network-check-target-7n2tw" (UID: "133d891d-d4a9-44a1-ac6f-7a963f5616fe") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:55:16.876548 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:16.876472 2565 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Apr 23 17:55:16.908348 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:16.908313 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4fq2j"
Apr 23 17:55:16.908509 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:16.908429 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7n2tw"
Apr 23 17:55:16.908569 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:16.908498 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4fq2j" podUID="ab8d91e1-def0-4ec9-93d5-476175cef3cd"
Apr 23 17:55:16.908569 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:16.908523 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7n2tw" podUID="133d891d-d4a9-44a1-ac6f-7a963f5616fe"
Apr 23 17:55:18.228967 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:18.228913 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" event={"ID":"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5","Type":"ContainerStarted","Data":"4d25f7c30bc0a627eebc0e323f01b339b12e4e6f452a65c67e636f7a975d0b18"}
Apr 23 17:55:18.229367 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:18.228971 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" event={"ID":"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5","Type":"ContainerStarted","Data":"024b5fc8231466c31ee196af5dfe7b1307811b6f08163a722263c88435940c51"}
Apr 23 17:55:18.230214 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:18.230187 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-j46jf" event={"ID":"31ce5a0e-448c-4e25-9118-102049e60bf2","Type":"ContainerStarted","Data":"c889875045f3d79e14f81eb5a427748e82f59c68a3415ec882890ae8a8d91c12"}
Apr 23 17:55:18.231474 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:18.231453 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-nxfcd" event={"ID":"ae6fbd87-39e0-4719-affc-7aa3893a0f78","Type":"ContainerStarted","Data":"47bf6b5831f9e5180a9a779799ac13d6c403d10af95d2f4014dcbdf9e98d6096"}
Apr 23 17:55:18.232729 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:18.232707 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gw5sh" event={"ID":"e07a9b3c-f646-4dfc-bbb1-523478399c03","Type":"ContainerStarted","Data":"a84b2d789f81a58f603ef83828794ae9337fe8fb0482857fe02176ce0010a407"}
Apr 23 17:55:18.233863 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:18.233825 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-gj4fr" event={"ID":"4fe60531-7273-4ea9-b33c-0e4c909f6075","Type":"ContainerStarted","Data":"d9ca6cfa92022a29e7ff8af67f4291986c2e63fb5f18b08743094d9a2b181b2a"}
Apr 23 17:55:18.235287 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:18.235230 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rtfpq" event={"ID":"31e0ba61-35be-422e-9b2b-b9c49a736615","Type":"ContainerStarted","Data":"bda68fd2f711d46c2f87f0749d49af75918971687d82ecf13479d4a8b68c80e3"}
Apr 23 17:55:18.236855 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:18.236833 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-q7b2g" event={"ID":"ff40def5-69f9-4dad-aa0e-540f9bc631f0","Type":"ContainerStarted","Data":"f4e38f29276146e757269acdcb9bdea92070cbf22d78740831a870df3d761453"}
Apr 23 17:55:18.238884 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:18.238860 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-brjd8" event={"ID":"2a5d03ae-4e40-4fa1-a3a7-8974e5fbabbb","Type":"ContainerStarted","Data":"ded18c4508f7ff8b58a75c428ccd7a3b584cee7b494912606d8bbd82b720e524"}
Apr 23 17:55:18.254714 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:18.254563 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-j46jf" podStartSLOduration=46.508565174 podStartE2EDuration="1m3.254549852s" podCreationTimestamp="2026-04-23 17:54:15 +0000 UTC" firstStartedPulling="2026-04-23 17:55:01.198163247 +0000 UTC m=+164.985902488" lastFinishedPulling="2026-04-23 17:55:17.944147908 +0000 UTC m=+181.731887166" observedRunningTime="2026-04-23 17:55:18.253867587 +0000 UTC m=+182.041606847" watchObservedRunningTime="2026-04-23 17:55:18.254549852 +0000 UTC m=+182.042289112"
Apr 23 17:55:18.302902 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:18.302826 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-q7b2g" podStartSLOduration=52.115602725 podStartE2EDuration="1m4.302812713s" podCreationTimestamp="2026-04-23 17:54:14 +0000 UTC" firstStartedPulling="2026-04-23 17:55:01.164276653 +0000 UTC m=+164.952015895" lastFinishedPulling="2026-04-23 17:55:13.351486631 +0000 UTC m=+177.139225883" observedRunningTime="2026-04-23 17:55:18.275266606 +0000 UTC m=+182.063005864" watchObservedRunningTime="2026-04-23 17:55:18.302812713 +0000 UTC m=+182.090551973"
Apr 23 17:55:18.303033 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:18.302976 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-rtfpq" podStartSLOduration=47.524463396 podStartE2EDuration="1m4.302950704s" podCreationTimestamp="2026-04-23 17:54:14 +0000 UTC" firstStartedPulling="2026-04-23 17:55:01.1700992 +0000 UTC m=+164.957838441" lastFinishedPulling="2026-04-23 17:55:17.948586506 +0000 UTC m=+181.736325749" observedRunningTime="2026-04-23 17:55:18.302376872 +0000 UTC m=+182.090116132" watchObservedRunningTime="2026-04-23 17:55:18.302950704 +0000 UTC m=+182.090690016"
Apr 23 17:55:18.350190 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:18.350138 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-gj4fr" podStartSLOduration=46.641433741 podStartE2EDuration="1m3.350117674s" podCreationTimestamp="2026-04-23 17:54:15 +0000 UTC" firstStartedPulling="2026-04-23 17:55:01.17554657 +0000 UTC m=+164.963285809" lastFinishedPulling="2026-04-23 17:55:17.88423049 +0000 UTC m=+181.671969742" observedRunningTime="2026-04-23 17:55:18.35007362 +0000 UTC m=+182.137812891" watchObservedRunningTime="2026-04-23 17:55:18.350117674 +0000 UTC m=+182.137856935"
Apr 23 17:55:18.907398 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:18.907363 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4fq2j"
Apr 23 17:55:18.907538 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:18.907487 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4fq2j" podUID="ab8d91e1-def0-4ec9-93d5-476175cef3cd"
Apr 23 17:55:18.907591 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:18.907545 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7n2tw"
Apr 23 17:55:18.907677 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:18.907658 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7n2tw" podUID="133d891d-d4a9-44a1-ac6f-7a963f5616fe"
Apr 23 17:55:19.042774 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:19.042751 2565 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock"
Apr 23 17:55:19.242085 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:19.242050 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-nxfcd" event={"ID":"ae6fbd87-39e0-4719-affc-7aa3893a0f78","Type":"ContainerStarted","Data":"f73a6025f48060bd472a098c13f6555bcbea8b91701fccf87a6289a4b9e321b8"}
Apr 23 17:55:19.243531 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:19.243374 2565 generic.go:358] "Generic (PLEG): container finished" podID="e07a9b3c-f646-4dfc-bbb1-523478399c03" containerID="a84b2d789f81a58f603ef83828794ae9337fe8fb0482857fe02176ce0010a407" exitCode=0
Apr 23 17:55:19.243623 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:19.243451 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gw5sh" event={"ID":"e07a9b3c-f646-4dfc-bbb1-523478399c03","Type":"ContainerDied","Data":"a84b2d789f81a58f603ef83828794ae9337fe8fb0482857fe02176ce0010a407"}
Apr 23 17:55:19.246385 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:19.246261 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" event={"ID":"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5","Type":"ContainerStarted","Data":"5271c7b810079024045e7d56ad046a6b2fa86fea4019290f0937c92ad7fbcf21"}
Apr 23 17:55:19.246385 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:19.246287 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" event={"ID":"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5","Type":"ContainerStarted","Data":"ec9f72393c4760324841d9632ba7b1255f3bf2377edffb79e8600eaa7a04438b"}
Apr 23 17:55:19.246385 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:19.246302 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" event={"ID":"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5","Type":"ContainerStarted","Data":"030c2dac55f7c339296e86026ec9f23ad01971dc32a7ef95688cc92b85d6a827"}
Apr 23 17:55:19.246385 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:19.246314 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" event={"ID":"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5","Type":"ContainerStarted","Data":"5874dbb278c09a42a9e079a8eab873acf0a1d463a14c05e82bdd582c4bb8fc66"}
Apr 23 17:55:19.278038 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:19.277933 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/konnectivity-agent-brjd8" podStartSLOduration=47.520207641 podStartE2EDuration="1m4.277919612s" podCreationTimestamp="2026-04-23 17:54:15 +0000 UTC" firstStartedPulling="2026-04-23 17:55:01.186442296 +0000 UTC m=+164.974181535" lastFinishedPulling="2026-04-23 17:55:17.944154253 +0000 UTC m=+181.731893506" observedRunningTime="2026-04-23 17:55:18.373100127 +0000 UTC m=+182.160839388" watchObservedRunningTime="2026-04-23 17:55:19.277919612 +0000 UTC m=+183.065658871"
Apr 23 17:55:19.948331 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:19.948120 2565 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock","Timestamp":"2026-04-23T17:55:19.042769853Z","UUID":"a9231650-59aa-40ec-823e-d6ce3469dc7f","Handler":null,"Name":"","Endpoint":""}
Apr 23 17:55:19.952376 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:19.952350 2565 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: ebs.csi.aws.com endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock versions: 1.0.0
Apr 23 17:55:19.952376 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:19.952383 2565 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: ebs.csi.aws.com at endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock
Apr 23 17:55:20.249254 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:20.249215 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-nhffk" event={"ID":"0174e24b-fc3a-4f0a-8388-9b31e5a92647","Type":"ContainerStarted","Data":"6a0bc6bfee9bbc8cc7bee554e3420c7c76344ad1a39946ab41796d1201ebfbf7"}
Apr 23 17:55:20.250974 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:20.250937 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-nxfcd" event={"ID":"ae6fbd87-39e0-4719-affc-7aa3893a0f78","Type":"ContainerStarted","Data":"b77f8c17f445363da6fe0107c7351c1490ee6577fa6263c61150915b3afdf25f"}
Apr 23 17:55:20.269723 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:20.269683 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/iptables-alerter-nhffk" podStartSLOduration=48.589241335 podStartE2EDuration="1m5.269671738s" podCreationTimestamp="2026-04-23 17:54:15 +0000 UTC" firstStartedPulling="2026-04-23 17:55:01.203805553 +0000 UTC m=+164.991544791" lastFinishedPulling="2026-04-23 17:55:17.884235954 +0000 UTC m=+181.671975194" observedRunningTime="2026-04-23 17:55:20.269544148 +0000 UTC m=+184.057283407" watchObservedRunningTime="2026-04-23 17:55:20.269671738 +0000 UTC m=+184.057410998"
Apr 23 17:55:20.294829 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:20.294782 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-nxfcd" podStartSLOduration=46.668320666 podStartE2EDuration="1m5.294769845s" podCreationTimestamp="2026-04-23 17:54:15 +0000 UTC" firstStartedPulling="2026-04-23 17:55:01.191063518 +0000 UTC m=+164.978802767" lastFinishedPulling="2026-04-23 17:55:19.817512695 +0000 UTC m=+183.605251946" observedRunningTime="2026-04-23 17:55:20.294610281 +0000 UTC m=+184.082349541" watchObservedRunningTime="2026-04-23 17:55:20.294769845 +0000 UTC m=+184.082509083"
Apr 23 17:55:20.909993 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:20.909899 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7n2tw"
Apr 23 17:55:20.909993 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:20.909923 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4fq2j"
Apr 23 17:55:20.910195 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:20.910038 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7n2tw" podUID="133d891d-d4a9-44a1-ac6f-7a963f5616fe"
Apr 23 17:55:20.910195 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:20.910150 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4fq2j" podUID="ab8d91e1-def0-4ec9-93d5-476175cef3cd"
Apr 23 17:55:21.178224 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:21.178082 2565 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/konnectivity-agent-brjd8"
Apr 23 17:55:21.178794 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:21.178768 2565 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/konnectivity-agent-brjd8"
Apr 23 17:55:21.256435 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:21.256395 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" event={"ID":"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5","Type":"ContainerStarted","Data":"87eab755dff9ccff9b327caa4211a76c6839afee275a488e99aeabe48fb3d9d1"}
Apr 23 17:55:21.256971 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:21.256933 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kube-system/konnectivity-agent-brjd8"
Apr 23 17:55:21.257192 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:21.257172 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/konnectivity-agent-brjd8"
Apr 23 17:55:21.877242 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:21.877201 2565 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Apr 23 17:55:22.907277 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:22.907191 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4fq2j"
Apr 23 17:55:22.907277 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:22.907232 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7n2tw"
Apr 23 17:55:22.907724 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:22.907347 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4fq2j" podUID="ab8d91e1-def0-4ec9-93d5-476175cef3cd"
Apr 23 17:55:22.907724 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:22.907476 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7n2tw" podUID="133d891d-d4a9-44a1-ac6f-7a963f5616fe"
Apr 23 17:55:24.264450 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:24.264200 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" event={"ID":"f6bc6d36-30b4-4f0e-8a4e-46b934b798a5","Type":"ContainerStarted","Data":"0bb884d40f325b4b5c073fb27e25088e50579d389e0e0587a655fd34ff2599fe"}
Apr 23 17:55:24.265066 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:24.264467 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:24.265835 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:24.265810 2565 generic.go:358] "Generic (PLEG): container finished" podID="e07a9b3c-f646-4dfc-bbb1-523478399c03" containerID="9bc25136b2764b15a135b00cd31ba591d71e7b7d88f4e19f2821d8b1e71b2808" exitCode=0
Apr 23 17:55:24.265944 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:24.265857 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gw5sh" event={"ID":"e07a9b3c-f646-4dfc-bbb1-523478399c03","Type":"ContainerDied","Data":"9bc25136b2764b15a135b00cd31ba591d71e7b7d88f4e19f2821d8b1e71b2808"}
Apr 23 17:55:24.278572 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:24.278549 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:24.389684 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:24.389636 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" podStartSLOduration=52.577634526 podStartE2EDuration="1m9.389620752s" podCreationTimestamp="2026-04-23 17:54:15 +0000 UTC" firstStartedPulling="2026-04-23 17:55:01.208678279 +0000 UTC m=+164.996417517" lastFinishedPulling="2026-04-23 17:55:18.020664491 +0000 UTC m=+181.808403743" observedRunningTime="2026-04-23 17:55:24.318221479 +0000 UTC m=+188.105960746" watchObservedRunningTime="2026-04-23 17:55:24.389620752 +0000 UTC m=+188.177359994"
Apr 23 17:55:24.907432 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:24.907398 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7n2tw"
Apr 23 17:55:24.907660 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:24.907538 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7n2tw" podUID="133d891d-d4a9-44a1-ac6f-7a963f5616fe"
Apr 23 17:55:24.907660 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:24.907583 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4fq2j"
Apr 23 17:55:24.907737 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:24.907672 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4fq2j" podUID="ab8d91e1-def0-4ec9-93d5-476175cef3cd"
Apr 23 17:55:25.268661 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:25.268630 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:25.268661 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:25.268664 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:25.281940 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:25.281915 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dghxj"
Apr 23 17:55:26.271966 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:26.271914 2565 generic.go:358] "Generic (PLEG): container finished" podID="e07a9b3c-f646-4dfc-bbb1-523478399c03" containerID="529867f8e51a897494e438c79d4dcaf05c8b2aad63807af48953c9c1682f0740" exitCode=0
Apr 23 17:55:26.272390 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:26.271986 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gw5sh" event={"ID":"e07a9b3c-f646-4dfc-bbb1-523478399c03","Type":"ContainerDied","Data":"529867f8e51a897494e438c79d4dcaf05c8b2aad63807af48953c9c1682f0740"}
Apr 23 17:55:26.878540 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:26.878503 2565 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Apr 23 17:55:26.907176 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:26.907150 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4fq2j"
Apr 23 17:55:26.907320 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:26.907254 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4fq2j" podUID="ab8d91e1-def0-4ec9-93d5-476175cef3cd"
Apr 23 17:55:26.907320 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:26.907310 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7n2tw"
Apr 23 17:55:26.907421 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:26.907402 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7n2tw" podUID="133d891d-d4a9-44a1-ac6f-7a963f5616fe"
Apr 23 17:55:27.275817 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:27.275784 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gw5sh" event={"ID":"e07a9b3c-f646-4dfc-bbb1-523478399c03","Type":"ContainerStarted","Data":"305dd4fbb9188fa748b9f9c3af7ac1d2c5d45b076d92e62879110a006a5879de"}
Apr 23 17:55:28.280058 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:28.280021 2565 generic.go:358] "Generic (PLEG): container finished" podID="e07a9b3c-f646-4dfc-bbb1-523478399c03" containerID="305dd4fbb9188fa748b9f9c3af7ac1d2c5d45b076d92e62879110a006a5879de" exitCode=0
Apr 23 17:55:28.280518 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:28.280087 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gw5sh" event={"ID":"e07a9b3c-f646-4dfc-bbb1-523478399c03","Type":"ContainerDied","Data":"305dd4fbb9188fa748b9f9c3af7ac1d2c5d45b076d92e62879110a006a5879de"}
Apr 23 17:55:28.907274 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:28.907234 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7n2tw"
Apr 23 17:55:28.907487 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:28.907465 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4fq2j"
Apr 23 17:55:28.907587 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:28.907547 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7n2tw" podUID="133d891d-d4a9-44a1-ac6f-7a963f5616fe"
Apr 23 17:55:28.907698 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:28.907631 2565 scope.go:117] "RemoveContainer" containerID="51fa454c818a950384c7dc0033163202b3f75ec15ab20602270aaab54e7ba3f3"
Apr 23 17:55:28.907698 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:28.907650 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4fq2j" podUID="ab8d91e1-def0-4ec9-93d5-476175cef3cd"
Apr 23 17:55:29.283974 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:29.283937 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_7e3c596a27faede1f97b6bb0972592f6/kube-rbac-proxy-crio/4.log"
Apr 23 17:55:29.284407 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:29.284310 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal" event={"ID":"7e3c596a27faede1f97b6bb0972592f6","Type":"ContainerStarted","Data":"fd8ea5d9d833c7ed5fb6d4e8d51e115d7ffde1ad4fa0ca66dbaa794bdf5060c3"}
Apr 23 17:55:29.302444 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:29.302405 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal" podStartSLOduration=29.302391996 podStartE2EDuration="29.302391996s" podCreationTimestamp="2026-04-23 17:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:55:29.301951744 +0000 UTC m=+193.089691004" watchObservedRunningTime="2026-04-23 17:55:29.302391996 +0000 UTC m=+193.090131255"
Apr 23 17:55:30.907215 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:30.907173 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4fq2j"
Apr 23 17:55:30.907695 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:30.907173 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7n2tw"
Apr 23 17:55:30.907695 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:30.907312 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4fq2j" podUID="ab8d91e1-def0-4ec9-93d5-476175cef3cd"
Apr 23 17:55:30.907695 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:30.907429 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7n2tw" podUID="133d891d-d4a9-44a1-ac6f-7a963f5616fe"
Apr 23 17:55:31.879685 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:31.879640 2565 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Apr 23 17:55:32.625044 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:32.625006 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab8d91e1-def0-4ec9-93d5-476175cef3cd-metrics-certs\") pod \"network-metrics-daemon-4fq2j\" (UID: \"ab8d91e1-def0-4ec9-93d5-476175cef3cd\") " pod="openshift-multus/network-metrics-daemon-4fq2j"
Apr 23 17:55:32.625494 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:32.625149 2565 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:55:32.625494 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:32.625212 2565 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab8d91e1-def0-4ec9-93d5-476175cef3cd-metrics-certs podName:ab8d91e1-def0-4ec9-93d5-476175cef3cd nodeName:}" failed. No retries permitted until 2026-04-23 17:56:04.625192721 +0000 UTC m=+228.412931969 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab8d91e1-def0-4ec9-93d5-476175cef3cd-metrics-certs") pod "network-metrics-daemon-4fq2j" (UID: "ab8d91e1-def0-4ec9-93d5-476175cef3cd") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:55:32.725443 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:32.725409 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p2cwt\" (UniqueName: \"kubernetes.io/projected/133d891d-d4a9-44a1-ac6f-7a963f5616fe-kube-api-access-p2cwt\") pod \"network-check-target-7n2tw\" (UID: \"133d891d-d4a9-44a1-ac6f-7a963f5616fe\") " pod="openshift-network-diagnostics/network-check-target-7n2tw"
Apr 23 17:55:32.725626 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:32.725604 2565 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 23 17:55:32.725673 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:32.725633 2565 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 23 17:55:32.725673 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:32.725647 2565 projected.go:194] Error preparing data for projected volume kube-api-access-p2cwt for pod openshift-network-diagnostics/network-check-target-7n2tw: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:55:32.725734 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:32.725711 2565 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/133d891d-d4a9-44a1-ac6f-7a963f5616fe-kube-api-access-p2cwt podName:133d891d-d4a9-44a1-ac6f-7a963f5616fe nodeName:}" failed. No retries permitted until 2026-04-23 17:56:04.725692714 +0000 UTC m=+228.513431969 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-p2cwt" (UniqueName: "kubernetes.io/projected/133d891d-d4a9-44a1-ac6f-7a963f5616fe-kube-api-access-p2cwt") pod "network-check-target-7n2tw" (UID: "133d891d-d4a9-44a1-ac6f-7a963f5616fe") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:55:32.907068 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:32.906977 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7n2tw"
Apr 23 17:55:32.907068 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:32.907018 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4fq2j"
Apr 23 17:55:32.907272 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:32.907107 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7n2tw" podUID="133d891d-d4a9-44a1-ac6f-7a963f5616fe"
Apr 23 17:55:32.907333 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:32.907272 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4fq2j" podUID="ab8d91e1-def0-4ec9-93d5-476175cef3cd"
Apr 23 17:55:34.296578 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:34.296304 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gw5sh" event={"ID":"e07a9b3c-f646-4dfc-bbb1-523478399c03","Type":"ContainerStarted","Data":"55501992fdd80dfb25793f1981f9ccb4dcccad315e8267942602414f4c4f9dd7"}
Apr 23 17:55:34.907213 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:34.907177 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4fq2j"
Apr 23 17:55:34.907408 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:34.907177 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7n2tw"
Apr 23 17:55:34.907408 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:34.907318 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4fq2j" podUID="ab8d91e1-def0-4ec9-93d5-476175cef3cd"
Apr 23 17:55:34.907408 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:34.907348 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7n2tw" podUID="133d891d-d4a9-44a1-ac6f-7a963f5616fe"
Apr 23 17:55:35.300586 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:35.300556 2565 generic.go:358] "Generic (PLEG): container finished" podID="e07a9b3c-f646-4dfc-bbb1-523478399c03" containerID="55501992fdd80dfb25793f1981f9ccb4dcccad315e8267942602414f4c4f9dd7" exitCode=0
Apr 23 17:55:35.300991 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:35.300621 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gw5sh" event={"ID":"e07a9b3c-f646-4dfc-bbb1-523478399c03","Type":"ContainerDied","Data":"55501992fdd80dfb25793f1981f9ccb4dcccad315e8267942602414f4c4f9dd7"}
Apr 23 17:55:36.207497 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:36.207462 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-7n2tw"]
Apr 23 17:55:36.207876 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:36.207596 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7n2tw"
Apr 23 17:55:36.207876 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:36.207684 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7n2tw" podUID="133d891d-d4a9-44a1-ac6f-7a963f5616fe"
Apr 23 17:55:36.210392 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:36.210366 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-4fq2j"]
Apr 23 17:55:36.210511 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:36.210469 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4fq2j"
Apr 23 17:55:36.210575 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:36.210558 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4fq2j" podUID="ab8d91e1-def0-4ec9-93d5-476175cef3cd"
Apr 23 17:55:36.305123 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:36.305087 2565 generic.go:358] "Generic (PLEG): container finished" podID="e07a9b3c-f646-4dfc-bbb1-523478399c03" containerID="28d6acd2226eac4a1cb0c74b1ae124902c64e3c8494bb25629322a9f37188fdc" exitCode=0
Apr 23 17:55:36.305510 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:36.305136 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gw5sh" event={"ID":"e07a9b3c-f646-4dfc-bbb1-523478399c03","Type":"ContainerDied","Data":"28d6acd2226eac4a1cb0c74b1ae124902c64e3c8494bb25629322a9f37188fdc"}
Apr 23 17:55:36.880864 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:36.880784 2565 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Apr 23 17:55:37.309999 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:37.309797 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gw5sh" event={"ID":"e07a9b3c-f646-4dfc-bbb1-523478399c03","Type":"ContainerStarted","Data":"969e78c151751e2b9641475c82a5d61f599fe643bc01e3f0dbb012883a3f605e"}
Apr 23 17:55:37.341103 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:37.341052 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-gw5sh" podStartSLOduration=49.46064557 podStartE2EDuration="1m22.341038476s" podCreationTimestamp="2026-04-23 17:54:15 +0000 UTC" firstStartedPulling="2026-04-23 17:55:01.181096963 +0000 UTC m=+164.968836201" lastFinishedPulling="2026-04-23 17:55:34.061489865 +0000 UTC m=+197.849229107" observedRunningTime="2026-04-23 17:55:37.339842258 +0000 UTC m=+201.127581515" watchObservedRunningTime="2026-04-23 17:55:37.341038476 +0000 UTC m=+201.128777784"
Apr 23 17:55:37.906813 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:37.906781 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4fq2j"
Apr 23 17:55:37.906813 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:37.906810 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7n2tw"
Apr 23 17:55:37.907022 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:37.906897 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4fq2j" podUID="ab8d91e1-def0-4ec9-93d5-476175cef3cd"
Apr 23 17:55:37.907063 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:37.907028 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7n2tw" podUID="133d891d-d4a9-44a1-ac6f-7a963f5616fe"
Apr 23 17:55:39.907266 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:39.907229 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7n2tw"
Apr 23 17:55:39.907708 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:39.907229 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4fq2j"
Apr 23 17:55:39.907708 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:39.907331 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7n2tw" podUID="133d891d-d4a9-44a1-ac6f-7a963f5616fe"
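The "Observed pod startup duration" entry above for multus-additional-cni-plugins-gw5sh shows how the two reported durations relate: podStartSLOduration is the end-to-end startup time with the image-pull window subtracted, (watchObservedRunningTime - podCreationTimestamp) - (lastFinishedPulling - firstStartedPulling) = 82.341s - 32.880s = 49.461s, matching podStartSLOduration=49.46064557. A sketch that reproduces the arithmetic with the log's timestamps hard-coded (monotonic "m=+" suffixes dropped):

```go
// Reproduces podStartSLOduration for multus-additional-cni-plugins-gw5sh
// from the surrounding log entry:
//   SLO = (watchObservedRunningTime - podCreationTimestamp)
//       - (lastFinishedPulling - firstStartedPulling)
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-04-23 17:54:15 +0000 UTC")
	running := mustParse("2026-04-23 17:55:37.341038476 +0000 UTC")
	pullStart := mustParse("2026-04-23 17:55:01.181096963 +0000 UTC")
	pullEnd := mustParse("2026-04-23 17:55:34.061489865 +0000 UTC")

	e2e := running.Sub(created)
	pull := pullEnd.Sub(pullStart)
	fmt.Println("podStartE2EDuration:", e2e)      // 1m22.341038476s
	fmt.Println("podStartSLOduration:", e2e-pull) // 49.460645574s
}
```

The kube-rbac-proxy-crio entry earlier is the degenerate case: its pull timestamps are the zero time, so SLO and E2E durations coincide at 29.302s.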
pod="openshift-network-diagnostics/network-check-target-7n2tw" podUID="133d891d-d4a9-44a1-ac6f-7a963f5616fe" Apr 23 17:55:39.907708 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:39.907482 2565 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4fq2j" podUID="ab8d91e1-def0-4ec9-93d5-476175cef3cd" Apr 23 17:55:41.906649 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:41.906613 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7n2tw" Apr 23 17:55:41.907187 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:41.906622 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4fq2j" Apr 23 17:55:41.911754 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:41.911731 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Apr 23 17:55:41.911880 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:41.911731 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Apr 23 17:55:41.911880 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:41.911731 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-q5nnb\"" Apr 23 17:55:41.911880 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:41.911731 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Apr 23 17:55:41.911880 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:41.911739 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-r6xp2\"" Apr 23 17:55:43.652171 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.652139 2565 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-kz8mr"] Apr 23 17:55:43.656140 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.656116 2565 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-kz8mr" Apr 23 17:55:43.659747 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.659725 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"node-exporter-accelerators-collector-config\"" Apr 23 17:55:43.660099 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.660081 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"openshift-service-ca.crt\"" Apr 23 17:55:43.660163 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.660081 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"metrics-client-ca\"" Apr 23 17:55:43.661027 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.660996 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kube-root-ca.crt\"" Apr 23 17:55:43.661225 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.661187 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-tls\"" Apr 23 17:55:43.661346 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.661250 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-dockercfg-ctqbm\"" Apr 23 17:55:43.661346 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.661253 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-kube-rbac-proxy-config\"" Apr 23 17:55:43.799740 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.799706 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv9t9\" (UniqueName: \"kubernetes.io/projected/38a98591-9ab8-4f58-81e9-3bc1f46d0756-kube-api-access-jv9t9\") pod \"node-exporter-kz8mr\" (UID: \"38a98591-9ab8-4f58-81e9-3bc1f46d0756\") " pod="openshift-monitoring/node-exporter-kz8mr" Apr 23 17:55:43.799740 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.799744 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/38a98591-9ab8-4f58-81e9-3bc1f46d0756-metrics-client-ca\") pod \"node-exporter-kz8mr\" (UID: \"38a98591-9ab8-4f58-81e9-3bc1f46d0756\") " pod="openshift-monitoring/node-exporter-kz8mr" Apr 23 17:55:43.800009 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.799766 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/38a98591-9ab8-4f58-81e9-3bc1f46d0756-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-kz8mr\" (UID: \"38a98591-9ab8-4f58-81e9-3bc1f46d0756\") " pod="openshift-monitoring/node-exporter-kz8mr" Apr 23 17:55:43.800009 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.799787 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/38a98591-9ab8-4f58-81e9-3bc1f46d0756-sys\") pod \"node-exporter-kz8mr\" (UID: \"38a98591-9ab8-4f58-81e9-3bc1f46d0756\") " pod="openshift-monitoring/node-exporter-kz8mr" Apr 23 17:55:43.800009 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.799804 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: 
\"kubernetes.io/secret/38a98591-9ab8-4f58-81e9-3bc1f46d0756-node-exporter-tls\") pod \"node-exporter-kz8mr\" (UID: \"38a98591-9ab8-4f58-81e9-3bc1f46d0756\") " pod="openshift-monitoring/node-exporter-kz8mr" Apr 23 17:55:43.800009 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.799843 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/38a98591-9ab8-4f58-81e9-3bc1f46d0756-node-exporter-accelerators-collector-config\") pod \"node-exporter-kz8mr\" (UID: \"38a98591-9ab8-4f58-81e9-3bc1f46d0756\") " pod="openshift-monitoring/node-exporter-kz8mr" Apr 23 17:55:43.800009 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.799904 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/38a98591-9ab8-4f58-81e9-3bc1f46d0756-root\") pod \"node-exporter-kz8mr\" (UID: \"38a98591-9ab8-4f58-81e9-3bc1f46d0756\") " pod="openshift-monitoring/node-exporter-kz8mr" Apr 23 17:55:43.800009 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.799942 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/38a98591-9ab8-4f58-81e9-3bc1f46d0756-node-exporter-wtmp\") pod \"node-exporter-kz8mr\" (UID: \"38a98591-9ab8-4f58-81e9-3bc1f46d0756\") " pod="openshift-monitoring/node-exporter-kz8mr" Apr 23 17:55:43.800009 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.799981 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/38a98591-9ab8-4f58-81e9-3bc1f46d0756-node-exporter-textfile\") pod \"node-exporter-kz8mr\" (UID: \"38a98591-9ab8-4f58-81e9-3bc1f46d0756\") " pod="openshift-monitoring/node-exporter-kz8mr" Apr 23 17:55:43.900811 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.900781 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/38a98591-9ab8-4f58-81e9-3bc1f46d0756-node-exporter-textfile\") pod \"node-exporter-kz8mr\" (UID: \"38a98591-9ab8-4f58-81e9-3bc1f46d0756\") " pod="openshift-monitoring/node-exporter-kz8mr" Apr 23 17:55:43.901008 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.900848 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jv9t9\" (UniqueName: \"kubernetes.io/projected/38a98591-9ab8-4f58-81e9-3bc1f46d0756-kube-api-access-jv9t9\") pod \"node-exporter-kz8mr\" (UID: \"38a98591-9ab8-4f58-81e9-3bc1f46d0756\") " pod="openshift-monitoring/node-exporter-kz8mr" Apr 23 17:55:43.901008 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.900876 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/38a98591-9ab8-4f58-81e9-3bc1f46d0756-metrics-client-ca\") pod \"node-exporter-kz8mr\" (UID: \"38a98591-9ab8-4f58-81e9-3bc1f46d0756\") " pod="openshift-monitoring/node-exporter-kz8mr" Apr 23 17:55:43.901008 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.900902 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/38a98591-9ab8-4f58-81e9-3bc1f46d0756-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-kz8mr\" (UID: 
\"38a98591-9ab8-4f58-81e9-3bc1f46d0756\") " pod="openshift-monitoring/node-exporter-kz8mr" Apr 23 17:55:43.901008 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.900934 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/38a98591-9ab8-4f58-81e9-3bc1f46d0756-sys\") pod \"node-exporter-kz8mr\" (UID: \"38a98591-9ab8-4f58-81e9-3bc1f46d0756\") " pod="openshift-monitoring/node-exporter-kz8mr" Apr 23 17:55:43.901008 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.900972 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/38a98591-9ab8-4f58-81e9-3bc1f46d0756-node-exporter-tls\") pod \"node-exporter-kz8mr\" (UID: \"38a98591-9ab8-4f58-81e9-3bc1f46d0756\") " pod="openshift-monitoring/node-exporter-kz8mr" Apr 23 17:55:43.901008 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.900997 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/38a98591-9ab8-4f58-81e9-3bc1f46d0756-node-exporter-accelerators-collector-config\") pod \"node-exporter-kz8mr\" (UID: \"38a98591-9ab8-4f58-81e9-3bc1f46d0756\") " pod="openshift-monitoring/node-exporter-kz8mr" Apr 23 17:55:43.901339 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.901025 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/38a98591-9ab8-4f58-81e9-3bc1f46d0756-root\") pod \"node-exporter-kz8mr\" (UID: \"38a98591-9ab8-4f58-81e9-3bc1f46d0756\") " pod="openshift-monitoring/node-exporter-kz8mr" Apr 23 17:55:43.901339 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.901032 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/38a98591-9ab8-4f58-81e9-3bc1f46d0756-sys\") pod \"node-exporter-kz8mr\" (UID: \"38a98591-9ab8-4f58-81e9-3bc1f46d0756\") " pod="openshift-monitoring/node-exporter-kz8mr" Apr 23 17:55:43.901339 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.901057 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/38a98591-9ab8-4f58-81e9-3bc1f46d0756-node-exporter-wtmp\") pod \"node-exporter-kz8mr\" (UID: \"38a98591-9ab8-4f58-81e9-3bc1f46d0756\") " pod="openshift-monitoring/node-exporter-kz8mr" Apr 23 17:55:43.901339 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:55:43.901145 2565 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found Apr 23 17:55:43.901339 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.901195 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/38a98591-9ab8-4f58-81e9-3bc1f46d0756-node-exporter-textfile\") pod \"node-exporter-kz8mr\" (UID: \"38a98591-9ab8-4f58-81e9-3bc1f46d0756\") " pod="openshift-monitoring/node-exporter-kz8mr" Apr 23 17:55:43.901339 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.901148 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/38a98591-9ab8-4f58-81e9-3bc1f46d0756-root\") pod \"node-exporter-kz8mr\" (UID: \"38a98591-9ab8-4f58-81e9-3bc1f46d0756\") " pod="openshift-monitoring/node-exporter-kz8mr" Apr 23 17:55:43.901339 ip-10-0-131-177 kubenswrapper[2565]: E0423 
17:55:43.901221 2565 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38a98591-9ab8-4f58-81e9-3bc1f46d0756-node-exporter-tls podName:38a98591-9ab8-4f58-81e9-3bc1f46d0756 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:44.401199933 +0000 UTC m=+208.188939191 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/38a98591-9ab8-4f58-81e9-3bc1f46d0756-node-exporter-tls") pod "node-exporter-kz8mr" (UID: "38a98591-9ab8-4f58-81e9-3bc1f46d0756") : secret "node-exporter-tls" not found Apr 23 17:55:43.901339 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.901254 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/38a98591-9ab8-4f58-81e9-3bc1f46d0756-node-exporter-wtmp\") pod \"node-exporter-kz8mr\" (UID: \"38a98591-9ab8-4f58-81e9-3bc1f46d0756\") " pod="openshift-monitoring/node-exporter-kz8mr" Apr 23 17:55:43.901842 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.901825 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/38a98591-9ab8-4f58-81e9-3bc1f46d0756-metrics-client-ca\") pod \"node-exporter-kz8mr\" (UID: \"38a98591-9ab8-4f58-81e9-3bc1f46d0756\") " pod="openshift-monitoring/node-exporter-kz8mr" Apr 23 17:55:43.901936 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.901913 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/38a98591-9ab8-4f58-81e9-3bc1f46d0756-node-exporter-accelerators-collector-config\") pod \"node-exporter-kz8mr\" (UID: \"38a98591-9ab8-4f58-81e9-3bc1f46d0756\") " pod="openshift-monitoring/node-exporter-kz8mr" Apr 23 17:55:43.905023 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.904950 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/38a98591-9ab8-4f58-81e9-3bc1f46d0756-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-kz8mr\" (UID: \"38a98591-9ab8-4f58-81e9-3bc1f46d0756\") " pod="openshift-monitoring/node-exporter-kz8mr" Apr 23 17:55:43.912096 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:43.912075 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv9t9\" (UniqueName: \"kubernetes.io/projected/38a98591-9ab8-4f58-81e9-3bc1f46d0756-kube-api-access-jv9t9\") pod \"node-exporter-kz8mr\" (UID: \"38a98591-9ab8-4f58-81e9-3bc1f46d0756\") " pod="openshift-monitoring/node-exporter-kz8mr" Apr 23 17:55:44.405638 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:44.405596 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/38a98591-9ab8-4f58-81e9-3bc1f46d0756-node-exporter-tls\") pod \"node-exporter-kz8mr\" (UID: \"38a98591-9ab8-4f58-81e9-3bc1f46d0756\") " pod="openshift-monitoring/node-exporter-kz8mr" Apr 23 17:55:44.407941 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:44.407910 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/38a98591-9ab8-4f58-81e9-3bc1f46d0756-node-exporter-tls\") pod \"node-exporter-kz8mr\" (UID: \"38a98591-9ab8-4f58-81e9-3bc1f46d0756\") " pod="openshift-monitoring/node-exporter-kz8mr" Apr 23 17:55:44.564783 ip-10-0-131-177 kubenswrapper[2565]: 
I0423 17:55:44.564755 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-kz8mr" Apr 23 17:55:44.574545 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:55:44.574512 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38a98591_9ab8_4f58_81e9_3bc1f46d0756.slice/crio-d58d0c9b9968cbeb41c2aff231870973761c3685dcd0bd3fe60a642a649e1ca5 WatchSource:0}: Error finding container d58d0c9b9968cbeb41c2aff231870973761c3685dcd0bd3fe60a642a649e1ca5: Status 404 returned error can't find the container with id d58d0c9b9968cbeb41c2aff231870973761c3685dcd0bd3fe60a642a649e1ca5 Apr 23 17:55:45.325495 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:45.325463 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-kz8mr" event={"ID":"38a98591-9ab8-4f58-81e9-3bc1f46d0756","Type":"ContainerStarted","Data":"d58d0c9b9968cbeb41c2aff231870973761c3685dcd0bd3fe60a642a649e1ca5"} Apr 23 17:55:46.329123 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:46.328936 2565 generic.go:358] "Generic (PLEG): container finished" podID="38a98591-9ab8-4f58-81e9-3bc1f46d0756" containerID="8a234acc68947f1eb371d64f0946e6a74304da27a252cb6fd28e50f5e2297e20" exitCode=0 Apr 23 17:55:46.329123 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:46.329020 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-kz8mr" event={"ID":"38a98591-9ab8-4f58-81e9-3bc1f46d0756","Type":"ContainerDied","Data":"8a234acc68947f1eb371d64f0946e6a74304da27a252cb6fd28e50f5e2297e20"} Apr 23 17:55:47.311824 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.311327 2565 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-131-177.ec2.internal" event="NodeReady" Apr 23 17:55:47.332901 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.332864 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-kz8mr" event={"ID":"38a98591-9ab8-4f58-81e9-3bc1f46d0756","Type":"ContainerStarted","Data":"04a081bd491fb5859c5d81173d3b1f98518ae330fb3ad9fbc7ff16fbe5669817"} Apr 23 17:55:47.333267 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.332906 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-kz8mr" event={"ID":"38a98591-9ab8-4f58-81e9-3bc1f46d0756","Type":"ContainerStarted","Data":"e8e2a9522d86b8e86cdb7e7c64143943ab510da4c4aa9cf7e35d7a4b673012e8"} Apr 23 17:55:47.360238 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.360188 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-kz8mr" podStartSLOduration=3.661061142 podStartE2EDuration="4.360173167s" podCreationTimestamp="2026-04-23 17:55:43 +0000 UTC" firstStartedPulling="2026-04-23 17:55:44.576392746 +0000 UTC m=+208.364131984" lastFinishedPulling="2026-04-23 17:55:45.275504768 +0000 UTC m=+209.063244009" observedRunningTime="2026-04-23 17:55:47.359860407 +0000 UTC m=+211.147599689" watchObservedRunningTime="2026-04-23 17:55:47.360173167 +0000 UTC m=+211.147912428" Apr 23 17:55:47.361949 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.361917 2565 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-54dc5c8cc9-q4z6v"] Apr 23 17:55:47.383525 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.383502 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
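The node-exporter-kz8mr startup entry above follows the same rule as before: 4.360s end to end minus the 0.699s pull window gives podStartSLOduration=3.661s. Lines like these, all tagged pod_startup_latency_tracker.go, are the easiest startup-latency signal to mine from a node journal. A small stdlib sketch (hypothetical tooling, not part of the kubelet) that extracts pod name and SLO duration from journal lines on stdin:

```go
// Scan journal lines on stdin and print pod name and podStartSLOduration
// from "Observed pod startup duration" entries. The regex is tailored to
// the klog key=value format seen in this log.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var re = regexp.MustCompile(`"Observed pod startup duration" pod="([^"]+)" podStartSLOduration=([0-9.]+)`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("%-70s %ss\n", m[1], m[2])
		}
	}
}
```

Fed this section, it would print the kube-rbac-proxy-crio, multus-additional-cni-plugins-gw5sh, and node-exporter-kz8mr rows seen above.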
pods=["openshift-image-registry/image-registry-54dc5c8cc9-q4z6v"] Apr 23 17:55:47.383654 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.383606 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-54dc5c8cc9-q4z6v" Apr 23 17:55:47.383977 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.383940 2565 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-runtime-extractor-jf562"] Apr 23 17:55:47.386129 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.386109 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Apr 23 17:55:47.386232 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.386129 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-private-configuration\"" Apr 23 17:55:47.386439 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.386420 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-l8dx7\"" Apr 23 17:55:47.386538 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.386462 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Apr 23 17:55:47.393449 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.393432 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Apr 23 17:55:47.395471 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.395451 2565 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-wvtqh"] Apr 23 17:55:47.395614 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.395599 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-runtime-extractor-jf562" Apr 23 17:55:47.399509 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.399489 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-rbac-proxy\"" Apr 23 17:55:47.399631 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.399524 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-sa-dockercfg-9q5jf\"" Apr 23 17:55:47.399631 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.399619 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"openshift-service-ca.crt\"" Apr 23 17:55:47.399751 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.399690 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-tls\"" Apr 23 17:55:47.400044 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.400027 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-root-ca.crt\"" Apr 23 17:55:47.407565 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.407545 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-jf562"] Apr 23 17:55:47.407641 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.407634 2565 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-wvtqh" Apr 23 17:55:47.410599 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.410585 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Apr 23 17:55:47.411598 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.411572 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Apr 23 17:55:47.411598 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.411596 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Apr 23 17:55:47.418124 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.418107 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-xzdf6\"" Apr 23 17:55:47.430512 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.430494 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-wvtqh"] Apr 23 17:55:47.485863 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.485840 2565 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-9vf42"] Apr 23 17:55:47.519525 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.519498 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-9vf42"] Apr 23 17:55:47.519660 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.519619 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-9vf42" Apr 23 17:55:47.524834 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.524816 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Apr 23 17:55:47.525006 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.524952 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Apr 23 17:55:47.525090 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.525071 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/502c5299-00b3-4f96-81c6-b7d544cfc79a-trusted-ca\") pod \"image-registry-54dc5c8cc9-q4z6v\" (UID: \"502c5299-00b3-4f96-81c6-b7d544cfc79a\") " pod="openshift-image-registry/image-registry-54dc5c8cc9-q4z6v" Apr 23 17:55:47.525150 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.525110 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/502c5299-00b3-4f96-81c6-b7d544cfc79a-bound-sa-token\") pod \"image-registry-54dc5c8cc9-q4z6v\" (UID: \"502c5299-00b3-4f96-81c6-b7d544cfc79a\") " pod="openshift-image-registry/image-registry-54dc5c8cc9-q4z6v" Apr 23 17:55:47.525208 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.525155 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxtf6\" (UniqueName: \"kubernetes.io/projected/8d651935-ade8-4ad7-91b8-d50bd718e6d8-kube-api-access-mxtf6\") pod \"ingress-canary-wvtqh\" (UID: \"8d651935-ade8-4ad7-91b8-d50bd718e6d8\") " pod="openshift-ingress-canary/ingress-canary-wvtqh" Apr 23 17:55:47.525251 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.525207 2565 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn9qs\" (UniqueName: \"kubernetes.io/projected/502c5299-00b3-4f96-81c6-b7d544cfc79a-kube-api-access-jn9qs\") pod \"image-registry-54dc5c8cc9-q4z6v\" (UID: \"502c5299-00b3-4f96-81c6-b7d544cfc79a\") " pod="openshift-image-registry/image-registry-54dc5c8cc9-q4z6v" Apr 23 17:55:47.525302 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.525256 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/502c5299-00b3-4f96-81c6-b7d544cfc79a-image-registry-private-configuration\") pod \"image-registry-54dc5c8cc9-q4z6v\" (UID: \"502c5299-00b3-4f96-81c6-b7d544cfc79a\") " pod="openshift-image-registry/image-registry-54dc5c8cc9-q4z6v" Apr 23 17:55:47.525302 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.525285 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/502c5299-00b3-4f96-81c6-b7d544cfc79a-installation-pull-secrets\") pod \"image-registry-54dc5c8cc9-q4z6v\" (UID: \"502c5299-00b3-4f96-81c6-b7d544cfc79a\") " pod="openshift-image-registry/image-registry-54dc5c8cc9-q4z6v" Apr 23 17:55:47.525494 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.525320 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/160b686c-d212-4373-af80-30f03f159c0b-crio-socket\") pod \"insights-runtime-extractor-jf562\" (UID: \"160b686c-d212-4373-af80-30f03f159c0b\") " pod="openshift-insights/insights-runtime-extractor-jf562" Apr 23 17:55:47.525494 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.525345 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/160b686c-d212-4373-af80-30f03f159c0b-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-jf562\" (UID: \"160b686c-d212-4373-af80-30f03f159c0b\") " pod="openshift-insights/insights-runtime-extractor-jf562" Apr 23 17:55:47.525494 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.525375 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-jll6l\"" Apr 23 17:55:47.525494 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.525408 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kph8w\" (UniqueName: \"kubernetes.io/projected/160b686c-d212-4373-af80-30f03f159c0b-kube-api-access-kph8w\") pod \"insights-runtime-extractor-jf562\" (UID: \"160b686c-d212-4373-af80-30f03f159c0b\") " pod="openshift-insights/insights-runtime-extractor-jf562" Apr 23 17:55:47.525494 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.525431 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d651935-ade8-4ad7-91b8-d50bd718e6d8-cert\") pod \"ingress-canary-wvtqh\" (UID: \"8d651935-ade8-4ad7-91b8-d50bd718e6d8\") " pod="openshift-ingress-canary/ingress-canary-wvtqh" Apr 23 17:55:47.525494 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.525473 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/502c5299-00b3-4f96-81c6-b7d544cfc79a-registry-certificates\") pod \"image-registry-54dc5c8cc9-q4z6v\" (UID: \"502c5299-00b3-4f96-81c6-b7d544cfc79a\") " pod="openshift-image-registry/image-registry-54dc5c8cc9-q4z6v" Apr 23 17:55:47.525788 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.525510 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/160b686c-d212-4373-af80-30f03f159c0b-data-volume\") pod \"insights-runtime-extractor-jf562\" (UID: \"160b686c-d212-4373-af80-30f03f159c0b\") " pod="openshift-insights/insights-runtime-extractor-jf562" Apr 23 17:55:47.525788 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.525534 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/160b686c-d212-4373-af80-30f03f159c0b-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-jf562\" (UID: \"160b686c-d212-4373-af80-30f03f159c0b\") " pod="openshift-insights/insights-runtime-extractor-jf562" Apr 23 17:55:47.525788 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.525604 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/502c5299-00b3-4f96-81c6-b7d544cfc79a-registry-tls\") pod \"image-registry-54dc5c8cc9-q4z6v\" (UID: \"502c5299-00b3-4f96-81c6-b7d544cfc79a\") " pod="openshift-image-registry/image-registry-54dc5c8cc9-q4z6v" Apr 23 17:55:47.525788 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.525629 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/502c5299-00b3-4f96-81c6-b7d544cfc79a-ca-trust-extracted\") pod \"image-registry-54dc5c8cc9-q4z6v\" (UID: \"502c5299-00b3-4f96-81c6-b7d544cfc79a\") " pod="openshift-image-registry/image-registry-54dc5c8cc9-q4z6v" Apr 23 17:55:47.626524 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.626448 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/160b686c-d212-4373-af80-30f03f159c0b-crio-socket\") pod \"insights-runtime-extractor-jf562\" (UID: \"160b686c-d212-4373-af80-30f03f159c0b\") " pod="openshift-insights/insights-runtime-extractor-jf562" Apr 23 17:55:47.626524 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.626478 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/160b686c-d212-4373-af80-30f03f159c0b-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-jf562\" (UID: \"160b686c-d212-4373-af80-30f03f159c0b\") " pod="openshift-insights/insights-runtime-extractor-jf562" Apr 23 17:55:47.626524 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.626515 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kph8w\" (UniqueName: \"kubernetes.io/projected/160b686c-d212-4373-af80-30f03f159c0b-kube-api-access-kph8w\") pod \"insights-runtime-extractor-jf562\" (UID: \"160b686c-d212-4373-af80-30f03f159c0b\") " pod="openshift-insights/insights-runtime-extractor-jf562" Apr 23 17:55:47.626785 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.626533 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/8d651935-ade8-4ad7-91b8-d50bd718e6d8-cert\") pod \"ingress-canary-wvtqh\" (UID: \"8d651935-ade8-4ad7-91b8-d50bd718e6d8\") " pod="openshift-ingress-canary/ingress-canary-wvtqh" Apr 23 17:55:47.626785 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.626555 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be6942ff-e805-4834-ad15-79b60bde1296-config-volume\") pod \"dns-default-9vf42\" (UID: \"be6942ff-e805-4834-ad15-79b60bde1296\") " pod="openshift-dns/dns-default-9vf42" Apr 23 17:55:47.626785 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.626603 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrd46\" (UniqueName: \"kubernetes.io/projected/be6942ff-e805-4834-ad15-79b60bde1296-kube-api-access-xrd46\") pod \"dns-default-9vf42\" (UID: \"be6942ff-e805-4834-ad15-79b60bde1296\") " pod="openshift-dns/dns-default-9vf42" Apr 23 17:55:47.626785 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.626648 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/502c5299-00b3-4f96-81c6-b7d544cfc79a-registry-certificates\") pod \"image-registry-54dc5c8cc9-q4z6v\" (UID: \"502c5299-00b3-4f96-81c6-b7d544cfc79a\") " pod="openshift-image-registry/image-registry-54dc5c8cc9-q4z6v" Apr 23 17:55:47.626785 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.626670 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/160b686c-d212-4373-af80-30f03f159c0b-crio-socket\") pod \"insights-runtime-extractor-jf562\" (UID: \"160b686c-d212-4373-af80-30f03f159c0b\") " pod="openshift-insights/insights-runtime-extractor-jf562" Apr 23 17:55:47.626785 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.626686 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/160b686c-d212-4373-af80-30f03f159c0b-data-volume\") pod \"insights-runtime-extractor-jf562\" (UID: \"160b686c-d212-4373-af80-30f03f159c0b\") " pod="openshift-insights/insights-runtime-extractor-jf562" Apr 23 17:55:47.626785 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.626714 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/160b686c-d212-4373-af80-30f03f159c0b-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-jf562\" (UID: \"160b686c-d212-4373-af80-30f03f159c0b\") " pod="openshift-insights/insights-runtime-extractor-jf562" Apr 23 17:55:47.626785 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.626739 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/502c5299-00b3-4f96-81c6-b7d544cfc79a-registry-tls\") pod \"image-registry-54dc5c8cc9-q4z6v\" (UID: \"502c5299-00b3-4f96-81c6-b7d544cfc79a\") " pod="openshift-image-registry/image-registry-54dc5c8cc9-q4z6v" Apr 23 17:55:47.626785 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.626768 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/502c5299-00b3-4f96-81c6-b7d544cfc79a-ca-trust-extracted\") pod \"image-registry-54dc5c8cc9-q4z6v\" (UID: \"502c5299-00b3-4f96-81c6-b7d544cfc79a\") " 
pod="openshift-image-registry/image-registry-54dc5c8cc9-q4z6v" Apr 23 17:55:47.627242 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.626798 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/502c5299-00b3-4f96-81c6-b7d544cfc79a-trusted-ca\") pod \"image-registry-54dc5c8cc9-q4z6v\" (UID: \"502c5299-00b3-4f96-81c6-b7d544cfc79a\") " pod="openshift-image-registry/image-registry-54dc5c8cc9-q4z6v" Apr 23 17:55:47.627242 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.626828 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/502c5299-00b3-4f96-81c6-b7d544cfc79a-bound-sa-token\") pod \"image-registry-54dc5c8cc9-q4z6v\" (UID: \"502c5299-00b3-4f96-81c6-b7d544cfc79a\") " pod="openshift-image-registry/image-registry-54dc5c8cc9-q4z6v" Apr 23 17:55:47.627242 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.626872 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mxtf6\" (UniqueName: \"kubernetes.io/projected/8d651935-ade8-4ad7-91b8-d50bd718e6d8-kube-api-access-mxtf6\") pod \"ingress-canary-wvtqh\" (UID: \"8d651935-ade8-4ad7-91b8-d50bd718e6d8\") " pod="openshift-ingress-canary/ingress-canary-wvtqh" Apr 23 17:55:47.627242 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.626898 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jn9qs\" (UniqueName: \"kubernetes.io/projected/502c5299-00b3-4f96-81c6-b7d544cfc79a-kube-api-access-jn9qs\") pod \"image-registry-54dc5c8cc9-q4z6v\" (UID: \"502c5299-00b3-4f96-81c6-b7d544cfc79a\") " pod="openshift-image-registry/image-registry-54dc5c8cc9-q4z6v" Apr 23 17:55:47.627242 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.626940 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/502c5299-00b3-4f96-81c6-b7d544cfc79a-image-registry-private-configuration\") pod \"image-registry-54dc5c8cc9-q4z6v\" (UID: \"502c5299-00b3-4f96-81c6-b7d544cfc79a\") " pod="openshift-image-registry/image-registry-54dc5c8cc9-q4z6v" Apr 23 17:55:47.627242 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.626993 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/502c5299-00b3-4f96-81c6-b7d544cfc79a-installation-pull-secrets\") pod \"image-registry-54dc5c8cc9-q4z6v\" (UID: \"502c5299-00b3-4f96-81c6-b7d544cfc79a\") " pod="openshift-image-registry/image-registry-54dc5c8cc9-q4z6v" Apr 23 17:55:47.627242 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.627028 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/be6942ff-e805-4834-ad15-79b60bde1296-metrics-tls\") pod \"dns-default-9vf42\" (UID: \"be6942ff-e805-4834-ad15-79b60bde1296\") " pod="openshift-dns/dns-default-9vf42" Apr 23 17:55:47.627242 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.627036 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/160b686c-d212-4373-af80-30f03f159c0b-data-volume\") pod \"insights-runtime-extractor-jf562\" (UID: \"160b686c-d212-4373-af80-30f03f159c0b\") " pod="openshift-insights/insights-runtime-extractor-jf562" Apr 23 17:55:47.627242 ip-10-0-131-177 
kubenswrapper[2565]: I0423 17:55:47.627054 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/be6942ff-e805-4834-ad15-79b60bde1296-tmp-dir\") pod \"dns-default-9vf42\" (UID: \"be6942ff-e805-4834-ad15-79b60bde1296\") " pod="openshift-dns/dns-default-9vf42" Apr 23 17:55:47.627242 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.627185 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/160b686c-d212-4373-af80-30f03f159c0b-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-jf562\" (UID: \"160b686c-d212-4373-af80-30f03f159c0b\") " pod="openshift-insights/insights-runtime-extractor-jf562" Apr 23 17:55:47.628587 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.628560 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/502c5299-00b3-4f96-81c6-b7d544cfc79a-trusted-ca\") pod \"image-registry-54dc5c8cc9-q4z6v\" (UID: \"502c5299-00b3-4f96-81c6-b7d544cfc79a\") " pod="openshift-image-registry/image-registry-54dc5c8cc9-q4z6v" Apr 23 17:55:47.628985 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.628942 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/502c5299-00b3-4f96-81c6-b7d544cfc79a-ca-trust-extracted\") pod \"image-registry-54dc5c8cc9-q4z6v\" (UID: \"502c5299-00b3-4f96-81c6-b7d544cfc79a\") " pod="openshift-image-registry/image-registry-54dc5c8cc9-q4z6v" Apr 23 17:55:47.630430 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.630406 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/502c5299-00b3-4f96-81c6-b7d544cfc79a-installation-pull-secrets\") pod \"image-registry-54dc5c8cc9-q4z6v\" (UID: \"502c5299-00b3-4f96-81c6-b7d544cfc79a\") " pod="openshift-image-registry/image-registry-54dc5c8cc9-q4z6v" Apr 23 17:55:47.630529 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.630414 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/502c5299-00b3-4f96-81c6-b7d544cfc79a-registry-tls\") pod \"image-registry-54dc5c8cc9-q4z6v\" (UID: \"502c5299-00b3-4f96-81c6-b7d544cfc79a\") " pod="openshift-image-registry/image-registry-54dc5c8cc9-q4z6v" Apr 23 17:55:47.630787 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.630767 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/160b686c-d212-4373-af80-30f03f159c0b-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-jf562\" (UID: \"160b686c-d212-4373-af80-30f03f159c0b\") " pod="openshift-insights/insights-runtime-extractor-jf562" Apr 23 17:55:47.630828 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.630772 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d651935-ade8-4ad7-91b8-d50bd718e6d8-cert\") pod \"ingress-canary-wvtqh\" (UID: \"8d651935-ade8-4ad7-91b8-d50bd718e6d8\") " pod="openshift-ingress-canary/ingress-canary-wvtqh" Apr 23 17:55:47.630911 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.630892 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-private-configuration\" (UniqueName: 
\"kubernetes.io/secret/502c5299-00b3-4f96-81c6-b7d544cfc79a-image-registry-private-configuration\") pod \"image-registry-54dc5c8cc9-q4z6v\" (UID: \"502c5299-00b3-4f96-81c6-b7d544cfc79a\") " pod="openshift-image-registry/image-registry-54dc5c8cc9-q4z6v" Apr 23 17:55:47.638001 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.637928 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/502c5299-00b3-4f96-81c6-b7d544cfc79a-registry-certificates\") pod \"image-registry-54dc5c8cc9-q4z6v\" (UID: \"502c5299-00b3-4f96-81c6-b7d544cfc79a\") " pod="openshift-image-registry/image-registry-54dc5c8cc9-q4z6v" Apr 23 17:55:47.638729 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.638684 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kph8w\" (UniqueName: \"kubernetes.io/projected/160b686c-d212-4373-af80-30f03f159c0b-kube-api-access-kph8w\") pod \"insights-runtime-extractor-jf562\" (UID: \"160b686c-d212-4373-af80-30f03f159c0b\") " pod="openshift-insights/insights-runtime-extractor-jf562" Apr 23 17:55:47.639845 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.639822 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn9qs\" (UniqueName: \"kubernetes.io/projected/502c5299-00b3-4f96-81c6-b7d544cfc79a-kube-api-access-jn9qs\") pod \"image-registry-54dc5c8cc9-q4z6v\" (UID: \"502c5299-00b3-4f96-81c6-b7d544cfc79a\") " pod="openshift-image-registry/image-registry-54dc5c8cc9-q4z6v" Apr 23 17:55:47.640225 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.640202 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxtf6\" (UniqueName: \"kubernetes.io/projected/8d651935-ade8-4ad7-91b8-d50bd718e6d8-kube-api-access-mxtf6\") pod \"ingress-canary-wvtqh\" (UID: \"8d651935-ade8-4ad7-91b8-d50bd718e6d8\") " pod="openshift-ingress-canary/ingress-canary-wvtqh" Apr 23 17:55:47.640473 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.640455 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/502c5299-00b3-4f96-81c6-b7d544cfc79a-bound-sa-token\") pod \"image-registry-54dc5c8cc9-q4z6v\" (UID: \"502c5299-00b3-4f96-81c6-b7d544cfc79a\") " pod="openshift-image-registry/image-registry-54dc5c8cc9-q4z6v" Apr 23 17:55:47.693741 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.693719 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-54dc5c8cc9-q4z6v" Apr 23 17:55:47.704400 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.704379 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-runtime-extractor-jf562" Apr 23 17:55:47.715029 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.715009 2565 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-wvtqh" Apr 23 17:55:47.727808 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.727784 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/be6942ff-e805-4834-ad15-79b60bde1296-metrics-tls\") pod \"dns-default-9vf42\" (UID: \"be6942ff-e805-4834-ad15-79b60bde1296\") " pod="openshift-dns/dns-default-9vf42" Apr 23 17:55:47.727937 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.727813 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/be6942ff-e805-4834-ad15-79b60bde1296-tmp-dir\") pod \"dns-default-9vf42\" (UID: \"be6942ff-e805-4834-ad15-79b60bde1296\") " pod="openshift-dns/dns-default-9vf42" Apr 23 17:55:47.727937 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.727845 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be6942ff-e805-4834-ad15-79b60bde1296-config-volume\") pod \"dns-default-9vf42\" (UID: \"be6942ff-e805-4834-ad15-79b60bde1296\") " pod="openshift-dns/dns-default-9vf42" Apr 23 17:55:47.727937 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.727860 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xrd46\" (UniqueName: \"kubernetes.io/projected/be6942ff-e805-4834-ad15-79b60bde1296-kube-api-access-xrd46\") pod \"dns-default-9vf42\" (UID: \"be6942ff-e805-4834-ad15-79b60bde1296\") " pod="openshift-dns/dns-default-9vf42" Apr 23 17:55:47.728227 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.728206 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/be6942ff-e805-4834-ad15-79b60bde1296-tmp-dir\") pod \"dns-default-9vf42\" (UID: \"be6942ff-e805-4834-ad15-79b60bde1296\") " pod="openshift-dns/dns-default-9vf42" Apr 23 17:55:47.728608 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.728585 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be6942ff-e805-4834-ad15-79b60bde1296-config-volume\") pod \"dns-default-9vf42\" (UID: \"be6942ff-e805-4834-ad15-79b60bde1296\") " pod="openshift-dns/dns-default-9vf42" Apr 23 17:55:47.731408 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.730638 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/be6942ff-e805-4834-ad15-79b60bde1296-metrics-tls\") pod \"dns-default-9vf42\" (UID: \"be6942ff-e805-4834-ad15-79b60bde1296\") " pod="openshift-dns/dns-default-9vf42" Apr 23 17:55:47.738897 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.738872 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrd46\" (UniqueName: \"kubernetes.io/projected/be6942ff-e805-4834-ad15-79b60bde1296-kube-api-access-xrd46\") pod \"dns-default-9vf42\" (UID: \"be6942ff-e805-4834-ad15-79b60bde1296\") " pod="openshift-dns/dns-default-9vf42" Apr 23 17:55:47.828781 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.828721 2565 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-9vf42" Apr 23 17:55:47.859337 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.859182 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-wvtqh"] Apr 23 17:55:47.874826 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.874387 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-jf562"] Apr 23 17:55:47.874976 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:55:47.874926 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod160b686c_d212_4373_af80_30f03f159c0b.slice/crio-a474f630ee47f54041b19f753ede4db01be1f8650d69066ae83c032c309ad3a6 WatchSource:0}: Error finding container a474f630ee47f54041b19f753ede4db01be1f8650d69066ae83c032c309ad3a6: Status 404 returned error can't find the container with id a474f630ee47f54041b19f753ede4db01be1f8650d69066ae83c032c309ad3a6 Apr 23 17:55:47.876225 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.876186 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-54dc5c8cc9-q4z6v"] Apr 23 17:55:47.880061 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:55:47.879823 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod502c5299_00b3_4f96_81c6_b7d544cfc79a.slice/crio-aca35591eace4fa48a8ce86bf35f8d1cd0ef6cc7ec75b6b8da5f782b6f17c8fc WatchSource:0}: Error finding container aca35591eace4fa48a8ce86bf35f8d1cd0ef6cc7ec75b6b8da5f782b6f17c8fc: Status 404 returned error can't find the container with id aca35591eace4fa48a8ce86bf35f8d1cd0ef6cc7ec75b6b8da5f782b6f17c8fc Apr 23 17:55:47.951065 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:47.951040 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-9vf42"] Apr 23 17:55:47.955145 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:55:47.955123 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe6942ff_e805_4834_ad15_79b60bde1296.slice/crio-1f65bbb9970882e1412083255f7ba2bbe6a58164150fcec42800ef0cb630ebd3 WatchSource:0}: Error finding container 1f65bbb9970882e1412083255f7ba2bbe6a58164150fcec42800ef0cb630ebd3: Status 404 returned error can't find the container with id 1f65bbb9970882e1412083255f7ba2bbe6a58164150fcec42800ef0cb630ebd3 Apr 23 17:55:48.186396 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.186315 2565 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-68b7b96464-lkmkn"] Apr 23 17:55:48.204343 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.204315 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-68b7b96464-lkmkn"] Apr 23 17:55:48.204493 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.204441 2565 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-68b7b96464-lkmkn" Apr 23 17:55:48.207581 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.207546 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"metrics-server-audit-profiles\"" Apr 23 17:55:48.207721 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.207650 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kubelet-serving-ca-bundle\"" Apr 23 17:55:48.207721 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.207679 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"metrics-server-dockercfg-cphwf\"" Apr 23 17:55:48.207721 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.207694 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"metrics-server-8qej0ft9d6gsb\"" Apr 23 17:55:48.207898 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.207795 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"metrics-server-client-certs\"" Apr 23 17:55:48.207898 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.207803 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"metrics-server-tls\"" Apr 23 17:55:48.332809 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.332772 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/176de401-a14e-4368-9fe1-d71e0e83bd52-metrics-server-audit-profiles\") pod \"metrics-server-68b7b96464-lkmkn\" (UID: \"176de401-a14e-4368-9fe1-d71e0e83bd52\") " pod="openshift-monitoring/metrics-server-68b7b96464-lkmkn" Apr 23 17:55:48.333002 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.332843 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/176de401-a14e-4368-9fe1-d71e0e83bd52-client-ca-bundle\") pod \"metrics-server-68b7b96464-lkmkn\" (UID: \"176de401-a14e-4368-9fe1-d71e0e83bd52\") " pod="openshift-monitoring/metrics-server-68b7b96464-lkmkn" Apr 23 17:55:48.333002 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.332929 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/176de401-a14e-4368-9fe1-d71e0e83bd52-secret-metrics-server-tls\") pod \"metrics-server-68b7b96464-lkmkn\" (UID: \"176de401-a14e-4368-9fe1-d71e0e83bd52\") " pod="openshift-monitoring/metrics-server-68b7b96464-lkmkn" Apr 23 17:55:48.333002 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.332978 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2c6f\" (UniqueName: \"kubernetes.io/projected/176de401-a14e-4368-9fe1-d71e0e83bd52-kube-api-access-d2c6f\") pod \"metrics-server-68b7b96464-lkmkn\" (UID: \"176de401-a14e-4368-9fe1-d71e0e83bd52\") " pod="openshift-monitoring/metrics-server-68b7b96464-lkmkn" Apr 23 17:55:48.333713 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.333033 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-client-certs\" (UniqueName: 
\"kubernetes.io/secret/176de401-a14e-4368-9fe1-d71e0e83bd52-secret-metrics-server-client-certs\") pod \"metrics-server-68b7b96464-lkmkn\" (UID: \"176de401-a14e-4368-9fe1-d71e0e83bd52\") " pod="openshift-monitoring/metrics-server-68b7b96464-lkmkn" Apr 23 17:55:48.333713 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.333071 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/176de401-a14e-4368-9fe1-d71e0e83bd52-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-68b7b96464-lkmkn\" (UID: \"176de401-a14e-4368-9fe1-d71e0e83bd52\") " pod="openshift-monitoring/metrics-server-68b7b96464-lkmkn" Apr 23 17:55:48.333713 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.333114 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/176de401-a14e-4368-9fe1-d71e0e83bd52-audit-log\") pod \"metrics-server-68b7b96464-lkmkn\" (UID: \"176de401-a14e-4368-9fe1-d71e0e83bd52\") " pod="openshift-monitoring/metrics-server-68b7b96464-lkmkn" Apr 23 17:55:48.337672 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.337635 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-54dc5c8cc9-q4z6v" event={"ID":"502c5299-00b3-4f96-81c6-b7d544cfc79a","Type":"ContainerStarted","Data":"1b231f9082ca7b720f60a3f0426d13f102f65873d3ae1aaee0dac9d55b91dc2b"} Apr 23 17:55:48.337672 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.337671 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-54dc5c8cc9-q4z6v" event={"ID":"502c5299-00b3-4f96-81c6-b7d544cfc79a","Type":"ContainerStarted","Data":"aca35591eace4fa48a8ce86bf35f8d1cd0ef6cc7ec75b6b8da5f782b6f17c8fc"} Apr 23 17:55:48.337891 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.337811 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-54dc5c8cc9-q4z6v" Apr 23 17:55:48.338986 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.338936 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9vf42" event={"ID":"be6942ff-e805-4834-ad15-79b60bde1296","Type":"ContainerStarted","Data":"1f65bbb9970882e1412083255f7ba2bbe6a58164150fcec42800ef0cb630ebd3"} Apr 23 17:55:48.340447 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.340424 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-jf562" event={"ID":"160b686c-d212-4373-af80-30f03f159c0b","Type":"ContainerStarted","Data":"2a7f9da1b11c09c6a7ce2849ce94d2b76b82d649f67073b3fcfd1037595c4460"} Apr 23 17:55:48.340543 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.340454 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-jf562" event={"ID":"160b686c-d212-4373-af80-30f03f159c0b","Type":"ContainerStarted","Data":"a474f630ee47f54041b19f753ede4db01be1f8650d69066ae83c032c309ad3a6"} Apr 23 17:55:48.341561 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.341527 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-wvtqh" event={"ID":"8d651935-ade8-4ad7-91b8-d50bd718e6d8","Type":"ContainerStarted","Data":"50e2062bf2b2b3da2267ac7e28316e54c7c0eaab0a3e7944abf0b15de3e7188f"} Apr 23 17:55:48.363891 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.362892 2565 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-54dc5c8cc9-q4z6v" podStartSLOduration=13.362874827 podStartE2EDuration="13.362874827s" podCreationTimestamp="2026-04-23 17:55:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:55:48.362094347 +0000 UTC m=+212.149833603" watchObservedRunningTime="2026-04-23 17:55:48.362874827 +0000 UTC m=+212.150614089" Apr 23 17:55:48.412457 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.412238 2565 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-7dccd58f55-hwwbn"] Apr 23 17:55:48.424521 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.424421 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-7dccd58f55-hwwbn"] Apr 23 17:55:48.424650 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.424546 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-7dccd58f55-hwwbn" Apr 23 17:55:48.427092 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.427059 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"monitoring-plugin-cert\"" Apr 23 17:55:48.427678 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.427487 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"default-dockercfg-29n6g\"" Apr 23 17:55:48.433540 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.433513 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/176de401-a14e-4368-9fe1-d71e0e83bd52-metrics-server-audit-profiles\") pod \"metrics-server-68b7b96464-lkmkn\" (UID: \"176de401-a14e-4368-9fe1-d71e0e83bd52\") " pod="openshift-monitoring/metrics-server-68b7b96464-lkmkn" Apr 23 17:55:48.433637 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.433623 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/176de401-a14e-4368-9fe1-d71e0e83bd52-client-ca-bundle\") pod \"metrics-server-68b7b96464-lkmkn\" (UID: \"176de401-a14e-4368-9fe1-d71e0e83bd52\") " pod="openshift-monitoring/metrics-server-68b7b96464-lkmkn" Apr 23 17:55:48.433696 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.433686 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/176de401-a14e-4368-9fe1-d71e0e83bd52-secret-metrics-server-tls\") pod \"metrics-server-68b7b96464-lkmkn\" (UID: \"176de401-a14e-4368-9fe1-d71e0e83bd52\") " pod="openshift-monitoring/metrics-server-68b7b96464-lkmkn" Apr 23 17:55:48.433753 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.433723 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d2c6f\" (UniqueName: \"kubernetes.io/projected/176de401-a14e-4368-9fe1-d71e0e83bd52-kube-api-access-d2c6f\") pod \"metrics-server-68b7b96464-lkmkn\" (UID: \"176de401-a14e-4368-9fe1-d71e0e83bd52\") " pod="openshift-monitoring/metrics-server-68b7b96464-lkmkn" Apr 23 17:55:48.433805 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.433754 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-metrics-server-client-certs\" (UniqueName: 
\"kubernetes.io/secret/176de401-a14e-4368-9fe1-d71e0e83bd52-secret-metrics-server-client-certs\") pod \"metrics-server-68b7b96464-lkmkn\" (UID: \"176de401-a14e-4368-9fe1-d71e0e83bd52\") " pod="openshift-monitoring/metrics-server-68b7b96464-lkmkn" Apr 23 17:55:48.433805 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.433789 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/176de401-a14e-4368-9fe1-d71e0e83bd52-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-68b7b96464-lkmkn\" (UID: \"176de401-a14e-4368-9fe1-d71e0e83bd52\") " pod="openshift-monitoring/metrics-server-68b7b96464-lkmkn" Apr 23 17:55:48.433900 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.433890 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/176de401-a14e-4368-9fe1-d71e0e83bd52-audit-log\") pod \"metrics-server-68b7b96464-lkmkn\" (UID: \"176de401-a14e-4368-9fe1-d71e0e83bd52\") " pod="openshift-monitoring/metrics-server-68b7b96464-lkmkn" Apr 23 17:55:48.434513 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.434478 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/176de401-a14e-4368-9fe1-d71e0e83bd52-metrics-server-audit-profiles\") pod \"metrics-server-68b7b96464-lkmkn\" (UID: \"176de401-a14e-4368-9fe1-d71e0e83bd52\") " pod="openshift-monitoring/metrics-server-68b7b96464-lkmkn" Apr 23 17:55:48.435682 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.435089 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/176de401-a14e-4368-9fe1-d71e0e83bd52-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-68b7b96464-lkmkn\" (UID: \"176de401-a14e-4368-9fe1-d71e0e83bd52\") " pod="openshift-monitoring/metrics-server-68b7b96464-lkmkn" Apr 23 17:55:48.435769 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.435756 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/176de401-a14e-4368-9fe1-d71e0e83bd52-audit-log\") pod \"metrics-server-68b7b96464-lkmkn\" (UID: \"176de401-a14e-4368-9fe1-d71e0e83bd52\") " pod="openshift-monitoring/metrics-server-68b7b96464-lkmkn" Apr 23 17:55:48.438772 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.438701 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-client-certs\" (UniqueName: \"kubernetes.io/secret/176de401-a14e-4368-9fe1-d71e0e83bd52-secret-metrics-server-client-certs\") pod \"metrics-server-68b7b96464-lkmkn\" (UID: \"176de401-a14e-4368-9fe1-d71e0e83bd52\") " pod="openshift-monitoring/metrics-server-68b7b96464-lkmkn" Apr 23 17:55:48.438873 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.438801 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/176de401-a14e-4368-9fe1-d71e0e83bd52-secret-metrics-server-tls\") pod \"metrics-server-68b7b96464-lkmkn\" (UID: \"176de401-a14e-4368-9fe1-d71e0e83bd52\") " pod="openshift-monitoring/metrics-server-68b7b96464-lkmkn" Apr 23 17:55:48.439533 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.439278 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/176de401-a14e-4368-9fe1-d71e0e83bd52-client-ca-bundle\") pod \"metrics-server-68b7b96464-lkmkn\" (UID: \"176de401-a14e-4368-9fe1-d71e0e83bd52\") " pod="openshift-monitoring/metrics-server-68b7b96464-lkmkn" Apr 23 17:55:48.446214 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.446175 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2c6f\" (UniqueName: \"kubernetes.io/projected/176de401-a14e-4368-9fe1-d71e0e83bd52-kube-api-access-d2c6f\") pod \"metrics-server-68b7b96464-lkmkn\" (UID: \"176de401-a14e-4368-9fe1-d71e0e83bd52\") " pod="openshift-monitoring/metrics-server-68b7b96464-lkmkn" Apr 23 17:55:48.514249 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.514218 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-68b7b96464-lkmkn" Apr 23 17:55:48.534869 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.534777 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/bd43d44b-8bd6-4804-97cd-46ab34ff36c0-monitoring-plugin-cert\") pod \"monitoring-plugin-7dccd58f55-hwwbn\" (UID: \"bd43d44b-8bd6-4804-97cd-46ab34ff36c0\") " pod="openshift-monitoring/monitoring-plugin-7dccd58f55-hwwbn" Apr 23 17:55:48.635707 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.635202 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/bd43d44b-8bd6-4804-97cd-46ab34ff36c0-monitoring-plugin-cert\") pod \"monitoring-plugin-7dccd58f55-hwwbn\" (UID: \"bd43d44b-8bd6-4804-97cd-46ab34ff36c0\") " pod="openshift-monitoring/monitoring-plugin-7dccd58f55-hwwbn" Apr 23 17:55:48.638757 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.638710 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/bd43d44b-8bd6-4804-97cd-46ab34ff36c0-monitoring-plugin-cert\") pod \"monitoring-plugin-7dccd58f55-hwwbn\" (UID: \"bd43d44b-8bd6-4804-97cd-46ab34ff36c0\") " pod="openshift-monitoring/monitoring-plugin-7dccd58f55-hwwbn" Apr 23 17:55:48.659394 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.659368 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-68b7b96464-lkmkn"] Apr 23 17:55:48.736090 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.736017 2565 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-7dccd58f55-hwwbn" Apr 23 17:55:48.790093 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:55:48.790056 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod176de401_a14e_4368_9fe1_d71e0e83bd52.slice/crio-05b77eac5a32ae943a9b40b821aa67cd013648424bd590225955d04aaa5272ad WatchSource:0}: Error finding container 05b77eac5a32ae943a9b40b821aa67cd013648424bd590225955d04aaa5272ad: Status 404 returned error can't find the container with id 05b77eac5a32ae943a9b40b821aa67cd013648424bd590225955d04aaa5272ad Apr 23 17:55:48.851218 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.850739 2565 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-85f5c4c5db-p8qfk"] Apr 23 17:55:48.871321 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.870722 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-85f5c4c5db-p8qfk"] Apr 23 17:55:48.871321 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.870887 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-85f5c4c5db-p8qfk" Apr 23 17:55:48.874567 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.874142 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"telemeter-client-serving-certs-ca-bundle\"" Apr 23 17:55:48.874567 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.874365 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"federate-client-certs\"" Apr 23 17:55:48.874567 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.874414 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"telemeter-client-tls\"" Apr 23 17:55:48.874567 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.874365 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"telemeter-client\"" Apr 23 17:55:48.875157 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.874941 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"telemeter-client-kube-rbac-proxy-config\"" Apr 23 17:55:48.875157 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.875009 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"telemeter-client-dockercfg-pm9xw\"" Apr 23 17:55:48.888697 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.888397 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"telemeter-trusted-ca-bundle-8i12ta5c71j38\"" Apr 23 17:55:48.943996 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:48.943950 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-7dccd58f55-hwwbn"] Apr 23 17:55:49.039645 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:49.039614 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/967f5182-e55d-45dc-b8fe-849dfa1d3b02-telemeter-trusted-ca-bundle\") pod \"telemeter-client-85f5c4c5db-p8qfk\" (UID: \"967f5182-e55d-45dc-b8fe-849dfa1d3b02\") " pod="openshift-monitoring/telemeter-client-85f5c4c5db-p8qfk" Apr 23 17:55:49.039807 ip-10-0-131-177 kubenswrapper[2565]: I0423 
17:55:49.039671 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/967f5182-e55d-45dc-b8fe-849dfa1d3b02-telemeter-client-tls\") pod \"telemeter-client-85f5c4c5db-p8qfk\" (UID: \"967f5182-e55d-45dc-b8fe-849dfa1d3b02\") " pod="openshift-monitoring/telemeter-client-85f5c4c5db-p8qfk" Apr 23 17:55:49.039807 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:49.039747 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/967f5182-e55d-45dc-b8fe-849dfa1d3b02-secret-telemeter-client\") pod \"telemeter-client-85f5c4c5db-p8qfk\" (UID: \"967f5182-e55d-45dc-b8fe-849dfa1d3b02\") " pod="openshift-monitoring/telemeter-client-85f5c4c5db-p8qfk" Apr 23 17:55:49.039807 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:49.039775 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/967f5182-e55d-45dc-b8fe-849dfa1d3b02-federate-client-tls\") pod \"telemeter-client-85f5c4c5db-p8qfk\" (UID: \"967f5182-e55d-45dc-b8fe-849dfa1d3b02\") " pod="openshift-monitoring/telemeter-client-85f5c4c5db-p8qfk" Apr 23 17:55:49.039915 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:49.039815 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfbd8\" (UniqueName: \"kubernetes.io/projected/967f5182-e55d-45dc-b8fe-849dfa1d3b02-kube-api-access-rfbd8\") pod \"telemeter-client-85f5c4c5db-p8qfk\" (UID: \"967f5182-e55d-45dc-b8fe-849dfa1d3b02\") " pod="openshift-monitoring/telemeter-client-85f5c4c5db-p8qfk" Apr 23 17:55:49.039915 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:49.039854 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/967f5182-e55d-45dc-b8fe-849dfa1d3b02-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-85f5c4c5db-p8qfk\" (UID: \"967f5182-e55d-45dc-b8fe-849dfa1d3b02\") " pod="openshift-monitoring/telemeter-client-85f5c4c5db-p8qfk" Apr 23 17:55:49.039915 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:49.039886 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/967f5182-e55d-45dc-b8fe-849dfa1d3b02-metrics-client-ca\") pod \"telemeter-client-85f5c4c5db-p8qfk\" (UID: \"967f5182-e55d-45dc-b8fe-849dfa1d3b02\") " pod="openshift-monitoring/telemeter-client-85f5c4c5db-p8qfk" Apr 23 17:55:49.040022 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:49.039917 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/967f5182-e55d-45dc-b8fe-849dfa1d3b02-serving-certs-ca-bundle\") pod \"telemeter-client-85f5c4c5db-p8qfk\" (UID: \"967f5182-e55d-45dc-b8fe-849dfa1d3b02\") " pod="openshift-monitoring/telemeter-client-85f5c4c5db-p8qfk" Apr 23 17:55:49.092281 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:55:49.092249 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd43d44b_8bd6_4804_97cd_46ab34ff36c0.slice/crio-5f03a094b917c3299029dda1ca10553e3c29af029e2cc22084e807afed062d2a WatchSource:0}: Error 
finding container 5f03a094b917c3299029dda1ca10553e3c29af029e2cc22084e807afed062d2a: Status 404 returned error can't find the container with id 5f03a094b917c3299029dda1ca10553e3c29af029e2cc22084e807afed062d2a Apr 23 17:55:49.141316 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:49.141286 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/967f5182-e55d-45dc-b8fe-849dfa1d3b02-serving-certs-ca-bundle\") pod \"telemeter-client-85f5c4c5db-p8qfk\" (UID: \"967f5182-e55d-45dc-b8fe-849dfa1d3b02\") " pod="openshift-monitoring/telemeter-client-85f5c4c5db-p8qfk" Apr 23 17:55:49.141452 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:49.141341 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/967f5182-e55d-45dc-b8fe-849dfa1d3b02-telemeter-trusted-ca-bundle\") pod \"telemeter-client-85f5c4c5db-p8qfk\" (UID: \"967f5182-e55d-45dc-b8fe-849dfa1d3b02\") " pod="openshift-monitoring/telemeter-client-85f5c4c5db-p8qfk" Apr 23 17:55:49.141452 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:49.141379 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/967f5182-e55d-45dc-b8fe-849dfa1d3b02-telemeter-client-tls\") pod \"telemeter-client-85f5c4c5db-p8qfk\" (UID: \"967f5182-e55d-45dc-b8fe-849dfa1d3b02\") " pod="openshift-monitoring/telemeter-client-85f5c4c5db-p8qfk" Apr 23 17:55:49.141452 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:49.141400 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/967f5182-e55d-45dc-b8fe-849dfa1d3b02-secret-telemeter-client\") pod \"telemeter-client-85f5c4c5db-p8qfk\" (UID: \"967f5182-e55d-45dc-b8fe-849dfa1d3b02\") " pod="openshift-monitoring/telemeter-client-85f5c4c5db-p8qfk" Apr 23 17:55:49.141452 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:49.141415 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/967f5182-e55d-45dc-b8fe-849dfa1d3b02-federate-client-tls\") pod \"telemeter-client-85f5c4c5db-p8qfk\" (UID: \"967f5182-e55d-45dc-b8fe-849dfa1d3b02\") " pod="openshift-monitoring/telemeter-client-85f5c4c5db-p8qfk" Apr 23 17:55:49.141452 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:49.141443 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rfbd8\" (UniqueName: \"kubernetes.io/projected/967f5182-e55d-45dc-b8fe-849dfa1d3b02-kube-api-access-rfbd8\") pod \"telemeter-client-85f5c4c5db-p8qfk\" (UID: \"967f5182-e55d-45dc-b8fe-849dfa1d3b02\") " pod="openshift-monitoring/telemeter-client-85f5c4c5db-p8qfk" Apr 23 17:55:49.141702 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:49.141476 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/967f5182-e55d-45dc-b8fe-849dfa1d3b02-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-85f5c4c5db-p8qfk\" (UID: \"967f5182-e55d-45dc-b8fe-849dfa1d3b02\") " pod="openshift-monitoring/telemeter-client-85f5c4c5db-p8qfk" Apr 23 17:55:49.141702 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:49.141513 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" 
(UniqueName: \"kubernetes.io/configmap/967f5182-e55d-45dc-b8fe-849dfa1d3b02-metrics-client-ca\") pod \"telemeter-client-85f5c4c5db-p8qfk\" (UID: \"967f5182-e55d-45dc-b8fe-849dfa1d3b02\") " pod="openshift-monitoring/telemeter-client-85f5c4c5db-p8qfk" Apr 23 17:55:49.142396 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:49.142216 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/967f5182-e55d-45dc-b8fe-849dfa1d3b02-serving-certs-ca-bundle\") pod \"telemeter-client-85f5c4c5db-p8qfk\" (UID: \"967f5182-e55d-45dc-b8fe-849dfa1d3b02\") " pod="openshift-monitoring/telemeter-client-85f5c4c5db-p8qfk" Apr 23 17:55:49.142396 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:49.142282 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/967f5182-e55d-45dc-b8fe-849dfa1d3b02-metrics-client-ca\") pod \"telemeter-client-85f5c4c5db-p8qfk\" (UID: \"967f5182-e55d-45dc-b8fe-849dfa1d3b02\") " pod="openshift-monitoring/telemeter-client-85f5c4c5db-p8qfk" Apr 23 17:55:49.142396 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:49.142361 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/967f5182-e55d-45dc-b8fe-849dfa1d3b02-telemeter-trusted-ca-bundle\") pod \"telemeter-client-85f5c4c5db-p8qfk\" (UID: \"967f5182-e55d-45dc-b8fe-849dfa1d3b02\") " pod="openshift-monitoring/telemeter-client-85f5c4c5db-p8qfk" Apr 23 17:55:49.144970 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:49.144897 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/967f5182-e55d-45dc-b8fe-849dfa1d3b02-secret-telemeter-client\") pod \"telemeter-client-85f5c4c5db-p8qfk\" (UID: \"967f5182-e55d-45dc-b8fe-849dfa1d3b02\") " pod="openshift-monitoring/telemeter-client-85f5c4c5db-p8qfk" Apr 23 17:55:49.145086 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:49.144994 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/967f5182-e55d-45dc-b8fe-849dfa1d3b02-federate-client-tls\") pod \"telemeter-client-85f5c4c5db-p8qfk\" (UID: \"967f5182-e55d-45dc-b8fe-849dfa1d3b02\") " pod="openshift-monitoring/telemeter-client-85f5c4c5db-p8qfk" Apr 23 17:55:49.145154 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:49.145117 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/967f5182-e55d-45dc-b8fe-849dfa1d3b02-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-85f5c4c5db-p8qfk\" (UID: \"967f5182-e55d-45dc-b8fe-849dfa1d3b02\") " pod="openshift-monitoring/telemeter-client-85f5c4c5db-p8qfk" Apr 23 17:55:49.145201 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:49.145146 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/967f5182-e55d-45dc-b8fe-849dfa1d3b02-telemeter-client-tls\") pod \"telemeter-client-85f5c4c5db-p8qfk\" (UID: \"967f5182-e55d-45dc-b8fe-849dfa1d3b02\") " pod="openshift-monitoring/telemeter-client-85f5c4c5db-p8qfk" Apr 23 17:55:49.155831 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:49.155811 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfbd8\" (UniqueName: 
\"kubernetes.io/projected/967f5182-e55d-45dc-b8fe-849dfa1d3b02-kube-api-access-rfbd8\") pod \"telemeter-client-85f5c4c5db-p8qfk\" (UID: \"967f5182-e55d-45dc-b8fe-849dfa1d3b02\") " pod="openshift-monitoring/telemeter-client-85f5c4c5db-p8qfk" Apr 23 17:55:49.191852 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:49.191829 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-85f5c4c5db-p8qfk" Apr 23 17:55:49.346882 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:49.346803 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-7dccd58f55-hwwbn" event={"ID":"bd43d44b-8bd6-4804-97cd-46ab34ff36c0","Type":"ContainerStarted","Data":"5f03a094b917c3299029dda1ca10553e3c29af029e2cc22084e807afed062d2a"} Apr 23 17:55:49.348051 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:49.348022 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-68b7b96464-lkmkn" event={"ID":"176de401-a14e-4368-9fe1-d71e0e83bd52","Type":"ContainerStarted","Data":"05b77eac5a32ae943a9b40b821aa67cd013648424bd590225955d04aaa5272ad"} Apr 23 17:55:49.993196 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:49.993141 2565 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-67d59486d7-29n9s"] Apr 23 17:55:50.025370 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.025342 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-67d59486d7-29n9s"] Apr 23 17:55:50.025593 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.025471 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-67d59486d7-29n9s" Apr 23 17:55:50.027133 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.026568 2565 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Apr 23 17:55:50.028251 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.028229 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Apr 23 17:55:50.029129 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.028906 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-5lt52\"" Apr 23 17:55:50.029129 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.029045 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Apr 23 17:55:50.030624 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.030603 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Apr 23 17:55:50.031258 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.031212 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Apr 23 17:55:50.031355 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.031283 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Apr 23 17:55:50.032235 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.032212 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Apr 23 17:55:50.035343 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.035270 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Apr 23 17:55:50.039581 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.039560 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Apr 23 17:55:50.044130 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.044103 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.048021 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.048001 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-tls-assets-0\"" Apr 23 17:55:50.048637 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.048387 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s\"" Apr 23 17:55:50.052318 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.051921 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-dockercfg-7j54z\"" Apr 23 17:55:50.052318 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.052191 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-tls\"" Apr 23 17:55:50.052457 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.052352 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-grpc-tls-6j8hbpofmt4ht\"" Apr 23 17:55:50.052457 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.052362 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-web-config\"" Apr 23 17:55:50.052561 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.052501 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"kube-rbac-proxy\"" Apr 23 17:55:50.052709 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.052585 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"serving-certs-ca-bundle\"" Apr 23 17:55:50.052709 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.052596 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-kube-rbac-proxy-web\"" Apr 23 17:55:50.052709 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.052694 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-thanos-prometheus-http-client-file\"" Apr 23 17:55:50.052863 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.052763 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-thanos-sidecar-tls\"" Apr 23 17:55:50.052917 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.052881 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"metrics-client-certs\"" Apr 23 17:55:50.054298 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.054277 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Apr 23 17:55:50.056197 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.056176 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-rulefiles-0\"" Apr 23 17:55:50.064842 ip-10-0-131-177 
kubenswrapper[2565]: I0423 17:55:50.064822 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"prometheus-trusted-ca-bundle\"" Apr 23 17:55:50.148997 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.148969 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/82055bc7-4a95-4051-a0c8-5f0d473f42ab-console-oauth-config\") pod \"console-67d59486d7-29n9s\" (UID: \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\") " pod="openshift-console/console-67d59486d7-29n9s" Apr 23 17:55:50.149135 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.149005 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/82055bc7-4a95-4051-a0c8-5f0d473f42ab-console-config\") pod \"console-67d59486d7-29n9s\" (UID: \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\") " pod="openshift-console/console-67d59486d7-29n9s" Apr 23 17:55:50.149135 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.149053 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.149135 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.149090 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/82055bc7-4a95-4051-a0c8-5f0d473f42ab-oauth-serving-cert\") pod \"console-67d59486d7-29n9s\" (UID: \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\") " pod="openshift-console/console-67d59486d7-29n9s" Apr 23 17:55:50.149135 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.149107 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.149135 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.149123 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54dwt\" (UniqueName: \"kubernetes.io/projected/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-kube-api-access-54dwt\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.149306 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.149138 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/82055bc7-4a95-4051-a0c8-5f0d473f42ab-console-serving-cert\") pod \"console-67d59486d7-29n9s\" (UID: \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\") " pod="openshift-console/console-67d59486d7-29n9s" Apr 23 17:55:50.149306 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.149157 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-configmap-serving-certs-ca-bundle\") pod 
\"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.149306 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.149198 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-config\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.149306 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.149231 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/82055bc7-4a95-4051-a0c8-5f0d473f42ab-service-ca\") pod \"console-67d59486d7-29n9s\" (UID: \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\") " pod="openshift-console/console-67d59486d7-29n9s" Apr 23 17:55:50.149306 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.149259 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.149306 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.149277 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.149497 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.149325 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.149497 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.149357 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.149497 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.149383 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.149497 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.149427 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: 
\"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.149497 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.149451 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.149497 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.149469 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.149681 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.149498 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82055bc7-4a95-4051-a0c8-5f0d473f42ab-trusted-ca-bundle\") pod \"console-67d59486d7-29n9s\" (UID: \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\") " pod="openshift-console/console-67d59486d7-29n9s" Apr 23 17:55:50.149681 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.149519 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-web-config\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.149681 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.149535 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6wth\" (UniqueName: \"kubernetes.io/projected/82055bc7-4a95-4051-a0c8-5f0d473f42ab-kube-api-access-r6wth\") pod \"console-67d59486d7-29n9s\" (UID: \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\") " pod="openshift-console/console-67d59486d7-29n9s" Apr 23 17:55:50.149681 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.149549 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-config-out\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.149681 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.149566 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.149681 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.149586 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.149681 
ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.149599 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.250123 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.250061 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.250123 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.250095 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.250123 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.250114 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.250357 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.250142 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.250357 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.250167 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.250357 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.250192 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.250357 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.250229 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82055bc7-4a95-4051-a0c8-5f0d473f42ab-trusted-ca-bundle\") pod \"console-67d59486d7-29n9s\" (UID: \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\") " pod="openshift-console/console-67d59486d7-29n9s" Apr 23 17:55:50.250357 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.250254 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" 
(UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-web-config\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.250357 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.250278 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r6wth\" (UniqueName: \"kubernetes.io/projected/82055bc7-4a95-4051-a0c8-5f0d473f42ab-kube-api-access-r6wth\") pod \"console-67d59486d7-29n9s\" (UID: \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\") " pod="openshift-console/console-67d59486d7-29n9s" Apr 23 17:55:50.251116 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.250938 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.251116 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.251028 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-config-out\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.251116 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.251059 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.251116 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.251089 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.251116 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.251115 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.251430 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.251135 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.251430 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.251141 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/82055bc7-4a95-4051-a0c8-5f0d473f42ab-console-oauth-config\") pod \"console-67d59486d7-29n9s\" (UID: \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\") " pod="openshift-console/console-67d59486d7-29n9s" Apr 23 17:55:50.251430 ip-10-0-131-177 
kubenswrapper[2565]: I0423 17:55:50.251177 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82055bc7-4a95-4051-a0c8-5f0d473f42ab-trusted-ca-bundle\") pod \"console-67d59486d7-29n9s\" (UID: \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\") " pod="openshift-console/console-67d59486d7-29n9s" Apr 23 17:55:50.251430 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.251189 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/82055bc7-4a95-4051-a0c8-5f0d473f42ab-console-config\") pod \"console-67d59486d7-29n9s\" (UID: \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\") " pod="openshift-console/console-67d59486d7-29n9s" Apr 23 17:55:50.251430 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.251240 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.251430 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.251280 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/82055bc7-4a95-4051-a0c8-5f0d473f42ab-oauth-serving-cert\") pod \"console-67d59486d7-29n9s\" (UID: \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\") " pod="openshift-console/console-67d59486d7-29n9s" Apr 23 17:55:50.251430 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.251305 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.251430 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.251328 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-54dwt\" (UniqueName: \"kubernetes.io/projected/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-kube-api-access-54dwt\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.251430 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.251356 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/82055bc7-4a95-4051-a0c8-5f0d473f42ab-console-serving-cert\") pod \"console-67d59486d7-29n9s\" (UID: \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\") " pod="openshift-console/console-67d59486d7-29n9s" Apr 23 17:55:50.251430 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.251389 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.251430 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.251419 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-config\") pod \"prometheus-k8s-0\" 
(UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.251979 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.251466 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/82055bc7-4a95-4051-a0c8-5f0d473f42ab-service-ca\") pod \"console-67d59486d7-29n9s\" (UID: \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\") " pod="openshift-console/console-67d59486d7-29n9s" Apr 23 17:55:50.251979 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.251527 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.251979 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.251559 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.251979 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.251852 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/82055bc7-4a95-4051-a0c8-5f0d473f42ab-console-config\") pod \"console-67d59486d7-29n9s\" (UID: \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\") " pod="openshift-console/console-67d59486d7-29n9s" Apr 23 17:55:50.253772 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.253455 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.253772 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.253739 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-web-config\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.254109 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.254023 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/82055bc7-4a95-4051-a0c8-5f0d473f42ab-console-oauth-config\") pod \"console-67d59486d7-29n9s\" (UID: \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\") " pod="openshift-console/console-67d59486d7-29n9s" Apr 23 17:55:50.254249 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.254224 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.254388 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.254367 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/82055bc7-4a95-4051-a0c8-5f0d473f42ab-oauth-serving-cert\") pod \"console-67d59486d7-29n9s\" (UID: \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\") " pod="openshift-console/console-67d59486d7-29n9s" Apr 23 17:55:50.254591 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.254573 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.254591 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.254373 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/82055bc7-4a95-4051-a0c8-5f0d473f42ab-console-serving-cert\") pod \"console-67d59486d7-29n9s\" (UID: \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\") " pod="openshift-console/console-67d59486d7-29n9s" Apr 23 17:55:50.254935 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.254914 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.255414 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.255104 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.255660 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.255554 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.255660 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.255572 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/82055bc7-4a95-4051-a0c8-5f0d473f42ab-service-ca\") pod \"console-67d59486d7-29n9s\" (UID: \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\") " pod="openshift-console/console-67d59486d7-29n9s" Apr 23 17:55:50.255660 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.255594 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.255971 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.255898 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-config\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.257072 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.257052 2565 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-config-out\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.257235 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.257193 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.257555 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.257535 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.258026 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.258009 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.258206 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.258188 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.259048 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.259034 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6wth\" (UniqueName: \"kubernetes.io/projected/82055bc7-4a95-4051-a0c8-5f0d473f42ab-kube-api-access-r6wth\") pod \"console-67d59486d7-29n9s\" (UID: \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\") " pod="openshift-console/console-67d59486d7-29n9s" Apr 23 17:55:50.263556 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.263537 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-54dwt\" (UniqueName: \"kubernetes.io/projected/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-kube-api-access-54dwt\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.264608 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.264588 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.337882 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.337855 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-67d59486d7-29n9s" Apr 23 17:55:50.359998 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.359948 2565 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:55:50.509365 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:50.509265 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-85f5c4c5db-p8qfk"] Apr 23 17:55:50.797865 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:55:50.797798 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod967f5182_e55d_45dc_b8fe_849dfa1d3b02.slice/crio-dcf3086f6aadbd13cb1b804aa7dbcd9f105d9c26f563ae1e18444e2cf7509d83 WatchSource:0}: Error finding container dcf3086f6aadbd13cb1b804aa7dbcd9f105d9c26f563ae1e18444e2cf7509d83: Status 404 returned error can't find the container with id dcf3086f6aadbd13cb1b804aa7dbcd9f105d9c26f563ae1e18444e2cf7509d83 Apr 23 17:55:51.334088 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:51.333864 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-67d59486d7-29n9s"] Apr 23 17:55:51.357077 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:51.357050 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Apr 23 17:55:51.359483 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:51.359445 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-jf562" event={"ID":"160b686c-d212-4373-af80-30f03f159c0b","Type":"ContainerStarted","Data":"15c8bfa7933df9cb433db64381d486228af70223522ae1e9c7492d1680d7c26b"} Apr 23 17:55:51.366529 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:51.366498 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-wvtqh" event={"ID":"8d651935-ade8-4ad7-91b8-d50bd718e6d8","Type":"ContainerStarted","Data":"8620cebb01eeb82b02d89c2748d426e620780ccd9d3cda4b0d83816ea51ded40"} Apr 23 17:55:51.367607 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:55:51.367571 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda59b3796_fb8a_4c8b_adc5_0a8ee932193f.slice/crio-8acb427372679bf1cca53901862576ad62c9e5129101823ca79814a4f43ebcc5 WatchSource:0}: Error finding container 8acb427372679bf1cca53901862576ad62c9e5129101823ca79814a4f43ebcc5: Status 404 returned error can't find the container with id 8acb427372679bf1cca53901862576ad62c9e5129101823ca79814a4f43ebcc5 Apr 23 17:55:51.371813 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:51.371613 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-7dccd58f55-hwwbn" event={"ID":"bd43d44b-8bd6-4804-97cd-46ab34ff36c0","Type":"ContainerStarted","Data":"692e8e38727b5034324c6134f33804bbb1330b514f0870343e06b1ee82234431"} Apr 23 17:55:51.372305 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:51.372290 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-monitoring/monitoring-plugin-7dccd58f55-hwwbn" Apr 23 17:55:51.375097 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:51.375075 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-68b7b96464-lkmkn" event={"ID":"176de401-a14e-4368-9fe1-d71e0e83bd52","Type":"ContainerStarted","Data":"3591fbe85703922be4775444f07a97e05d09ab1161a1c9b4f222cdb56f20997f"} Apr 23 17:55:51.378286 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:51.378231 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67d59486d7-29n9s" 
event={"ID":"82055bc7-4a95-4051-a0c8-5f0d473f42ab","Type":"ContainerStarted","Data":"001f81713d91a50cc628f845ed3c48ebb5bbfa8673cf6875abfc9c1625388bfc"} Apr 23 17:55:51.379207 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:51.379187 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-7dccd58f55-hwwbn" Apr 23 17:55:51.379598 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:51.379557 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-85f5c4c5db-p8qfk" event={"ID":"967f5182-e55d-45dc-b8fe-849dfa1d3b02","Type":"ContainerStarted","Data":"dcf3086f6aadbd13cb1b804aa7dbcd9f105d9c26f563ae1e18444e2cf7509d83"} Apr 23 17:55:51.386662 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:51.386439 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-wvtqh" podStartSLOduration=1.0900126 podStartE2EDuration="4.386422424s" podCreationTimestamp="2026-04-23 17:55:47 +0000 UTC" firstStartedPulling="2026-04-23 17:55:47.87624742 +0000 UTC m=+211.663986660" lastFinishedPulling="2026-04-23 17:55:51.172657235 +0000 UTC m=+214.960396484" observedRunningTime="2026-04-23 17:55:51.384825727 +0000 UTC m=+215.172564991" watchObservedRunningTime="2026-04-23 17:55:51.386422424 +0000 UTC m=+215.174161685" Apr 23 17:55:51.405577 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:51.405499 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-68b7b96464-lkmkn" podStartSLOduration=1.022823488 podStartE2EDuration="3.405468848s" podCreationTimestamp="2026-04-23 17:55:48 +0000 UTC" firstStartedPulling="2026-04-23 17:55:48.792709447 +0000 UTC m=+212.580448702" lastFinishedPulling="2026-04-23 17:55:51.175354814 +0000 UTC m=+214.963094062" observedRunningTime="2026-04-23 17:55:51.404494304 +0000 UTC m=+215.192233565" watchObservedRunningTime="2026-04-23 17:55:51.405468848 +0000 UTC m=+215.193208111" Apr 23 17:55:51.424230 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:51.424027 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-7dccd58f55-hwwbn" podStartSLOduration=1.350420116 podStartE2EDuration="3.424008931s" podCreationTimestamp="2026-04-23 17:55:48 +0000 UTC" firstStartedPulling="2026-04-23 17:55:49.103902708 +0000 UTC m=+212.891641947" lastFinishedPulling="2026-04-23 17:55:51.177491507 +0000 UTC m=+214.965230762" observedRunningTime="2026-04-23 17:55:51.42253424 +0000 UTC m=+215.210273501" watchObservedRunningTime="2026-04-23 17:55:51.424008931 +0000 UTC m=+215.211748192" Apr 23 17:55:52.385458 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:52.385407 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a59b3796-fb8a-4c8b-adc5-0a8ee932193f","Type":"ContainerStarted","Data":"8acb427372679bf1cca53901862576ad62c9e5129101823ca79814a4f43ebcc5"} Apr 23 17:55:52.389728 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:52.389206 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9vf42" event={"ID":"be6942ff-e805-4834-ad15-79b60bde1296","Type":"ContainerStarted","Data":"444a4312746a22d8d69dcbaf53a923c64244df4cd05cec633b869a8af4129baf"} Apr 23 17:55:52.389728 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:52.389239 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9vf42" 
event={"ID":"be6942ff-e805-4834-ad15-79b60bde1296","Type":"ContainerStarted","Data":"315b5c3319f143b767c008e294818433bbf2dae093d1b404bebedda262fb0add"} Apr 23 17:55:52.389728 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:52.389687 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-9vf42" Apr 23 17:55:52.411780 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:52.411567 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-9vf42" podStartSLOduration=2.200446459 podStartE2EDuration="5.41154871s" podCreationTimestamp="2026-04-23 17:55:47 +0000 UTC" firstStartedPulling="2026-04-23 17:55:47.956974116 +0000 UTC m=+211.744713361" lastFinishedPulling="2026-04-23 17:55:51.168076371 +0000 UTC m=+214.955815612" observedRunningTime="2026-04-23 17:55:52.409224458 +0000 UTC m=+216.196963720" watchObservedRunningTime="2026-04-23 17:55:52.41154871 +0000 UTC m=+216.199287971" Apr 23 17:55:55.398529 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:55.398489 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-jf562" event={"ID":"160b686c-d212-4373-af80-30f03f159c0b","Type":"ContainerStarted","Data":"e2e6b39f80f26b6a863e19383d3e45f17fce073bae1e7feb9d84419106bb45c6"} Apr 23 17:55:55.399943 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:55.399921 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67d59486d7-29n9s" event={"ID":"82055bc7-4a95-4051-a0c8-5f0d473f42ab","Type":"ContainerStarted","Data":"8d017bb276c8ea3943e1af9fb6daba2842d39c3ce05698c91c95608a5a0abe6e"} Apr 23 17:55:55.401734 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:55.401712 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-85f5c4c5db-p8qfk" event={"ID":"967f5182-e55d-45dc-b8fe-849dfa1d3b02","Type":"ContainerStarted","Data":"ed84b6bf85d740d984c428e18fe150426cb9ef386a7d1344ad0996a66dd839ef"} Apr 23 17:55:55.401891 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:55.401739 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-85f5c4c5db-p8qfk" event={"ID":"967f5182-e55d-45dc-b8fe-849dfa1d3b02","Type":"ContainerStarted","Data":"05221514cd423e8144f44848ef1e10af475cdd7c1d11181ea8c75f24af93be5f"} Apr 23 17:55:55.401891 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:55.401749 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-85f5c4c5db-p8qfk" event={"ID":"967f5182-e55d-45dc-b8fe-849dfa1d3b02","Type":"ContainerStarted","Data":"048d0f9a3530e058f0be93f54abac42235c3210080419bcb540cf33a9d0099bc"} Apr 23 17:55:55.403006 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:55.402981 2565 generic.go:358] "Generic (PLEG): container finished" podID="a59b3796-fb8a-4c8b-adc5-0a8ee932193f" containerID="84cf5338eae0365996730e0ddd04596e8cdf1ebd2c0d7d2aa557e7b967b53310" exitCode=0 Apr 23 17:55:55.403093 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:55.403015 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a59b3796-fb8a-4c8b-adc5-0a8ee932193f","Type":"ContainerDied","Data":"84cf5338eae0365996730e0ddd04596e8cdf1ebd2c0d7d2aa557e7b967b53310"} Apr 23 17:55:55.420291 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:55.420244 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-runtime-extractor-jf562" 
podStartSLOduration=1.865935839 podStartE2EDuration="8.420233385s" podCreationTimestamp="2026-04-23 17:55:47 +0000 UTC" firstStartedPulling="2026-04-23 17:55:47.975013075 +0000 UTC m=+211.762752316" lastFinishedPulling="2026-04-23 17:55:54.529310624 +0000 UTC m=+218.317049862" observedRunningTime="2026-04-23 17:55:55.418590958 +0000 UTC m=+219.206330218" watchObservedRunningTime="2026-04-23 17:55:55.420233385 +0000 UTC m=+219.207972623" Apr 23 17:55:55.441539 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:55.441502 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-85f5c4c5db-p8qfk" podStartSLOduration=3.717578075 podStartE2EDuration="7.441491293s" podCreationTimestamp="2026-04-23 17:55:48 +0000 UTC" firstStartedPulling="2026-04-23 17:55:50.799690667 +0000 UTC m=+214.587429905" lastFinishedPulling="2026-04-23 17:55:54.523603883 +0000 UTC m=+218.311343123" observedRunningTime="2026-04-23 17:55:55.440225268 +0000 UTC m=+219.227964545" watchObservedRunningTime="2026-04-23 17:55:55.441491293 +0000 UTC m=+219.229230552" Apr 23 17:55:55.469921 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:55.469885 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-67d59486d7-29n9s" podStartSLOduration=3.243204336 podStartE2EDuration="6.469874156s" podCreationTimestamp="2026-04-23 17:55:49 +0000 UTC" firstStartedPulling="2026-04-23 17:55:51.346357326 +0000 UTC m=+215.134096563" lastFinishedPulling="2026-04-23 17:55:54.57302714 +0000 UTC m=+218.360766383" observedRunningTime="2026-04-23 17:55:55.469010084 +0000 UTC m=+219.256749345" watchObservedRunningTime="2026-04-23 17:55:55.469874156 +0000 UTC m=+219.257613417" Apr 23 17:55:56.165615 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:56.165578 2565 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-67d59486d7-29n9s"] Apr 23 17:55:57.288831 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:57.288794 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dghxj" Apr 23 17:55:58.414783 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:58.414748 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a59b3796-fb8a-4c8b-adc5-0a8ee932193f","Type":"ContainerStarted","Data":"df398a108058790b4ebeaa001d921136049db43c223a0f2df872bb4c949732ba"} Apr 23 17:55:58.414783 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:55:58.414786 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a59b3796-fb8a-4c8b-adc5-0a8ee932193f","Type":"ContainerStarted","Data":"4289c683674e428157a757e2731c6a16d30bbd16520435d6a4d9efebe111dbe9"} Apr 23 17:56:00.338852 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:00.338811 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-67d59486d7-29n9s" Apr 23 17:56:00.424312 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:00.424273 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a59b3796-fb8a-4c8b-adc5-0a8ee932193f","Type":"ContainerStarted","Data":"c817a835226286c94ea7be8768e82f0c5d42114bf92bfb47a806ea4252ae0bb5"} Apr 23 17:56:00.424312 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:00.424315 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"a59b3796-fb8a-4c8b-adc5-0a8ee932193f","Type":"ContainerStarted","Data":"99ebcf661c243b7bb42cdfe0da79a9101b2f85972b1dc736bd6e7f3c9b3bbad5"} Apr 23 17:56:00.424494 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:00.424324 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a59b3796-fb8a-4c8b-adc5-0a8ee932193f","Type":"ContainerStarted","Data":"5dfa3d5f3f6dc851b2df3a7b02e4bd301adf22660d92f71094889428f3a84067"} Apr 23 17:56:00.424494 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:00.424332 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a59b3796-fb8a-4c8b-adc5-0a8ee932193f","Type":"ContainerStarted","Data":"74687517a3f0e0c58dab8eea64f7c8e2d3790207893df38ddbc5e1d86dec22cd"} Apr 23 17:56:00.459670 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:00.459612 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=3.194587268 podStartE2EDuration="11.45959806s" podCreationTimestamp="2026-04-23 17:55:49 +0000 UTC" firstStartedPulling="2026-04-23 17:55:51.37248018 +0000 UTC m=+215.160219425" lastFinishedPulling="2026-04-23 17:55:59.63749098 +0000 UTC m=+223.425230217" observedRunningTime="2026-04-23 17:56:00.457285258 +0000 UTC m=+224.245024518" watchObservedRunningTime="2026-04-23 17:56:00.45959806 +0000 UTC m=+224.247337297" Apr 23 17:56:03.395820 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:03.395786 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-9vf42" Apr 23 17:56:04.681381 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:04.681342 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab8d91e1-def0-4ec9-93d5-476175cef3cd-metrics-certs\") pod \"network-metrics-daemon-4fq2j\" (UID: \"ab8d91e1-def0-4ec9-93d5-476175cef3cd\") " pod="openshift-multus/network-metrics-daemon-4fq2j" Apr 23 17:56:04.683678 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:04.683661 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab8d91e1-def0-4ec9-93d5-476175cef3cd-metrics-certs\") pod \"network-metrics-daemon-4fq2j\" (UID: \"ab8d91e1-def0-4ec9-93d5-476175cef3cd\") " pod="openshift-multus/network-metrics-daemon-4fq2j" Apr 23 17:56:04.720872 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:04.720849 2565 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4fq2j" Apr 23 17:56:04.782100 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:04.782071 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p2cwt\" (UniqueName: \"kubernetes.io/projected/133d891d-d4a9-44a1-ac6f-7a963f5616fe-kube-api-access-p2cwt\") pod \"network-check-target-7n2tw\" (UID: \"133d891d-d4a9-44a1-ac6f-7a963f5616fe\") " pod="openshift-network-diagnostics/network-check-target-7n2tw" Apr 23 17:56:04.785121 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:04.785096 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2cwt\" (UniqueName: \"kubernetes.io/projected/133d891d-d4a9-44a1-ac6f-7a963f5616fe-kube-api-access-p2cwt\") pod \"network-check-target-7n2tw\" (UID: \"133d891d-d4a9-44a1-ac6f-7a963f5616fe\") " pod="openshift-network-diagnostics/network-check-target-7n2tw" Apr 23 17:56:04.838866 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:04.838841 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-4fq2j"] Apr 23 17:56:04.841161 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:56:04.841132 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab8d91e1_def0_4ec9_93d5_476175cef3cd.slice/crio-d6fd1b348027eb68bbe1bb6178da5f87948a92bfc01e967fa8a172a95a31884e WatchSource:0}: Error finding container d6fd1b348027eb68bbe1bb6178da5f87948a92bfc01e967fa8a172a95a31884e: Status 404 returned error can't find the container with id d6fd1b348027eb68bbe1bb6178da5f87948a92bfc01e967fa8a172a95a31884e Apr 23 17:56:05.016395 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:05.016370 2565 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7n2tw" Apr 23 17:56:05.125934 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:05.125806 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-7n2tw"] Apr 23 17:56:05.128339 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:56:05.128300 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod133d891d_d4a9_44a1_ac6f_7a963f5616fe.slice/crio-dca1a9bb07f9ae3f0d49d550fdceb94709b4bcdc413a3ecb47b330db61508b63 WatchSource:0}: Error finding container dca1a9bb07f9ae3f0d49d550fdceb94709b4bcdc413a3ecb47b330db61508b63: Status 404 returned error can't find the container with id dca1a9bb07f9ae3f0d49d550fdceb94709b4bcdc413a3ecb47b330db61508b63 Apr 23 17:56:05.360689 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:05.360597 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:56:05.442914 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:05.442864 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4fq2j" event={"ID":"ab8d91e1-def0-4ec9-93d5-476175cef3cd","Type":"ContainerStarted","Data":"d6fd1b348027eb68bbe1bb6178da5f87948a92bfc01e967fa8a172a95a31884e"} Apr 23 17:56:05.444008 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:05.443976 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-7n2tw" event={"ID":"133d891d-d4a9-44a1-ac6f-7a963f5616fe","Type":"ContainerStarted","Data":"dca1a9bb07f9ae3f0d49d550fdceb94709b4bcdc413a3ecb47b330db61508b63"} Apr 23 17:56:06.449587 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:06.449551 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4fq2j" event={"ID":"ab8d91e1-def0-4ec9-93d5-476175cef3cd","Type":"ContainerStarted","Data":"3241c530554f63448c7fc5d646be908c2a57e7972f16110201e2a9b1da13330f"} Apr 23 17:56:07.455481 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:07.455443 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4fq2j" event={"ID":"ab8d91e1-def0-4ec9-93d5-476175cef3cd","Type":"ContainerStarted","Data":"78fcad39f81c0021c944c23a78142c2b66ae059c3313c0af950fa15a35bec60e"} Apr 23 17:56:07.474633 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:07.474551 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-4fq2j" podStartSLOduration=112.020682889 podStartE2EDuration="1m53.474532528s" podCreationTimestamp="2026-04-23 17:54:14 +0000 UTC" firstStartedPulling="2026-04-23 17:56:04.842991789 +0000 UTC m=+228.630731031" lastFinishedPulling="2026-04-23 17:56:06.296841418 +0000 UTC m=+230.084580670" observedRunningTime="2026-04-23 17:56:07.473541476 +0000 UTC m=+231.261280737" watchObservedRunningTime="2026-04-23 17:56:07.474532528 +0000 UTC m=+231.262271791" Apr 23 17:56:08.459652 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:08.459615 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-7n2tw" event={"ID":"133d891d-d4a9-44a1-ac6f-7a963f5616fe","Type":"ContainerStarted","Data":"f9a81018a9855bc9c11163b53cf69cda6b9f7885d7ad36827fd7a1a611da0cc6"} Apr 23 17:56:08.460033 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:08.459794 2565 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-7n2tw" Apr 23 17:56:08.477844 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:08.477802 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-target-7n2tw" podStartSLOduration=111.799654768 podStartE2EDuration="1m54.477790094s" podCreationTimestamp="2026-04-23 17:54:14 +0000 UTC" firstStartedPulling="2026-04-23 17:56:05.130400656 +0000 UTC m=+228.918139895" lastFinishedPulling="2026-04-23 17:56:07.80853598 +0000 UTC m=+231.596275221" observedRunningTime="2026-04-23 17:56:08.476888441 +0000 UTC m=+232.264627700" watchObservedRunningTime="2026-04-23 17:56:08.477790094 +0000 UTC m=+232.265529373" Apr 23 17:56:08.515147 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:08.515115 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-monitoring/metrics-server-68b7b96464-lkmkn" Apr 23 17:56:08.515263 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:08.515156 2565 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-68b7b96464-lkmkn" Apr 23 17:56:09.352479 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:09.352451 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-54dc5c8cc9-q4z6v" Apr 23 17:56:22.430118 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:22.430073 2565 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-console/console-67d59486d7-29n9s" podUID="82055bc7-4a95-4051-a0c8-5f0d473f42ab" containerName="console" containerID="cri-o://8d017bb276c8ea3943e1af9fb6daba2842d39c3ce05698c91c95608a5a0abe6e" gracePeriod=15 Apr 23 17:56:22.718051 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:22.718031 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-67d59486d7-29n9s_82055bc7-4a95-4051-a0c8-5f0d473f42ab/console/0.log" Apr 23 17:56:22.718158 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:22.718100 2565 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-67d59486d7-29n9s" Apr 23 17:56:22.820153 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:22.820125 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/82055bc7-4a95-4051-a0c8-5f0d473f42ab-console-oauth-config\") pod \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\" (UID: \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\") " Apr 23 17:56:22.820320 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:22.820179 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/82055bc7-4a95-4051-a0c8-5f0d473f42ab-console-config\") pod \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\" (UID: \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\") " Apr 23 17:56:22.820320 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:22.820202 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/82055bc7-4a95-4051-a0c8-5f0d473f42ab-oauth-serving-cert\") pod \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\" (UID: \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\") " Apr 23 17:56:22.820320 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:22.820226 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82055bc7-4a95-4051-a0c8-5f0d473f42ab-trusted-ca-bundle\") pod \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\" (UID: \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\") " Apr 23 17:56:22.820320 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:22.820264 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/82055bc7-4a95-4051-a0c8-5f0d473f42ab-service-ca\") pod \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\" (UID: \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\") " Apr 23 17:56:22.820320 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:22.820295 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/82055bc7-4a95-4051-a0c8-5f0d473f42ab-console-serving-cert\") pod \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\" (UID: \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\") " Apr 23 17:56:22.820320 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:22.820319 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6wth\" (UniqueName: \"kubernetes.io/projected/82055bc7-4a95-4051-a0c8-5f0d473f42ab-kube-api-access-r6wth\") pod \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\" (UID: \"82055bc7-4a95-4051-a0c8-5f0d473f42ab\") " Apr 23 17:56:22.820614 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:22.820587 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82055bc7-4a95-4051-a0c8-5f0d473f42ab-console-config" (OuterVolumeSpecName: "console-config") pod "82055bc7-4a95-4051-a0c8-5f0d473f42ab" (UID: "82055bc7-4a95-4051-a0c8-5f0d473f42ab"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 17:56:22.820853 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:22.820824 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82055bc7-4a95-4051-a0c8-5f0d473f42ab-service-ca" (OuterVolumeSpecName: "service-ca") pod "82055bc7-4a95-4051-a0c8-5f0d473f42ab" (UID: "82055bc7-4a95-4051-a0c8-5f0d473f42ab"). 
InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 17:56:22.820994 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:22.820875 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82055bc7-4a95-4051-a0c8-5f0d473f42ab-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "82055bc7-4a95-4051-a0c8-5f0d473f42ab" (UID: "82055bc7-4a95-4051-a0c8-5f0d473f42ab"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 17:56:22.820994 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:22.820897 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82055bc7-4a95-4051-a0c8-5f0d473f42ab-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "82055bc7-4a95-4051-a0c8-5f0d473f42ab" (UID: "82055bc7-4a95-4051-a0c8-5f0d473f42ab"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 17:56:22.822389 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:22.822363 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82055bc7-4a95-4051-a0c8-5f0d473f42ab-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "82055bc7-4a95-4051-a0c8-5f0d473f42ab" (UID: "82055bc7-4a95-4051-a0c8-5f0d473f42ab"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 17:56:22.822655 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:22.822632 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82055bc7-4a95-4051-a0c8-5f0d473f42ab-kube-api-access-r6wth" (OuterVolumeSpecName: "kube-api-access-r6wth") pod "82055bc7-4a95-4051-a0c8-5f0d473f42ab" (UID: "82055bc7-4a95-4051-a0c8-5f0d473f42ab"). InnerVolumeSpecName "kube-api-access-r6wth". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 17:56:22.822699 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:22.822674 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82055bc7-4a95-4051-a0c8-5f0d473f42ab-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "82055bc7-4a95-4051-a0c8-5f0d473f42ab" (UID: "82055bc7-4a95-4051-a0c8-5f0d473f42ab"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 17:56:22.921205 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:22.921184 2565 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/82055bc7-4a95-4051-a0c8-5f0d473f42ab-console-oauth-config\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\"" Apr 23 17:56:22.921292 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:22.921207 2565 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/82055bc7-4a95-4051-a0c8-5f0d473f42ab-console-config\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\"" Apr 23 17:56:22.921292 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:22.921216 2565 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/82055bc7-4a95-4051-a0c8-5f0d473f42ab-oauth-serving-cert\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\"" Apr 23 17:56:22.921292 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:22.921226 2565 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82055bc7-4a95-4051-a0c8-5f0d473f42ab-trusted-ca-bundle\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\"" Apr 23 17:56:22.921292 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:22.921234 2565 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/82055bc7-4a95-4051-a0c8-5f0d473f42ab-service-ca\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\"" Apr 23 17:56:22.921292 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:22.921243 2565 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/82055bc7-4a95-4051-a0c8-5f0d473f42ab-console-serving-cert\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\"" Apr 23 17:56:22.921292 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:22.921251 2565 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r6wth\" (UniqueName: \"kubernetes.io/projected/82055bc7-4a95-4051-a0c8-5f0d473f42ab-kube-api-access-r6wth\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\"" Apr 23 17:56:23.502468 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:23.502440 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-67d59486d7-29n9s_82055bc7-4a95-4051-a0c8-5f0d473f42ab/console/0.log" Apr 23 17:56:23.502842 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:23.502487 2565 generic.go:358] "Generic (PLEG): container finished" podID="82055bc7-4a95-4051-a0c8-5f0d473f42ab" containerID="8d017bb276c8ea3943e1af9fb6daba2842d39c3ce05698c91c95608a5a0abe6e" exitCode=2 Apr 23 17:56:23.502842 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:23.502547 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67d59486d7-29n9s" event={"ID":"82055bc7-4a95-4051-a0c8-5f0d473f42ab","Type":"ContainerDied","Data":"8d017bb276c8ea3943e1af9fb6daba2842d39c3ce05698c91c95608a5a0abe6e"} Apr 23 17:56:23.502842 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:23.502570 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67d59486d7-29n9s" event={"ID":"82055bc7-4a95-4051-a0c8-5f0d473f42ab","Type":"ContainerDied","Data":"001f81713d91a50cc628f845ed3c48ebb5bbfa8673cf6875abfc9c1625388bfc"} Apr 23 17:56:23.502842 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:23.502577 2565 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-67d59486d7-29n9s" Apr 23 17:56:23.502842 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:23.502587 2565 scope.go:117] "RemoveContainer" containerID="8d017bb276c8ea3943e1af9fb6daba2842d39c3ce05698c91c95608a5a0abe6e" Apr 23 17:56:23.521003 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:23.520980 2565 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-67d59486d7-29n9s"] Apr 23 17:56:23.525532 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:23.525508 2565 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-67d59486d7-29n9s"] Apr 23 17:56:23.527125 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:23.527106 2565 scope.go:117] "RemoveContainer" containerID="8d017bb276c8ea3943e1af9fb6daba2842d39c3ce05698c91c95608a5a0abe6e" Apr 23 17:56:23.527397 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:56:23.527376 2565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d017bb276c8ea3943e1af9fb6daba2842d39c3ce05698c91c95608a5a0abe6e\": container with ID starting with 8d017bb276c8ea3943e1af9fb6daba2842d39c3ce05698c91c95608a5a0abe6e not found: ID does not exist" containerID="8d017bb276c8ea3943e1af9fb6daba2842d39c3ce05698c91c95608a5a0abe6e" Apr 23 17:56:23.527441 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:23.527407 2565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d017bb276c8ea3943e1af9fb6daba2842d39c3ce05698c91c95608a5a0abe6e"} err="failed to get container status \"8d017bb276c8ea3943e1af9fb6daba2842d39c3ce05698c91c95608a5a0abe6e\": rpc error: code = NotFound desc = could not find container \"8d017bb276c8ea3943e1af9fb6daba2842d39c3ce05698c91c95608a5a0abe6e\": container with ID starting with 8d017bb276c8ea3943e1af9fb6daba2842d39c3ce05698c91c95608a5a0abe6e not found: ID does not exist" Apr 23 17:56:24.910699 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:24.910667 2565 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82055bc7-4a95-4051-a0c8-5f0d473f42ab" path="/var/lib/kubelet/pods/82055bc7-4a95-4051-a0c8-5f0d473f42ab/volumes" Apr 23 17:56:28.519987 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:28.519871 2565 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-68b7b96464-lkmkn" Apr 23 17:56:28.523773 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:28.523752 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-68b7b96464-lkmkn" Apr 23 17:56:39.465043 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:39.465011 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-7n2tw" Apr 23 17:56:50.360460 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:50.360423 2565 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:56:50.381856 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:50.381833 2565 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:56:50.593733 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:56:50.593710 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:57:08.400126 ip-10-0-131-177 
Apr 23 17:57:08.400126 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.400081 2565 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Apr 23 17:57:08.400725 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.400607 2565 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="a59b3796-fb8a-4c8b-adc5-0a8ee932193f" containerName="prometheus" containerID="cri-o://4289c683674e428157a757e2731c6a16d30bbd16520435d6a4d9efebe111dbe9" gracePeriod=600
Apr 23 17:57:08.400725 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.400659 2565 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="a59b3796-fb8a-4c8b-adc5-0a8ee932193f" containerName="thanos-sidecar" containerID="cri-o://74687517a3f0e0c58dab8eea64f7c8e2d3790207893df38ddbc5e1d86dec22cd" gracePeriod=600
Apr 23 17:57:08.400725 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.400643 2565 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="a59b3796-fb8a-4c8b-adc5-0a8ee932193f" containerName="kube-rbac-proxy" containerID="cri-o://99ebcf661c243b7bb42cdfe0da79a9101b2f85972b1dc736bd6e7f3c9b3bbad5" gracePeriod=600
Apr 23 17:57:08.400725 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.400675 2565 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="a59b3796-fb8a-4c8b-adc5-0a8ee932193f" containerName="kube-rbac-proxy-thanos" containerID="cri-o://c817a835226286c94ea7be8768e82f0c5d42114bf92bfb47a806ea4252ae0bb5" gracePeriod=600
Apr 23 17:57:08.400924 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.400698 2565 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="a59b3796-fb8a-4c8b-adc5-0a8ee932193f" containerName="kube-rbac-proxy-web" containerID="cri-o://5dfa3d5f3f6dc851b2df3a7b02e4bd301adf22660d92f71094889428f3a84067" gracePeriod=600
Apr 23 17:57:08.401005 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.400745 2565 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="a59b3796-fb8a-4c8b-adc5-0a8ee932193f" containerName="config-reloader" containerID="cri-o://df398a108058790b4ebeaa001d921136049db43c223a0f2df872bb4c949732ba" gracePeriod=600
Apr 23 17:57:08.631524 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.631486 2565 generic.go:358] "Generic (PLEG): container finished" podID="a59b3796-fb8a-4c8b-adc5-0a8ee932193f" containerID="c817a835226286c94ea7be8768e82f0c5d42114bf92bfb47a806ea4252ae0bb5" exitCode=0
Apr 23 17:57:08.631524 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.631517 2565 generic.go:358] "Generic (PLEG): container finished" podID="a59b3796-fb8a-4c8b-adc5-0a8ee932193f" containerID="99ebcf661c243b7bb42cdfe0da79a9101b2f85972b1dc736bd6e7f3c9b3bbad5" exitCode=0
Apr 23 17:57:08.631524 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.631525 2565 generic.go:358] "Generic (PLEG): container finished" podID="a59b3796-fb8a-4c8b-adc5-0a8ee932193f" containerID="5dfa3d5f3f6dc851b2df3a7b02e4bd301adf22660d92f71094889428f3a84067" exitCode=0
Apr 23 17:57:08.631524 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.631532 2565 generic.go:358] "Generic (PLEG): container finished" podID="a59b3796-fb8a-4c8b-adc5-0a8ee932193f" containerID="74687517a3f0e0c58dab8eea64f7c8e2d3790207893df38ddbc5e1d86dec22cd" exitCode=0
Apr 23 17:57:08.631789 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.631540 2565 generic.go:358] "Generic (PLEG): container finished" podID="a59b3796-fb8a-4c8b-adc5-0a8ee932193f" containerID="df398a108058790b4ebeaa001d921136049db43c223a0f2df872bb4c949732ba" exitCode=0
Apr 23 17:57:08.631789 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.631546 2565 generic.go:358] "Generic (PLEG): container finished" podID="a59b3796-fb8a-4c8b-adc5-0a8ee932193f" containerID="4289c683674e428157a757e2731c6a16d30bbd16520435d6a4d9efebe111dbe9" exitCode=0
Apr 23 17:57:08.631789 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.631582 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a59b3796-fb8a-4c8b-adc5-0a8ee932193f","Type":"ContainerDied","Data":"c817a835226286c94ea7be8768e82f0c5d42114bf92bfb47a806ea4252ae0bb5"}
Apr 23 17:57:08.631789 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.631676 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a59b3796-fb8a-4c8b-adc5-0a8ee932193f","Type":"ContainerDied","Data":"99ebcf661c243b7bb42cdfe0da79a9101b2f85972b1dc736bd6e7f3c9b3bbad5"}
Apr 23 17:57:08.631789 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.631692 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a59b3796-fb8a-4c8b-adc5-0a8ee932193f","Type":"ContainerDied","Data":"5dfa3d5f3f6dc851b2df3a7b02e4bd301adf22660d92f71094889428f3a84067"}
Apr 23 17:57:08.631789 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.631705 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a59b3796-fb8a-4c8b-adc5-0a8ee932193f","Type":"ContainerDied","Data":"74687517a3f0e0c58dab8eea64f7c8e2d3790207893df38ddbc5e1d86dec22cd"}
Apr 23 17:57:08.631789 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.631717 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a59b3796-fb8a-4c8b-adc5-0a8ee932193f","Type":"ContainerDied","Data":"df398a108058790b4ebeaa001d921136049db43c223a0f2df872bb4c949732ba"}
Apr 23 17:57:08.631789 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.631731 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a59b3796-fb8a-4c8b-adc5-0a8ee932193f","Type":"ContainerDied","Data":"4289c683674e428157a757e2731c6a16d30bbd16520435d6a4d9efebe111dbe9"}
Apr 23 17:57:08.642090 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.642068 2565 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
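All six containers of prometheus-k8s-0 are killed with gracePeriod=600 (seconds, i.e. the pod's terminationGracePeriodSeconds), and every one exits with exitCode=0 roughly 230 ms later, so the grace window is never close to exhausted. A minimal sketch (same hypothetical one-entry-per-line kubelet.log) that pairs each kill request with the matching PLEG "container finished" event to measure per-container shutdown latency:

import re
from datetime import datetime

# The kill entry carries "cri-o://<id>", the finished entry just "<id>".
KILL = re.compile(r'I\d{4} (\S+) .*Killing container with a grace period.*containerID="cri-o://([0-9a-f]+)"')
DONE = re.compile(r'I\d{4} (\S+) .*container finished.*containerID="([0-9a-f]+)"')

def shutdown_latencies(lines):
    """Map container ID -> seconds between the kill request and its exit event."""
    ts = lambda s: datetime.strptime(s, "%H:%M:%S.%f")
    killed, latency = {}, {}
    for line in lines:
        if m := KILL.search(line):
            killed[m.group(2)] = ts(m.group(1))
        elif (m := DONE.search(line)) and m.group(2) in killed:
            latency[m.group(2)] = (ts(m.group(1)) - killed[m.group(2)]).total_seconds()
    return latency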
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:57:08.760391 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.760366 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-prometheus-k8s-tls\") pod \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " Apr 23 17:57:08.760559 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.760400 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-config-out\") pod \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " Apr 23 17:57:08.760559 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.760428 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-prometheus-k8s-rulefiles-0\") pod \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " Apr 23 17:57:08.760559 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.760457 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-grpc-tls\") pod \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " Apr 23 17:57:08.760678 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.760569 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-configmap-serving-certs-ca-bundle\") pod \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " Apr 23 17:57:08.760678 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.760611 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-tls-assets\") pod \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " Apr 23 17:57:08.760678 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.760641 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-metrics-client-certs\") pod \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " Apr 23 17:57:08.760678 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.760669 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-web-config\") pod \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " Apr 23 17:57:08.760937 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.760696 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-prometheus-trusted-ca-bundle\") pod \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " Apr 23 17:57:08.760937 
ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.760721 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54dwt\" (UniqueName: \"kubernetes.io/projected/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-kube-api-access-54dwt\") pod \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " Apr 23 17:57:08.760937 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.760757 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-kube-rbac-proxy\") pod \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " Apr 23 17:57:08.760937 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.760784 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-config\") pod \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " Apr 23 17:57:08.760937 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.760811 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-configmap-metrics-client-ca\") pod \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " Apr 23 17:57:08.760937 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.760850 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-thanos-prometheus-http-client-file\") pod \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " Apr 23 17:57:08.760937 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.760889 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-prometheus-k8s-db\") pod \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " Apr 23 17:57:08.760937 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.760924 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " Apr 23 17:57:08.761320 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.760988 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " Apr 23 17:57:08.761320 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.761028 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-configmap-kubelet-serving-ca-bundle\") pod \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\" (UID: \"a59b3796-fb8a-4c8b-adc5-0a8ee932193f\") " 
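Every volume of the old pod now walks through the same three reconciler steps: "operationExecutor.UnmountVolume started" (reconciler_common.go:162), "UnmountVolume.TearDown succeeded" (operation_generator.go:781), and finally "Volume detached" (reconciler_common.go:299). The UniqueName strings encode plugin and pod-scoped volume as kubernetes.io/<plugin>/<podUID>-<volume>; since pod UIDs are fixed-width 36-character UUIDs, the string splits mechanically. A minimal sketch, assuming only the layout visible in these entries:

def parse_unique_name(unique):
    """Split 'kubernetes.io/<plugin>/<podUID>-<volume>' into its three parts.
    Pod UIDs are 36-character UUIDs, so the split point is fixed."""
    prefix, rest = unique.rsplit("/", 1)
    plugin = prefix.split("/", 1)[1]        # e.g. "secret", "configmap", "empty-dir"
    pod_uid, volume = rest[:36], rest[37:]  # rest[36] is the joining "-"
    return plugin, pod_uid, volume

# parse_unique_name("kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-grpc-tls")
# -> ('secret', 'a59b3796-fb8a-4c8b-adc5-0a8ee932193f', 'secret-grpc-tls')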
Apr 23 17:57:08.761320 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.761102 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-configmap-serving-certs-ca-bundle" (OuterVolumeSpecName: "configmap-serving-certs-ca-bundle") pod "a59b3796-fb8a-4c8b-adc5-0a8ee932193f" (UID: "a59b3796-fb8a-4c8b-adc5-0a8ee932193f"). InnerVolumeSpecName "configmap-serving-certs-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 17:57:08.761456 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.761405 2565 reconciler_common.go:299] "Volume detached for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-configmap-serving-certs-ca-bundle\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 17:57:08.761514 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.761454 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "a59b3796-fb8a-4c8b-adc5-0a8ee932193f" (UID: "a59b3796-fb8a-4c8b-adc5-0a8ee932193f"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 17:57:08.762250 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.762205 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-prometheus-k8s-rulefiles-0" (OuterVolumeSpecName: "prometheus-k8s-rulefiles-0") pod "a59b3796-fb8a-4c8b-adc5-0a8ee932193f" (UID: "a59b3796-fb8a-4c8b-adc5-0a8ee932193f"). InnerVolumeSpecName "prometheus-k8s-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 17:57:08.763585 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.763234 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-prometheus-k8s-db" (OuterVolumeSpecName: "prometheus-k8s-db") pod "a59b3796-fb8a-4c8b-adc5-0a8ee932193f" (UID: "a59b3796-fb8a-4c8b-adc5-0a8ee932193f"). InnerVolumeSpecName "prometheus-k8s-db". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Apr 23 17:57:08.763585 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.763405 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-configmap-metrics-client-ca" (OuterVolumeSpecName: "configmap-metrics-client-ca") pod "a59b3796-fb8a-4c8b-adc5-0a8ee932193f" (UID: "a59b3796-fb8a-4c8b-adc5-0a8ee932193f"). InnerVolumeSpecName "configmap-metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 17:57:08.763585 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.763554 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-grpc-tls" (OuterVolumeSpecName: "secret-grpc-tls") pod "a59b3796-fb8a-4c8b-adc5-0a8ee932193f" (UID: "a59b3796-fb8a-4c8b-adc5-0a8ee932193f"). InnerVolumeSpecName "secret-grpc-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 17:57:08.763811 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.763608 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-prometheus-k8s-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-tls") pod "a59b3796-fb8a-4c8b-adc5-0a8ee932193f" (UID: "a59b3796-fb8a-4c8b-adc5-0a8ee932193f"). InnerVolumeSpecName "secret-prometheus-k8s-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 17:57:08.763973 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.763909 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-prometheus-trusted-ca-bundle" (OuterVolumeSpecName: "prometheus-trusted-ca-bundle") pod "a59b3796-fb8a-4c8b-adc5-0a8ee932193f" (UID: "a59b3796-fb8a-4c8b-adc5-0a8ee932193f"). InnerVolumeSpecName "prometheus-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 17:57:08.764389 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.764363 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-prometheus-k8s-thanos-sidecar-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-thanos-sidecar-tls") pod "a59b3796-fb8a-4c8b-adc5-0a8ee932193f" (UID: "a59b3796-fb8a-4c8b-adc5-0a8ee932193f"). InnerVolumeSpecName "secret-prometheus-k8s-thanos-sidecar-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 17:57:08.764474 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.764412 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-prometheus-k8s-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-prometheus-k8s-kube-rbac-proxy-web") pod "a59b3796-fb8a-4c8b-adc5-0a8ee932193f" (UID: "a59b3796-fb8a-4c8b-adc5-0a8ee932193f"). InnerVolumeSpecName "secret-prometheus-k8s-kube-rbac-proxy-web". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 17:57:08.764538 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.764507 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-config" (OuterVolumeSpecName: "config") pod "a59b3796-fb8a-4c8b-adc5-0a8ee932193f" (UID: "a59b3796-fb8a-4c8b-adc5-0a8ee932193f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 17:57:08.764754 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.764730 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "a59b3796-fb8a-4c8b-adc5-0a8ee932193f" (UID: "a59b3796-fb8a-4c8b-adc5-0a8ee932193f"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 17:57:08.764825 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.764808 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-config-out" (OuterVolumeSpecName: "config-out") pod "a59b3796-fb8a-4c8b-adc5-0a8ee932193f" (UID: "a59b3796-fb8a-4c8b-adc5-0a8ee932193f"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Apr 23 17:57:08.765591 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.765570 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "a59b3796-fb8a-4c8b-adc5-0a8ee932193f" (UID: "a59b3796-fb8a-4c8b-adc5-0a8ee932193f"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 23 17:57:08.765860 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.765839 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-kube-api-access-54dwt" (OuterVolumeSpecName: "kube-api-access-54dwt") pod "a59b3796-fb8a-4c8b-adc5-0a8ee932193f" (UID: "a59b3796-fb8a-4c8b-adc5-0a8ee932193f"). InnerVolumeSpecName "kube-api-access-54dwt". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 23 17:57:08.766077 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.766061 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "a59b3796-fb8a-4c8b-adc5-0a8ee932193f" (UID: "a59b3796-fb8a-4c8b-adc5-0a8ee932193f"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 17:57:08.766339 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.766326 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-kube-rbac-proxy" (OuterVolumeSpecName: "secret-kube-rbac-proxy") pod "a59b3796-fb8a-4c8b-adc5-0a8ee932193f" (UID: "a59b3796-fb8a-4c8b-adc5-0a8ee932193f"). InnerVolumeSpecName "secret-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 17:57:08.775022 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.774997 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-web-config" (OuterVolumeSpecName: "web-config") pod "a59b3796-fb8a-4c8b-adc5-0a8ee932193f" (UID: "a59b3796-fb8a-4c8b-adc5-0a8ee932193f"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 17:57:08.862495 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.862445 2565 reconciler_common.go:299] "Volume detached for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-prometheus-k8s-thanos-sidecar-tls\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 17:57:08.862495 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.862492 2565 reconciler_common.go:299] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-configmap-kubelet-serving-ca-bundle\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 17:57:08.862495 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.862505 2565 reconciler_common.go:299] "Volume detached for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-prometheus-k8s-tls\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 17:57:08.862700 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.862516 2565 reconciler_common.go:299] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-config-out\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 17:57:08.862700 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.862525 2565 reconciler_common.go:299] "Volume detached for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-prometheus-k8s-rulefiles-0\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 17:57:08.862700 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.862534 2565 reconciler_common.go:299] "Volume detached for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-grpc-tls\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 17:57:08.862700 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.862542 2565 reconciler_common.go:299] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-tls-assets\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 17:57:08.862700 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.862550 2565 reconciler_common.go:299] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-metrics-client-certs\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 17:57:08.862700 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.862559 2565 reconciler_common.go:299] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-web-config\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 17:57:08.862700 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.862567 2565 reconciler_common.go:299] "Volume detached for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-prometheus-trusted-ca-bundle\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 17:57:08.862700 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.862576 2565 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-54dwt\" (UniqueName: \"kubernetes.io/projected/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-kube-api-access-54dwt\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 17:57:08.862700 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.862585 2565 reconciler_common.go:299] "Volume detached for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-kube-rbac-proxy\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 17:57:08.862700 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.862594 2565 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-config\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 17:57:08.862700 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.862602 2565 reconciler_common.go:299] "Volume detached for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-configmap-metrics-client-ca\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 17:57:08.862700 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.862611 2565 reconciler_common.go:299] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-thanos-prometheus-http-client-file\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 17:57:08.862700 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.862620 2565 reconciler_common.go:299] "Volume detached for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-prometheus-k8s-db\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 17:57:08.862700 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:08.862628 2565 reconciler_common.go:299] "Volume detached for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/a59b3796-fb8a-4c8b-adc5-0a8ee932193f-secret-prometheus-k8s-kube-rbac-proxy-web\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 17:57:09.637135 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.637095 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a59b3796-fb8a-4c8b-adc5-0a8ee932193f","Type":"ContainerDied","Data":"8acb427372679bf1cca53901862576ad62c9e5129101823ca79814a4f43ebcc5"}
Apr 23 17:57:09.637560 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.637153 2565 scope.go:117] "RemoveContainer" containerID="c817a835226286c94ea7be8768e82f0c5d42114bf92bfb47a806ea4252ae0bb5"
Apr 23 17:57:09.637560 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.637206 2565 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:57:09.646670 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.646648 2565 scope.go:117] "RemoveContainer" containerID="99ebcf661c243b7bb42cdfe0da79a9101b2f85972b1dc736bd6e7f3c9b3bbad5" Apr 23 17:57:09.653422 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.653390 2565 scope.go:117] "RemoveContainer" containerID="5dfa3d5f3f6dc851b2df3a7b02e4bd301adf22660d92f71094889428f3a84067" Apr 23 17:57:09.659971 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.659932 2565 scope.go:117] "RemoveContainer" containerID="74687517a3f0e0c58dab8eea64f7c8e2d3790207893df38ddbc5e1d86dec22cd" Apr 23 17:57:09.660910 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.660894 2565 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Apr 23 17:57:09.664417 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.664365 2565 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Apr 23 17:57:09.666326 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.666306 2565 scope.go:117] "RemoveContainer" containerID="df398a108058790b4ebeaa001d921136049db43c223a0f2df872bb4c949732ba" Apr 23 17:57:09.672337 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.672320 2565 scope.go:117] "RemoveContainer" containerID="4289c683674e428157a757e2731c6a16d30bbd16520435d6a4d9efebe111dbe9" Apr 23 17:57:09.678694 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.678678 2565 scope.go:117] "RemoveContainer" containerID="84cf5338eae0365996730e0ddd04596e8cdf1ebd2c0d7d2aa557e7b967b53310" Apr 23 17:57:09.689161 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.689138 2565 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Apr 23 17:57:09.689428 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.689413 2565 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a59b3796-fb8a-4c8b-adc5-0a8ee932193f" containerName="prometheus" Apr 23 17:57:09.689504 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.689432 2565 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59b3796-fb8a-4c8b-adc5-0a8ee932193f" containerName="prometheus" Apr 23 17:57:09.689504 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.689444 2565 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a59b3796-fb8a-4c8b-adc5-0a8ee932193f" containerName="kube-rbac-proxy-thanos" Apr 23 17:57:09.689504 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.689451 2565 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59b3796-fb8a-4c8b-adc5-0a8ee932193f" containerName="kube-rbac-proxy-thanos" Apr 23 17:57:09.689504 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.689464 2565 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a59b3796-fb8a-4c8b-adc5-0a8ee932193f" containerName="config-reloader" Apr 23 17:57:09.689504 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.689473 2565 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59b3796-fb8a-4c8b-adc5-0a8ee932193f" containerName="config-reloader" Apr 23 17:57:09.689504 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.689485 2565 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a59b3796-fb8a-4c8b-adc5-0a8ee932193f" containerName="kube-rbac-proxy" Apr 23 17:57:09.689504 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.689493 2565 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a59b3796-fb8a-4c8b-adc5-0a8ee932193f" containerName="kube-rbac-proxy" Apr 23 17:57:09.689836 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.689513 2565 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a59b3796-fb8a-4c8b-adc5-0a8ee932193f" containerName="thanos-sidecar" Apr 23 17:57:09.689836 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.689521 2565 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59b3796-fb8a-4c8b-adc5-0a8ee932193f" containerName="thanos-sidecar" Apr 23 17:57:09.689836 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.689534 2565 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a59b3796-fb8a-4c8b-adc5-0a8ee932193f" containerName="init-config-reloader" Apr 23 17:57:09.689836 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.689543 2565 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59b3796-fb8a-4c8b-adc5-0a8ee932193f" containerName="init-config-reloader" Apr 23 17:57:09.689836 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.689556 2565 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="82055bc7-4a95-4051-a0c8-5f0d473f42ab" containerName="console" Apr 23 17:57:09.689836 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.689564 2565 state_mem.go:107] "Deleted CPUSet assignment" podUID="82055bc7-4a95-4051-a0c8-5f0d473f42ab" containerName="console" Apr 23 17:57:09.689836 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.689573 2565 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a59b3796-fb8a-4c8b-adc5-0a8ee932193f" containerName="kube-rbac-proxy-web" Apr 23 17:57:09.689836 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.689582 2565 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59b3796-fb8a-4c8b-adc5-0a8ee932193f" containerName="kube-rbac-proxy-web" Apr 23 17:57:09.689836 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.689643 2565 memory_manager.go:356] "RemoveStaleState removing state" podUID="a59b3796-fb8a-4c8b-adc5-0a8ee932193f" containerName="kube-rbac-proxy-thanos" Apr 23 17:57:09.689836 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.689655 2565 memory_manager.go:356] "RemoveStaleState removing state" podUID="a59b3796-fb8a-4c8b-adc5-0a8ee932193f" containerName="kube-rbac-proxy" Apr 23 17:57:09.689836 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.689665 2565 memory_manager.go:356] "RemoveStaleState removing state" podUID="a59b3796-fb8a-4c8b-adc5-0a8ee932193f" containerName="thanos-sidecar" Apr 23 17:57:09.689836 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.689676 2565 memory_manager.go:356] "RemoveStaleState removing state" podUID="82055bc7-4a95-4051-a0c8-5f0d473f42ab" containerName="console" Apr 23 17:57:09.689836 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.689685 2565 memory_manager.go:356] "RemoveStaleState removing state" podUID="a59b3796-fb8a-4c8b-adc5-0a8ee932193f" containerName="prometheus" Apr 23 17:57:09.689836 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.689695 2565 memory_manager.go:356] "RemoveStaleState removing state" podUID="a59b3796-fb8a-4c8b-adc5-0a8ee932193f" containerName="config-reloader" Apr 23 17:57:09.689836 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.689704 2565 memory_manager.go:356] "RemoveStaleState removing state" podUID="a59b3796-fb8a-4c8b-adc5-0a8ee932193f" containerName="kube-rbac-proxy-web" Apr 23 17:57:09.695029 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.695012 2565 util.go:30] "No sandbox for pod can be 
Apr 23 17:57:09.697732 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.697713 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-tls-assets-0\""
Apr 23 17:57:09.697816 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.697733 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s\""
Apr 23 17:57:09.697816 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.697755 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-kube-rbac-proxy-web\""
Apr 23 17:57:09.697816 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.697799 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-web-config\""
Apr 23 17:57:09.697995 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.697860 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-thanos-prometheus-http-client-file\""
Apr 23 17:57:09.698041 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.697997 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"serving-certs-ca-bundle\""
Apr 23 17:57:09.698090 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.698075 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-tls\""
Apr 23 17:57:09.698127 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.698103 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"kube-rbac-proxy\""
Apr 23 17:57:09.698254 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.698239 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"metrics-client-certs\""
Apr 23 17:57:09.698590 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.698573 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-dockercfg-7j54z\""
Apr 23 17:57:09.698682 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.698575 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-thanos-sidecar-tls\""
Apr 23 17:57:09.698682 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.698611 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-grpc-tls-6j8hbpofmt4ht\""
Apr 23 17:57:09.701779 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.701668 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-rulefiles-0\""
Apr 23 17:57:09.704053 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.703886 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"prometheus-trusted-ca-bundle\""
Apr 23 17:57:09.705677 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.705655 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Apr 23 17:57:09.869604 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.869556 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.869604 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.869602 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.869821 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.869628 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.869821 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.869653 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-config\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.869821 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.869674 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.869821 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.869739 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbdg6\" (UniqueName: \"kubernetes.io/projected/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-kube-api-access-vbdg6\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.869821 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.869800 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.870083 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.869836 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.870083 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.869876 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.870083 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.869909 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.870083 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.869941 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-config-out\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.870083 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.869981 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.870083 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.870028 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.870083 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.870063 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.870325 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.870093 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-web-config\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.870325 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.870110 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.870325 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.870154 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.870325 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.870178 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.971116 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.971031 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vbdg6\" (UniqueName: \"kubernetes.io/projected/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-kube-api-access-vbdg6\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.971116 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.971070 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.971116 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.971086 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.971342 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.971282 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.971342 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.971326 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.971491 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.971380 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-config-out\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.971491 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.971417 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.971491 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.971448 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.971699 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.971481 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.971699 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.971536 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-web-config\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.971699 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.971560 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.971699 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.971597 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.971699 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.971625 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.971699 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.971666 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.971699 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.971693 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.972030 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.971721 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.972030 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.971756 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-config\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.972030 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.971783 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.972030 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.972016 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.972559 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.972535 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.973322 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.973299 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.974257 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.974231 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-config-out\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.974352 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.974297 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.974352 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.974300 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.974553 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.974531 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.974756 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.974733 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.974817 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.974736 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.975211 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.975137 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.976309 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.975761 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.976309 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.976271 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.976977 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.976940 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-config\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.977145 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.977127 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-web-config\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.977203 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.977159 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.977250 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.977230 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.977455 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.977441 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:09.979468 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:09.979446 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbdg6\" (UniqueName: \"kubernetes.io/projected/b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4-kube-api-access-vbdg6\") pod \"prometheus-k8s-0\" (UID: \"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:10.004966 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:10.004921 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Apr 23 17:57:10.129496 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:10.129398 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Apr 23 17:57:10.641883 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:10.641845 2565 generic.go:358] "Generic (PLEG): container finished" podID="b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4" containerID="b0e4c8e44f9ffc17aa6a2afa8c1ffbf6f1aa755f7454ca17033f5358fd234889" exitCode=0
Apr 23 17:57:10.642299 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:10.641935 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4","Type":"ContainerDied","Data":"b0e4c8e44f9ffc17aa6a2afa8c1ffbf6f1aa755f7454ca17033f5358fd234889"}
Apr 23 17:57:10.642299 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:10.641984 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4","Type":"ContainerStarted","Data":"79b04c4ad35e96805f4e5a647586e6393c751272a0040344beeceb32b1719a83"}
Apr 23 17:57:10.912319 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:10.912295 2565 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a59b3796-fb8a-4c8b-adc5-0a8ee932193f" path="/var/lib/kubelet/pods/a59b3796-fb8a-4c8b-adc5-0a8ee932193f/volumes"
Apr 23 17:57:11.648084 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:11.648049 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4","Type":"ContainerStarted","Data":"6c6934e68cc81b9a362011810a005a8d8db68f624ecd93c469ace872776baca3"}
Apr 23 17:57:11.648084 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:11.648085 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4","Type":"ContainerStarted","Data":"4433519a0dc16235cf7284b90262e3192b6f779b7aa41a6d88a634baccf83638"}
Apr 23 17:57:11.648084 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:11.648095 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0"
event={"ID":"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4","Type":"ContainerStarted","Data":"eb1da2f718ce028b56230725f608e549cfabe949241666bfc008361773f88c1e"} Apr 23 17:57:11.648516 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:11.648103 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4","Type":"ContainerStarted","Data":"b9bb24b01174062cb659be5dbda98fefc3ad0da818c7366ac68f897089858224"} Apr 23 17:57:11.648516 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:11.648111 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4","Type":"ContainerStarted","Data":"842fb849be2b101c3e74a604b0c1053625f12398bdc2ef763691005048544525"} Apr 23 17:57:11.648516 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:11.648119 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4","Type":"ContainerStarted","Data":"f70cd337b1ae649811978fef0bad454c1928446e62698f19915eead48d4763e8"} Apr 23 17:57:11.680379 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:11.680326 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=2.680310238 podStartE2EDuration="2.680310238s" podCreationTimestamp="2026-04-23 17:57:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:57:11.678430883 +0000 UTC m=+295.466170144" watchObservedRunningTime="2026-04-23 17:57:11.680310238 +0000 UTC m=+295.468049495" Apr 23 17:57:14.116731 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.116693 2565 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-78fbd5ddc7-b64lb"] Apr 23 17:57:14.120204 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.120179 2565 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-78fbd5ddc7-b64lb" Apr 23 17:57:14.123455 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.123434 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Apr 23 17:57:14.124715 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.124687 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Apr 23 17:57:14.124825 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.124726 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Apr 23 17:57:14.124825 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.124758 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Apr 23 17:57:14.124825 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.124691 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Apr 23 17:57:14.125008 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.124691 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Apr 23 17:57:14.125144 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.125129 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-5lt52\"" Apr 23 17:57:14.125193 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.125133 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Apr 23 17:57:14.128648 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.128630 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Apr 23 17:57:14.131102 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.131081 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-78fbd5ddc7-b64lb"] Apr 23 17:57:14.205500 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.205469 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6cdea478-ef91-4fff-8369-c5e45afbb677-console-config\") pod \"console-78fbd5ddc7-b64lb\" (UID: \"6cdea478-ef91-4fff-8369-c5e45afbb677\") " pod="openshift-console/console-78fbd5ddc7-b64lb" Apr 23 17:57:14.205500 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.205504 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv6nz\" (UniqueName: \"kubernetes.io/projected/6cdea478-ef91-4fff-8369-c5e45afbb677-kube-api-access-fv6nz\") pod \"console-78fbd5ddc7-b64lb\" (UID: \"6cdea478-ef91-4fff-8369-c5e45afbb677\") " pod="openshift-console/console-78fbd5ddc7-b64lb" Apr 23 17:57:14.205741 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.205522 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6cdea478-ef91-4fff-8369-c5e45afbb677-service-ca\") pod \"console-78fbd5ddc7-b64lb\" (UID: \"6cdea478-ef91-4fff-8369-c5e45afbb677\") " pod="openshift-console/console-78fbd5ddc7-b64lb" Apr 23 17:57:14.205741 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.205544 2565 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cdea478-ef91-4fff-8369-c5e45afbb677-trusted-ca-bundle\") pod \"console-78fbd5ddc7-b64lb\" (UID: \"6cdea478-ef91-4fff-8369-c5e45afbb677\") " pod="openshift-console/console-78fbd5ddc7-b64lb" Apr 23 17:57:14.205741 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.205595 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6cdea478-ef91-4fff-8369-c5e45afbb677-console-oauth-config\") pod \"console-78fbd5ddc7-b64lb\" (UID: \"6cdea478-ef91-4fff-8369-c5e45afbb677\") " pod="openshift-console/console-78fbd5ddc7-b64lb" Apr 23 17:57:14.205741 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.205620 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6cdea478-ef91-4fff-8369-c5e45afbb677-oauth-serving-cert\") pod \"console-78fbd5ddc7-b64lb\" (UID: \"6cdea478-ef91-4fff-8369-c5e45afbb677\") " pod="openshift-console/console-78fbd5ddc7-b64lb" Apr 23 17:57:14.205741 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.205690 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6cdea478-ef91-4fff-8369-c5e45afbb677-console-serving-cert\") pod \"console-78fbd5ddc7-b64lb\" (UID: \"6cdea478-ef91-4fff-8369-c5e45afbb677\") " pod="openshift-console/console-78fbd5ddc7-b64lb" Apr 23 17:57:14.307099 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.307063 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6cdea478-ef91-4fff-8369-c5e45afbb677-console-oauth-config\") pod \"console-78fbd5ddc7-b64lb\" (UID: \"6cdea478-ef91-4fff-8369-c5e45afbb677\") " pod="openshift-console/console-78fbd5ddc7-b64lb" Apr 23 17:57:14.307099 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.307105 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6cdea478-ef91-4fff-8369-c5e45afbb677-oauth-serving-cert\") pod \"console-78fbd5ddc7-b64lb\" (UID: \"6cdea478-ef91-4fff-8369-c5e45afbb677\") " pod="openshift-console/console-78fbd5ddc7-b64lb" Apr 23 17:57:14.307340 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.307142 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6cdea478-ef91-4fff-8369-c5e45afbb677-console-serving-cert\") pod \"console-78fbd5ddc7-b64lb\" (UID: \"6cdea478-ef91-4fff-8369-c5e45afbb677\") " pod="openshift-console/console-78fbd5ddc7-b64lb" Apr 23 17:57:14.307340 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.307222 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6cdea478-ef91-4fff-8369-c5e45afbb677-console-config\") pod \"console-78fbd5ddc7-b64lb\" (UID: \"6cdea478-ef91-4fff-8369-c5e45afbb677\") " pod="openshift-console/console-78fbd5ddc7-b64lb" Apr 23 17:57:14.307340 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.307247 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fv6nz\" (UniqueName: 
\"kubernetes.io/projected/6cdea478-ef91-4fff-8369-c5e45afbb677-kube-api-access-fv6nz\") pod \"console-78fbd5ddc7-b64lb\" (UID: \"6cdea478-ef91-4fff-8369-c5e45afbb677\") " pod="openshift-console/console-78fbd5ddc7-b64lb" Apr 23 17:57:14.307340 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.307271 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6cdea478-ef91-4fff-8369-c5e45afbb677-service-ca\") pod \"console-78fbd5ddc7-b64lb\" (UID: \"6cdea478-ef91-4fff-8369-c5e45afbb677\") " pod="openshift-console/console-78fbd5ddc7-b64lb" Apr 23 17:57:14.307340 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.307296 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cdea478-ef91-4fff-8369-c5e45afbb677-trusted-ca-bundle\") pod \"console-78fbd5ddc7-b64lb\" (UID: \"6cdea478-ef91-4fff-8369-c5e45afbb677\") " pod="openshift-console/console-78fbd5ddc7-b64lb" Apr 23 17:57:14.307938 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.307915 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6cdea478-ef91-4fff-8369-c5e45afbb677-console-config\") pod \"console-78fbd5ddc7-b64lb\" (UID: \"6cdea478-ef91-4fff-8369-c5e45afbb677\") " pod="openshift-console/console-78fbd5ddc7-b64lb" Apr 23 17:57:14.308074 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.307937 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6cdea478-ef91-4fff-8369-c5e45afbb677-oauth-serving-cert\") pod \"console-78fbd5ddc7-b64lb\" (UID: \"6cdea478-ef91-4fff-8369-c5e45afbb677\") " pod="openshift-console/console-78fbd5ddc7-b64lb" Apr 23 17:57:14.308074 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.308050 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6cdea478-ef91-4fff-8369-c5e45afbb677-service-ca\") pod \"console-78fbd5ddc7-b64lb\" (UID: \"6cdea478-ef91-4fff-8369-c5e45afbb677\") " pod="openshift-console/console-78fbd5ddc7-b64lb" Apr 23 17:57:14.308251 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.308231 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cdea478-ef91-4fff-8369-c5e45afbb677-trusted-ca-bundle\") pod \"console-78fbd5ddc7-b64lb\" (UID: \"6cdea478-ef91-4fff-8369-c5e45afbb677\") " pod="openshift-console/console-78fbd5ddc7-b64lb" Apr 23 17:57:14.309531 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.309510 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6cdea478-ef91-4fff-8369-c5e45afbb677-console-oauth-config\") pod \"console-78fbd5ddc7-b64lb\" (UID: \"6cdea478-ef91-4fff-8369-c5e45afbb677\") " pod="openshift-console/console-78fbd5ddc7-b64lb" Apr 23 17:57:14.309619 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.309548 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6cdea478-ef91-4fff-8369-c5e45afbb677-console-serving-cert\") pod \"console-78fbd5ddc7-b64lb\" (UID: \"6cdea478-ef91-4fff-8369-c5e45afbb677\") " pod="openshift-console/console-78fbd5ddc7-b64lb" Apr 23 17:57:14.315450 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.315427 2565 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv6nz\" (UniqueName: \"kubernetes.io/projected/6cdea478-ef91-4fff-8369-c5e45afbb677-kube-api-access-fv6nz\") pod \"console-78fbd5ddc7-b64lb\" (UID: \"6cdea478-ef91-4fff-8369-c5e45afbb677\") " pod="openshift-console/console-78fbd5ddc7-b64lb" Apr 23 17:57:14.430169 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.430081 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-78fbd5ddc7-b64lb" Apr 23 17:57:14.550814 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.550780 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-78fbd5ddc7-b64lb"] Apr 23 17:57:14.553715 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:57:14.553685 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6cdea478_ef91_4fff_8369_c5e45afbb677.slice/crio-6c5b414d01fb1caa7bfb0534297ead6da51db46966d861d01f8fec9af1d72921 WatchSource:0}: Error finding container 6c5b414d01fb1caa7bfb0534297ead6da51db46966d861d01f8fec9af1d72921: Status 404 returned error can't find the container with id 6c5b414d01fb1caa7bfb0534297ead6da51db46966d861d01f8fec9af1d72921 Apr 23 17:57:14.659845 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.659796 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-78fbd5ddc7-b64lb" event={"ID":"6cdea478-ef91-4fff-8369-c5e45afbb677","Type":"ContainerStarted","Data":"9d9c1693681ed61518eb73c3a35946eda873a5ba0b3dfae219f3758c671e44ac"} Apr 23 17:57:14.659845 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.659835 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-78fbd5ddc7-b64lb" event={"ID":"6cdea478-ef91-4fff-8369-c5e45afbb677","Type":"ContainerStarted","Data":"6c5b414d01fb1caa7bfb0534297ead6da51db46966d861d01f8fec9af1d72921"} Apr 23 17:57:14.679864 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:14.679809 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-78fbd5ddc7-b64lb" podStartSLOduration=0.679789275 podStartE2EDuration="679.789275ms" podCreationTimestamp="2026-04-23 17:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:57:14.679038851 +0000 UTC m=+298.466778111" watchObservedRunningTime="2026-04-23 17:57:14.679789275 +0000 UTC m=+298.467528538" Apr 23 17:57:15.005452 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:15.005401 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:57:16.786948 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:16.786922 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_7e3c596a27faede1f97b6bb0972592f6/kube-rbac-proxy-crio/4.log" Apr 23 17:57:16.787307 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:16.786998 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_7e3c596a27faede1f97b6bb0972592f6/kube-rbac-proxy-crio/4.log" Apr 23 17:57:16.789025 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:16.789008 2565 kubelet.go:1628] "Image garbage collection succeeded" Apr 23 17:57:24.431310 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:24.431273 
2565 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-78fbd5ddc7-b64lb" Apr 23 17:57:24.431310 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:24.431316 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-78fbd5ddc7-b64lb" Apr 23 17:57:24.436211 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:24.436189 2565 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-78fbd5ddc7-b64lb" Apr 23 17:57:24.694780 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:24.694703 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-78fbd5ddc7-b64lb" Apr 23 17:57:33.966352 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:33.966319 2565 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kube-system/global-pull-secret-syncer-9765b"] Apr 23 17:57:33.969360 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:33.969344 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-9765b" Apr 23 17:57:33.971880 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:33.971862 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"original-pull-secret\"" Apr 23 17:57:33.977661 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:33.977636 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-9765b"] Apr 23 17:57:34.048361 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:34.048317 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/9743455b-0c0a-49c2-9dd5-2e99d372e2c4-kubelet-config\") pod \"global-pull-secret-syncer-9765b\" (UID: \"9743455b-0c0a-49c2-9dd5-2e99d372e2c4\") " pod="kube-system/global-pull-secret-syncer-9765b" Apr 23 17:57:34.048537 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:34.048428 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/9743455b-0c0a-49c2-9dd5-2e99d372e2c4-dbus\") pod \"global-pull-secret-syncer-9765b\" (UID: \"9743455b-0c0a-49c2-9dd5-2e99d372e2c4\") " pod="kube-system/global-pull-secret-syncer-9765b" Apr 23 17:57:34.048537 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:34.048464 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/9743455b-0c0a-49c2-9dd5-2e99d372e2c4-original-pull-secret\") pod \"global-pull-secret-syncer-9765b\" (UID: \"9743455b-0c0a-49c2-9dd5-2e99d372e2c4\") " pod="kube-system/global-pull-secret-syncer-9765b" Apr 23 17:57:34.149188 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:34.149156 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/9743455b-0c0a-49c2-9dd5-2e99d372e2c4-dbus\") pod \"global-pull-secret-syncer-9765b\" (UID: \"9743455b-0c0a-49c2-9dd5-2e99d372e2c4\") " pod="kube-system/global-pull-secret-syncer-9765b" Apr 23 17:57:34.149332 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:34.149194 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/9743455b-0c0a-49c2-9dd5-2e99d372e2c4-original-pull-secret\") pod \"global-pull-secret-syncer-9765b\" (UID: 
\"9743455b-0c0a-49c2-9dd5-2e99d372e2c4\") " pod="kube-system/global-pull-secret-syncer-9765b" Apr 23 17:57:34.149332 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:34.149230 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/9743455b-0c0a-49c2-9dd5-2e99d372e2c4-kubelet-config\") pod \"global-pull-secret-syncer-9765b\" (UID: \"9743455b-0c0a-49c2-9dd5-2e99d372e2c4\") " pod="kube-system/global-pull-secret-syncer-9765b" Apr 23 17:57:34.149332 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:34.149322 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/9743455b-0c0a-49c2-9dd5-2e99d372e2c4-kubelet-config\") pod \"global-pull-secret-syncer-9765b\" (UID: \"9743455b-0c0a-49c2-9dd5-2e99d372e2c4\") " pod="kube-system/global-pull-secret-syncer-9765b" Apr 23 17:57:34.149490 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:34.149343 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/9743455b-0c0a-49c2-9dd5-2e99d372e2c4-dbus\") pod \"global-pull-secret-syncer-9765b\" (UID: \"9743455b-0c0a-49c2-9dd5-2e99d372e2c4\") " pod="kube-system/global-pull-secret-syncer-9765b" Apr 23 17:57:34.151510 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:34.151489 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/9743455b-0c0a-49c2-9dd5-2e99d372e2c4-original-pull-secret\") pod \"global-pull-secret-syncer-9765b\" (UID: \"9743455b-0c0a-49c2-9dd5-2e99d372e2c4\") " pod="kube-system/global-pull-secret-syncer-9765b" Apr 23 17:57:34.278542 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:34.278506 2565 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-9765b" Apr 23 17:57:34.394796 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:34.394759 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-9765b"] Apr 23 17:57:34.397493 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:57:34.397469 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9743455b_0c0a_49c2_9dd5_2e99d372e2c4.slice/crio-eacb4bd1b5b0fa088c94b8d12c41679316247d796124559dfbec6989e7a4922b WatchSource:0}: Error finding container eacb4bd1b5b0fa088c94b8d12c41679316247d796124559dfbec6989e7a4922b: Status 404 returned error can't find the container with id eacb4bd1b5b0fa088c94b8d12c41679316247d796124559dfbec6989e7a4922b Apr 23 17:57:34.399091 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:34.399072 2565 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 23 17:57:34.718546 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:34.718467 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-9765b" event={"ID":"9743455b-0c0a-49c2-9dd5-2e99d372e2c4","Type":"ContainerStarted","Data":"eacb4bd1b5b0fa088c94b8d12c41679316247d796124559dfbec6989e7a4922b"} Apr 23 17:57:38.734258 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:38.734217 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-9765b" event={"ID":"9743455b-0c0a-49c2-9dd5-2e99d372e2c4","Type":"ContainerStarted","Data":"24a412a0ae1aa86fa444e83e37db226a36efc47da212644d29099115f30c932b"} Apr 23 17:57:38.750028 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:57:38.749971 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/global-pull-secret-syncer-9765b" podStartSLOduration=2.358700705 podStartE2EDuration="5.749940632s" podCreationTimestamp="2026-04-23 17:57:33 +0000 UTC" firstStartedPulling="2026-04-23 17:57:34.399196804 +0000 UTC m=+318.186936043" lastFinishedPulling="2026-04-23 17:57:37.790436725 +0000 UTC m=+321.578175970" observedRunningTime="2026-04-23 17:57:38.749175711 +0000 UTC m=+322.536914971" watchObservedRunningTime="2026-04-23 17:57:38.749940632 +0000 UTC m=+322.537679927" Apr 23 17:58:10.005819 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:10.005720 2565 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:58:10.020806 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:10.020782 2565 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:58:10.839891 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:10.839858 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Apr 23 17:58:17.386101 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:17.386062 2565 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-qfjxk"] Apr 23 17:58:17.390654 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:17.390633 2565 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-qfjxk" Apr 23 17:58:17.393326 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:17.393302 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"custom-metrics-autoscaler-operator-dockercfg-kwj7m\"" Apr 23 17:58:17.393681 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:17.393661 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"openshift-service-ca.crt\"" Apr 23 17:58:17.393793 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:17.393732 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"kube-root-ca.crt\"" Apr 23 17:58:17.393793 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:17.393737 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"kedaorg-certs\"" Apr 23 17:58:17.400537 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:17.400499 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-qfjxk"] Apr 23 17:58:17.489735 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:17.489697 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/secret/b756e2ab-fa83-4a3a-bdfd-46cd754d87e2-certificates\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-qfjxk\" (UID: \"b756e2ab-fa83-4a3a-bdfd-46cd754d87e2\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-qfjxk" Apr 23 17:58:17.489735 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:17.489740 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twxx6\" (UniqueName: \"kubernetes.io/projected/b756e2ab-fa83-4a3a-bdfd-46cd754d87e2-kube-api-access-twxx6\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-qfjxk\" (UID: \"b756e2ab-fa83-4a3a-bdfd-46cd754d87e2\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-qfjxk" Apr 23 17:58:17.590255 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:17.590219 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/secret/b756e2ab-fa83-4a3a-bdfd-46cd754d87e2-certificates\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-qfjxk\" (UID: \"b756e2ab-fa83-4a3a-bdfd-46cd754d87e2\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-qfjxk" Apr 23 17:58:17.590399 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:17.590275 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-twxx6\" (UniqueName: \"kubernetes.io/projected/b756e2ab-fa83-4a3a-bdfd-46cd754d87e2-kube-api-access-twxx6\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-qfjxk\" (UID: \"b756e2ab-fa83-4a3a-bdfd-46cd754d87e2\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-qfjxk" Apr 23 17:58:17.592519 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:17.592495 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/secret/b756e2ab-fa83-4a3a-bdfd-46cd754d87e2-certificates\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-qfjxk\" (UID: \"b756e2ab-fa83-4a3a-bdfd-46cd754d87e2\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-qfjxk" Apr 23 17:58:17.598914 ip-10-0-131-177 kubenswrapper[2565]: I0423 
17:58:17.598894 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-twxx6\" (UniqueName: \"kubernetes.io/projected/b756e2ab-fa83-4a3a-bdfd-46cd754d87e2-kube-api-access-twxx6\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-qfjxk\" (UID: \"b756e2ab-fa83-4a3a-bdfd-46cd754d87e2\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-qfjxk" Apr 23 17:58:17.702416 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:17.702346 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-qfjxk" Apr 23 17:58:17.827169 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:17.827137 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-qfjxk"] Apr 23 17:58:17.830171 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:58:17.830142 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb756e2ab_fa83_4a3a_bdfd_46cd754d87e2.slice/crio-ee5eea2bc07af4a1fc68d3ecbf81c6c73e5c98e8862596d52ff1b2ee59f8ce22 WatchSource:0}: Error finding container ee5eea2bc07af4a1fc68d3ecbf81c6c73e5c98e8862596d52ff1b2ee59f8ce22: Status 404 returned error can't find the container with id ee5eea2bc07af4a1fc68d3ecbf81c6c73e5c98e8862596d52ff1b2ee59f8ce22 Apr 23 17:58:17.845377 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:17.845353 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-qfjxk" event={"ID":"b756e2ab-fa83-4a3a-bdfd-46cd754d87e2","Type":"ContainerStarted","Data":"ee5eea2bc07af4a1fc68d3ecbf81c6c73e5c98e8862596d52ff1b2ee59f8ce22"} Apr 23 17:58:21.858486 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:21.858441 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-qfjxk" event={"ID":"b756e2ab-fa83-4a3a-bdfd-46cd754d87e2","Type":"ContainerStarted","Data":"607c83ed27b407ce68bad6e52a2216b0145323aec104f19a46bda4ccd47b4613"} Apr 23 17:58:21.858872 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:21.858571 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-qfjxk" Apr 23 17:58:21.882762 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:21.882704 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-qfjxk" podStartSLOduration=1.66629356 podStartE2EDuration="4.882690793s" podCreationTimestamp="2026-04-23 17:58:17 +0000 UTC" firstStartedPulling="2026-04-23 17:58:17.832012602 +0000 UTC m=+361.619751840" lastFinishedPulling="2026-04-23 17:58:21.048409819 +0000 UTC m=+364.836149073" observedRunningTime="2026-04-23 17:58:21.881553306 +0000 UTC m=+365.669292563" watchObservedRunningTime="2026-04-23 17:58:21.882690793 +0000 UTC m=+365.670430076" Apr 23 17:58:22.171870 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:22.171794 2565 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/keda-admission-cf49989db-8vgvr"] Apr 23 17:58:22.175130 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:22.175108 2565 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-keda/keda-admission-cf49989db-8vgvr" Apr 23 17:58:22.177737 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:22.177718 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"keda-ocp-cabundle\"" Apr 23 17:58:22.177886 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:22.177808 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-admission-webhooks-certs\"" Apr 23 17:58:22.177985 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:22.177899 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-operator-dockercfg-ct5nf\"" Apr 23 17:58:22.184872 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:22.184852 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-admission-cf49989db-8vgvr"] Apr 23 17:58:22.333103 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:22.333070 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spfbm\" (UniqueName: \"kubernetes.io/projected/5759a8f8-6d66-490e-acd9-b9f4bc5675ab-kube-api-access-spfbm\") pod \"keda-admission-cf49989db-8vgvr\" (UID: \"5759a8f8-6d66-490e-acd9-b9f4bc5675ab\") " pod="openshift-keda/keda-admission-cf49989db-8vgvr" Apr 23 17:58:22.333272 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:22.333128 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/5759a8f8-6d66-490e-acd9-b9f4bc5675ab-certificates\") pod \"keda-admission-cf49989db-8vgvr\" (UID: \"5759a8f8-6d66-490e-acd9-b9f4bc5675ab\") " pod="openshift-keda/keda-admission-cf49989db-8vgvr" Apr 23 17:58:22.433819 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:22.433729 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/5759a8f8-6d66-490e-acd9-b9f4bc5675ab-certificates\") pod \"keda-admission-cf49989db-8vgvr\" (UID: \"5759a8f8-6d66-490e-acd9-b9f4bc5675ab\") " pod="openshift-keda/keda-admission-cf49989db-8vgvr" Apr 23 17:58:22.433819 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:22.433806 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-spfbm\" (UniqueName: \"kubernetes.io/projected/5759a8f8-6d66-490e-acd9-b9f4bc5675ab-kube-api-access-spfbm\") pod \"keda-admission-cf49989db-8vgvr\" (UID: \"5759a8f8-6d66-490e-acd9-b9f4bc5675ab\") " pod="openshift-keda/keda-admission-cf49989db-8vgvr" Apr 23 17:58:22.434071 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:58:22.433880 2565 projected.go:264] Couldn't get secret openshift-keda/keda-admission-webhooks-certs: secret "keda-admission-webhooks-certs" not found Apr 23 17:58:22.434071 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:58:22.433907 2565 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-admission-cf49989db-8vgvr: secret "keda-admission-webhooks-certs" not found Apr 23 17:58:22.434071 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:58:22.433975 2565 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5759a8f8-6d66-490e-acd9-b9f4bc5675ab-certificates podName:5759a8f8-6d66-490e-acd9-b9f4bc5675ab nodeName:}" failed. No retries permitted until 2026-04-23 17:58:22.933945805 +0000 UTC m=+366.721685046 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/5759a8f8-6d66-490e-acd9-b9f4bc5675ab-certificates") pod "keda-admission-cf49989db-8vgvr" (UID: "5759a8f8-6d66-490e-acd9-b9f4bc5675ab") : secret "keda-admission-webhooks-certs" not found Apr 23 17:58:22.456290 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:22.456264 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-spfbm\" (UniqueName: \"kubernetes.io/projected/5759a8f8-6d66-490e-acd9-b9f4bc5675ab-kube-api-access-spfbm\") pod \"keda-admission-cf49989db-8vgvr\" (UID: \"5759a8f8-6d66-490e-acd9-b9f4bc5675ab\") " pod="openshift-keda/keda-admission-cf49989db-8vgvr" Apr 23 17:58:22.937272 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:22.937234 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/5759a8f8-6d66-490e-acd9-b9f4bc5675ab-certificates\") pod \"keda-admission-cf49989db-8vgvr\" (UID: \"5759a8f8-6d66-490e-acd9-b9f4bc5675ab\") " pod="openshift-keda/keda-admission-cf49989db-8vgvr" Apr 23 17:58:22.940045 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:22.940015 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/5759a8f8-6d66-490e-acd9-b9f4bc5675ab-certificates\") pod \"keda-admission-cf49989db-8vgvr\" (UID: \"5759a8f8-6d66-490e-acd9-b9f4bc5675ab\") " pod="openshift-keda/keda-admission-cf49989db-8vgvr" Apr 23 17:58:23.085887 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:23.085850 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-admission-cf49989db-8vgvr" Apr 23 17:58:23.207069 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:23.207039 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-admission-cf49989db-8vgvr"] Apr 23 17:58:23.209364 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:58:23.209333 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5759a8f8_6d66_490e_acd9_b9f4bc5675ab.slice/crio-a736c5a20c08867c842900872f737bc301cfffefda889920919537eb1a48f4ea WatchSource:0}: Error finding container a736c5a20c08867c842900872f737bc301cfffefda889920919537eb1a48f4ea: Status 404 returned error can't find the container with id a736c5a20c08867c842900872f737bc301cfffefda889920919537eb1a48f4ea Apr 23 17:58:23.865916 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:23.865878 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-admission-cf49989db-8vgvr" event={"ID":"5759a8f8-6d66-490e-acd9-b9f4bc5675ab","Type":"ContainerStarted","Data":"a736c5a20c08867c842900872f737bc301cfffefda889920919537eb1a48f4ea"} Apr 23 17:58:24.869617 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:24.869583 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-admission-cf49989db-8vgvr" event={"ID":"5759a8f8-6d66-490e-acd9-b9f4bc5675ab","Type":"ContainerStarted","Data":"1d0f8b8eda237800827c3cbf0c4ce7da27b7eedf741779c06ee889077fcd0a52"} Apr 23 17:58:24.870048 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:24.869689 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-admission-cf49989db-8vgvr" Apr 23 17:58:24.887266 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:24.887221 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
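
The failed certificates mount above is handled, not fatal: nestedpendingoperations records the failure and gates the next attempt behind a delay (durationBeforeRetry 500ms here), and the retry at 17:58:22.937272 succeeds because the keda-admission-webhooks-certs secret exists by then. A sketch of that per-operation retry gate; only the 500ms initial delay is taken from the log, while the exponential growth and cap are assumptions about the usual pattern, not kubelet's exact constants:

// Per-operation backoff gate implied by "No retries permitted until ...
// (durationBeforeRetry 500ms)" above. Growth and cap values are assumed.
package main

import (
	"fmt"
	"time"
)

type backoff struct {
	lastFailure time.Time
	delay       time.Duration
}

// allowed reports whether a new attempt may run at time now.
func (b *backoff) allowed(now time.Time) bool {
	return b.delay == 0 || now.After(b.lastFailure.Add(b.delay))
}

// failed records a failed attempt and lengthens the gate for the next one.
func (b *backoff) failed(now time.Time) {
	switch {
	case b.delay == 0:
		b.delay = 500 * time.Millisecond // first failure: 500ms gate, as logged
	case b.delay < 2*time.Minute:
		b.delay *= 2 // assumed exponential growth on repeated failures
	}
	b.lastFailure = now
}

func main() {
	var b backoff
	now := time.Now()
	fmt.Println(b.allowed(now)) // true: first attempt runs immediately
	b.failed(now)               // secret "keda-admission-webhooks-certs" not found
	fmt.Println(b.allowed(now.Add(100 * time.Millisecond))) // false: inside the 500ms gate
	fmt.Println(b.allowed(now.Add(600 * time.Millisecond))) // true: retry permitted again
}
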
pod="openshift-keda/keda-admission-cf49989db-8vgvr" podStartSLOduration=1.718168683 podStartE2EDuration="2.887203159s" podCreationTimestamp="2026-04-23 17:58:22 +0000 UTC" firstStartedPulling="2026-04-23 17:58:23.210740319 +0000 UTC m=+366.998479561" lastFinishedPulling="2026-04-23 17:58:24.379774788 +0000 UTC m=+368.167514037" observedRunningTime="2026-04-23 17:58:24.886671304 +0000 UTC m=+368.674410564" watchObservedRunningTime="2026-04-23 17:58:24.887203159 +0000 UTC m=+368.674942422" Apr 23 17:58:42.863777 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:42.863745 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-qfjxk" Apr 23 17:58:45.875315 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:58:45.875279 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-admission-cf49989db-8vgvr" Apr 23 17:59:30.962681 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:59:30.962594 2565 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/kserve-controller-manager-6fc5d867c5-lgr7w"] Apr 23 17:59:30.965817 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:59:30.965800 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-6fc5d867c5-lgr7w" Apr 23 17:59:30.968671 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:59:30.968648 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"kube-root-ca.crt\"" Apr 23 17:59:30.968785 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:59:30.968648 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"kserve-webhook-server-cert\"" Apr 23 17:59:30.969208 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:59:30.969192 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"openshift-service-ca.crt\"" Apr 23 17:59:30.969791 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:59:30.969777 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"kserve-controller-manager-dockercfg-7xmp6\"" Apr 23 17:59:30.977170 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:59:30.977149 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-6fc5d867c5-lgr7w"] Apr 23 17:59:30.998942 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:59:30.998911 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf28899e-273f-4b30-86fc-e83a8d14fa05-cert\") pod \"kserve-controller-manager-6fc5d867c5-lgr7w\" (UID: \"bf28899e-273f-4b30-86fc-e83a8d14fa05\") " pod="kserve/kserve-controller-manager-6fc5d867c5-lgr7w" Apr 23 17:59:30.999125 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:59:30.998945 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktkp2\" (UniqueName: \"kubernetes.io/projected/bf28899e-273f-4b30-86fc-e83a8d14fa05-kube-api-access-ktkp2\") pod \"kserve-controller-manager-6fc5d867c5-lgr7w\" (UID: \"bf28899e-273f-4b30-86fc-e83a8d14fa05\") " pod="kserve/kserve-controller-manager-6fc5d867c5-lgr7w" Apr 23 17:59:31.100294 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:59:31.100251 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf28899e-273f-4b30-86fc-e83a8d14fa05-cert\") pod \"kserve-controller-manager-6fc5d867c5-lgr7w\" (UID: 
\"bf28899e-273f-4b30-86fc-e83a8d14fa05\") " pod="kserve/kserve-controller-manager-6fc5d867c5-lgr7w" Apr 23 17:59:31.100477 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:59:31.100300 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ktkp2\" (UniqueName: \"kubernetes.io/projected/bf28899e-273f-4b30-86fc-e83a8d14fa05-kube-api-access-ktkp2\") pod \"kserve-controller-manager-6fc5d867c5-lgr7w\" (UID: \"bf28899e-273f-4b30-86fc-e83a8d14fa05\") " pod="kserve/kserve-controller-manager-6fc5d867c5-lgr7w" Apr 23 17:59:31.100477 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:59:31.100404 2565 secret.go:189] Couldn't get secret kserve/kserve-webhook-server-cert: secret "kserve-webhook-server-cert" not found Apr 23 17:59:31.100606 ip-10-0-131-177 kubenswrapper[2565]: E0423 17:59:31.100485 2565 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf28899e-273f-4b30-86fc-e83a8d14fa05-cert podName:bf28899e-273f-4b30-86fc-e83a8d14fa05 nodeName:}" failed. No retries permitted until 2026-04-23 17:59:31.600465009 +0000 UTC m=+435.388204269 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bf28899e-273f-4b30-86fc-e83a8d14fa05-cert") pod "kserve-controller-manager-6fc5d867c5-lgr7w" (UID: "bf28899e-273f-4b30-86fc-e83a8d14fa05") : secret "kserve-webhook-server-cert" not found Apr 23 17:59:31.110098 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:59:31.110069 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktkp2\" (UniqueName: \"kubernetes.io/projected/bf28899e-273f-4b30-86fc-e83a8d14fa05-kube-api-access-ktkp2\") pod \"kserve-controller-manager-6fc5d867c5-lgr7w\" (UID: \"bf28899e-273f-4b30-86fc-e83a8d14fa05\") " pod="kserve/kserve-controller-manager-6fc5d867c5-lgr7w" Apr 23 17:59:31.605583 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:59:31.605540 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf28899e-273f-4b30-86fc-e83a8d14fa05-cert\") pod \"kserve-controller-manager-6fc5d867c5-lgr7w\" (UID: \"bf28899e-273f-4b30-86fc-e83a8d14fa05\") " pod="kserve/kserve-controller-manager-6fc5d867c5-lgr7w" Apr 23 17:59:31.608070 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:59:31.608036 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf28899e-273f-4b30-86fc-e83a8d14fa05-cert\") pod \"kserve-controller-manager-6fc5d867c5-lgr7w\" (UID: \"bf28899e-273f-4b30-86fc-e83a8d14fa05\") " pod="kserve/kserve-controller-manager-6fc5d867c5-lgr7w" Apr 23 17:59:31.876052 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:59:31.875923 2565 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/kserve-controller-manager-6fc5d867c5-lgr7w" Apr 23 17:59:31.996523 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:59:31.996490 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-6fc5d867c5-lgr7w"] Apr 23 17:59:31.999661 ip-10-0-131-177 kubenswrapper[2565]: W0423 17:59:31.999630 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf28899e_273f_4b30_86fc_e83a8d14fa05.slice/crio-15620d3383ee803dd69e25bc52e6499dedcbc67a0c5223d4a4eab439e390dfb9 WatchSource:0}: Error finding container 15620d3383ee803dd69e25bc52e6499dedcbc67a0c5223d4a4eab439e390dfb9: Status 404 returned error can't find the container with id 15620d3383ee803dd69e25bc52e6499dedcbc67a0c5223d4a4eab439e390dfb9 Apr 23 17:59:32.061602 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:59:32.061565 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-6fc5d867c5-lgr7w" event={"ID":"bf28899e-273f-4b30-86fc-e83a8d14fa05","Type":"ContainerStarted","Data":"15620d3383ee803dd69e25bc52e6499dedcbc67a0c5223d4a4eab439e390dfb9"} Apr 23 17:59:35.074039 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:59:35.074007 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-6fc5d867c5-lgr7w" event={"ID":"bf28899e-273f-4b30-86fc-e83a8d14fa05","Type":"ContainerStarted","Data":"f1474c8e35bb0a9c527b8a512e04044aa542c8485fb4d90f3eba3d9fc97a0c37"} Apr 23 17:59:35.074431 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:59:35.074121 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/kserve-controller-manager-6fc5d867c5-lgr7w" Apr 23 17:59:35.094845 ip-10-0-131-177 kubenswrapper[2565]: I0423 17:59:35.094799 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/kserve-controller-manager-6fc5d867c5-lgr7w" podStartSLOduration=2.69401736 podStartE2EDuration="5.09478667s" podCreationTimestamp="2026-04-23 17:59:30 +0000 UTC" firstStartedPulling="2026-04-23 17:59:32.000923018 +0000 UTC m=+435.788662268" lastFinishedPulling="2026-04-23 17:59:34.401692337 +0000 UTC m=+438.189431578" observedRunningTime="2026-04-23 17:59:35.092572888 +0000 UTC m=+438.880312148" watchObservedRunningTime="2026-04-23 17:59:35.09478667 +0000 UTC m=+438.882525976" Apr 23 18:00:05.893182 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:05.893150 2565 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve/kserve-controller-manager-6fc5d867c5-lgr7w"] Apr 23 18:00:05.893717 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:05.893389 2565 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve/kserve-controller-manager-6fc5d867c5-lgr7w" podUID="bf28899e-273f-4b30-86fc-e83a8d14fa05" containerName="manager" containerID="cri-o://f1474c8e35bb0a9c527b8a512e04044aa542c8485fb4d90f3eba3d9fc97a0c37" gracePeriod=10 Apr 23 18:00:05.898251 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:05.898224 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/kserve-controller-manager-6fc5d867c5-lgr7w" Apr 23 18:00:05.914722 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:05.914692 2565 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/kserve-controller-manager-6fc5d867c5-6spwc"] Apr 23 18:00:05.917744 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:05.917728 2565 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/kserve-controller-manager-6fc5d867c5-6spwc" Apr 23 18:00:05.930071 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:05.930045 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-6fc5d867c5-6spwc"] Apr 23 18:00:06.009798 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:06.009775 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/341bfc4a-0fe2-45c9-b978-b320668afd82-cert\") pod \"kserve-controller-manager-6fc5d867c5-6spwc\" (UID: \"341bfc4a-0fe2-45c9-b978-b320668afd82\") " pod="kserve/kserve-controller-manager-6fc5d867c5-6spwc" Apr 23 18:00:06.009856 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:06.009826 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cvwq\" (UniqueName: \"kubernetes.io/projected/341bfc4a-0fe2-45c9-b978-b320668afd82-kube-api-access-4cvwq\") pod \"kserve-controller-manager-6fc5d867c5-6spwc\" (UID: \"341bfc4a-0fe2-45c9-b978-b320668afd82\") " pod="kserve/kserve-controller-manager-6fc5d867c5-6spwc" Apr 23 18:00:06.110643 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:06.110611 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/341bfc4a-0fe2-45c9-b978-b320668afd82-cert\") pod \"kserve-controller-manager-6fc5d867c5-6spwc\" (UID: \"341bfc4a-0fe2-45c9-b978-b320668afd82\") " pod="kserve/kserve-controller-manager-6fc5d867c5-6spwc" Apr 23 18:00:06.110805 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:06.110679 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4cvwq\" (UniqueName: \"kubernetes.io/projected/341bfc4a-0fe2-45c9-b978-b320668afd82-kube-api-access-4cvwq\") pod \"kserve-controller-manager-6fc5d867c5-6spwc\" (UID: \"341bfc4a-0fe2-45c9-b978-b320668afd82\") " pod="kserve/kserve-controller-manager-6fc5d867c5-6spwc" Apr 23 18:00:06.113277 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:06.113252 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/341bfc4a-0fe2-45c9-b978-b320668afd82-cert\") pod \"kserve-controller-manager-6fc5d867c5-6spwc\" (UID: \"341bfc4a-0fe2-45c9-b978-b320668afd82\") " pod="kserve/kserve-controller-manager-6fc5d867c5-6spwc" Apr 23 18:00:06.119658 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:06.119635 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cvwq\" (UniqueName: \"kubernetes.io/projected/341bfc4a-0fe2-45c9-b978-b320668afd82-kube-api-access-4cvwq\") pod \"kserve-controller-manager-6fc5d867c5-6spwc\" (UID: \"341bfc4a-0fe2-45c9-b978-b320668afd82\") " pod="kserve/kserve-controller-manager-6fc5d867c5-6spwc" Apr 23 18:00:06.130382 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:06.130361 2565 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve/kserve-controller-manager-6fc5d867c5-lgr7w" Apr 23 18:00:06.162090 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:06.162006 2565 generic.go:358] "Generic (PLEG): container finished" podID="bf28899e-273f-4b30-86fc-e83a8d14fa05" containerID="f1474c8e35bb0a9c527b8a512e04044aa542c8485fb4d90f3eba3d9fc97a0c37" exitCode=0 Apr 23 18:00:06.162090 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:06.162074 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-6fc5d867c5-lgr7w" event={"ID":"bf28899e-273f-4b30-86fc-e83a8d14fa05","Type":"ContainerDied","Data":"f1474c8e35bb0a9c527b8a512e04044aa542c8485fb4d90f3eba3d9fc97a0c37"} Apr 23 18:00:06.162259 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:06.162114 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-6fc5d867c5-lgr7w" event={"ID":"bf28899e-273f-4b30-86fc-e83a8d14fa05","Type":"ContainerDied","Data":"15620d3383ee803dd69e25bc52e6499dedcbc67a0c5223d4a4eab439e390dfb9"} Apr 23 18:00:06.162259 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:06.162129 2565 scope.go:117] "RemoveContainer" containerID="f1474c8e35bb0a9c527b8a512e04044aa542c8485fb4d90f3eba3d9fc97a0c37" Apr 23 18:00:06.162259 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:06.162083 2565 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-6fc5d867c5-lgr7w" Apr 23 18:00:06.170132 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:06.170107 2565 scope.go:117] "RemoveContainer" containerID="f1474c8e35bb0a9c527b8a512e04044aa542c8485fb4d90f3eba3d9fc97a0c37" Apr 23 18:00:06.170450 ip-10-0-131-177 kubenswrapper[2565]: E0423 18:00:06.170421 2565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1474c8e35bb0a9c527b8a512e04044aa542c8485fb4d90f3eba3d9fc97a0c37\": container with ID starting with f1474c8e35bb0a9c527b8a512e04044aa542c8485fb4d90f3eba3d9fc97a0c37 not found: ID does not exist" containerID="f1474c8e35bb0a9c527b8a512e04044aa542c8485fb4d90f3eba3d9fc97a0c37" Apr 23 18:00:06.170548 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:06.170461 2565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1474c8e35bb0a9c527b8a512e04044aa542c8485fb4d90f3eba3d9fc97a0c37"} err="failed to get container status \"f1474c8e35bb0a9c527b8a512e04044aa542c8485fb4d90f3eba3d9fc97a0c37\": rpc error: code = NotFound desc = could not find container \"f1474c8e35bb0a9c527b8a512e04044aa542c8485fb4d90f3eba3d9fc97a0c37\": container with ID starting with f1474c8e35bb0a9c527b8a512e04044aa542c8485fb4d90f3eba3d9fc97a0c37 not found: ID does not exist" Apr 23 18:00:06.274570 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:06.274527 2565 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/kserve-controller-manager-6fc5d867c5-6spwc" Apr 23 18:00:06.311505 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:06.311472 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf28899e-273f-4b30-86fc-e83a8d14fa05-cert\") pod \"bf28899e-273f-4b30-86fc-e83a8d14fa05\" (UID: \"bf28899e-273f-4b30-86fc-e83a8d14fa05\") " Apr 23 18:00:06.311694 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:06.311522 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktkp2\" (UniqueName: \"kubernetes.io/projected/bf28899e-273f-4b30-86fc-e83a8d14fa05-kube-api-access-ktkp2\") pod \"bf28899e-273f-4b30-86fc-e83a8d14fa05\" (UID: \"bf28899e-273f-4b30-86fc-e83a8d14fa05\") " Apr 23 18:00:06.313747 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:06.313708 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf28899e-273f-4b30-86fc-e83a8d14fa05-cert" (OuterVolumeSpecName: "cert") pod "bf28899e-273f-4b30-86fc-e83a8d14fa05" (UID: "bf28899e-273f-4b30-86fc-e83a8d14fa05"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 18:00:06.313747 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:06.313729 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf28899e-273f-4b30-86fc-e83a8d14fa05-kube-api-access-ktkp2" (OuterVolumeSpecName: "kube-api-access-ktkp2") pod "bf28899e-273f-4b30-86fc-e83a8d14fa05" (UID: "bf28899e-273f-4b30-86fc-e83a8d14fa05"). InnerVolumeSpecName "kube-api-access-ktkp2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 18:00:06.393443 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:06.393408 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-6fc5d867c5-6spwc"] Apr 23 18:00:06.397149 ip-10-0-131-177 kubenswrapper[2565]: W0423 18:00:06.397123 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod341bfc4a_0fe2_45c9_b978_b320668afd82.slice/crio-69f4907a907813d1dffbe64c72b8e4f7502dc471c6efb3788cb3ddb01580733c WatchSource:0}: Error finding container 69f4907a907813d1dffbe64c72b8e4f7502dc471c6efb3788cb3ddb01580733c: Status 404 returned error can't find the container with id 69f4907a907813d1dffbe64c72b8e4f7502dc471c6efb3788cb3ddb01580733c Apr 23 18:00:06.412781 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:06.412727 2565 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf28899e-273f-4b30-86fc-e83a8d14fa05-cert\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\"" Apr 23 18:00:06.412781 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:06.412749 2565 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ktkp2\" (UniqueName: \"kubernetes.io/projected/bf28899e-273f-4b30-86fc-e83a8d14fa05-kube-api-access-ktkp2\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\"" Apr 23 18:00:06.483684 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:06.483655 2565 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve/kserve-controller-manager-6fc5d867c5-lgr7w"] Apr 23 18:00:06.485719 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:06.485695 2565 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve/kserve-controller-manager-6fc5d867c5-lgr7w"] Apr 23 18:00:06.912531 ip-10-0-131-177 kubenswrapper[2565]: I0423 
18:00:06.912487 2565 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf28899e-273f-4b30-86fc-e83a8d14fa05" path="/var/lib/kubelet/pods/bf28899e-273f-4b30-86fc-e83a8d14fa05/volumes" Apr 23 18:00:07.166347 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:07.166261 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-6fc5d867c5-6spwc" event={"ID":"341bfc4a-0fe2-45c9-b978-b320668afd82","Type":"ContainerStarted","Data":"6517d4ccedc43a82aa40b74a0a0ad573349b3bb0979fec3ca9bcddbdb83f1989"} Apr 23 18:00:07.166347 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:07.166295 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-6fc5d867c5-6spwc" event={"ID":"341bfc4a-0fe2-45c9-b978-b320668afd82","Type":"ContainerStarted","Data":"69f4907a907813d1dffbe64c72b8e4f7502dc471c6efb3788cb3ddb01580733c"} Apr 23 18:00:07.166540 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:07.166394 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/kserve-controller-manager-6fc5d867c5-6spwc" Apr 23 18:00:07.191495 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:07.191446 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/kserve-controller-manager-6fc5d867c5-6spwc" podStartSLOduration=1.8290534090000001 podStartE2EDuration="2.191427677s" podCreationTimestamp="2026-04-23 18:00:05 +0000 UTC" firstStartedPulling="2026-04-23 18:00:06.398424597 +0000 UTC m=+470.186163835" lastFinishedPulling="2026-04-23 18:00:06.760798861 +0000 UTC m=+470.548538103" observedRunningTime="2026-04-23 18:00:07.184674431 +0000 UTC m=+470.972413694" watchObservedRunningTime="2026-04-23 18:00:07.191427677 +0000 UTC m=+470.979166939" Apr 23 18:00:38.174091 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:38.174047 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/kserve-controller-manager-6fc5d867c5-6spwc" Apr 23 18:00:39.026154 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:39.026116 2565 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/model-serving-api-86f7b4b499-mm6kd"] Apr 23 18:00:39.026638 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:39.026621 2565 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bf28899e-273f-4b30-86fc-e83a8d14fa05" containerName="manager" Apr 23 18:00:39.026731 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:39.026642 2565 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf28899e-273f-4b30-86fc-e83a8d14fa05" containerName="manager" Apr 23 18:00:39.026731 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:39.026713 2565 memory_manager.go:356] "RemoveStaleState removing state" podUID="bf28899e-273f-4b30-86fc-e83a8d14fa05" containerName="manager" Apr 23 18:00:39.029749 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:39.029726 2565 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/model-serving-api-86f7b4b499-mm6kd" Apr 23 18:00:39.033215 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:39.033188 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"model-serving-api-dockercfg-8qdsz\"" Apr 23 18:00:39.033378 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:39.033188 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"model-serving-api-tls\"" Apr 23 18:00:39.039172 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:39.038758 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/model-serving-api-86f7b4b499-mm6kd"] Apr 23 18:00:39.070401 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:39.070359 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/87bc6443-61bb-45e7-8e05-7e0d04f3ba76-tls-certs\") pod \"model-serving-api-86f7b4b499-mm6kd\" (UID: \"87bc6443-61bb-45e7-8e05-7e0d04f3ba76\") " pod="kserve/model-serving-api-86f7b4b499-mm6kd" Apr 23 18:00:39.070569 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:39.070415 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7spcj\" (UniqueName: \"kubernetes.io/projected/87bc6443-61bb-45e7-8e05-7e0d04f3ba76-kube-api-access-7spcj\") pod \"model-serving-api-86f7b4b499-mm6kd\" (UID: \"87bc6443-61bb-45e7-8e05-7e0d04f3ba76\") " pod="kserve/model-serving-api-86f7b4b499-mm6kd" Apr 23 18:00:39.171913 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:39.171873 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/87bc6443-61bb-45e7-8e05-7e0d04f3ba76-tls-certs\") pod \"model-serving-api-86f7b4b499-mm6kd\" (UID: \"87bc6443-61bb-45e7-8e05-7e0d04f3ba76\") " pod="kserve/model-serving-api-86f7b4b499-mm6kd" Apr 23 18:00:39.172113 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:39.171924 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7spcj\" (UniqueName: \"kubernetes.io/projected/87bc6443-61bb-45e7-8e05-7e0d04f3ba76-kube-api-access-7spcj\") pod \"model-serving-api-86f7b4b499-mm6kd\" (UID: \"87bc6443-61bb-45e7-8e05-7e0d04f3ba76\") " pod="kserve/model-serving-api-86f7b4b499-mm6kd" Apr 23 18:00:39.174323 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:39.174304 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/87bc6443-61bb-45e7-8e05-7e0d04f3ba76-tls-certs\") pod \"model-serving-api-86f7b4b499-mm6kd\" (UID: \"87bc6443-61bb-45e7-8e05-7e0d04f3ba76\") " pod="kserve/model-serving-api-86f7b4b499-mm6kd" Apr 23 18:00:39.182698 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:39.182677 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7spcj\" (UniqueName: \"kubernetes.io/projected/87bc6443-61bb-45e7-8e05-7e0d04f3ba76-kube-api-access-7spcj\") pod \"model-serving-api-86f7b4b499-mm6kd\" (UID: \"87bc6443-61bb-45e7-8e05-7e0d04f3ba76\") " pod="kserve/model-serving-api-86f7b4b499-mm6kd" Apr 23 18:00:39.343273 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:39.343180 2565 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/model-serving-api-86f7b4b499-mm6kd" Apr 23 18:00:39.465833 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:39.465809 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/model-serving-api-86f7b4b499-mm6kd"] Apr 23 18:00:39.468424 ip-10-0-131-177 kubenswrapper[2565]: W0423 18:00:39.468391 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87bc6443_61bb_45e7_8e05_7e0d04f3ba76.slice/crio-0c998389ae49c4959fe60ebf76b5474f8aa487af6afc4cd02cada5899f4e15e8 WatchSource:0}: Error finding container 0c998389ae49c4959fe60ebf76b5474f8aa487af6afc4cd02cada5899f4e15e8: Status 404 returned error can't find the container with id 0c998389ae49c4959fe60ebf76b5474f8aa487af6afc4cd02cada5899f4e15e8 Apr 23 18:00:40.267733 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:40.267686 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/model-serving-api-86f7b4b499-mm6kd" event={"ID":"87bc6443-61bb-45e7-8e05-7e0d04f3ba76","Type":"ContainerStarted","Data":"0c998389ae49c4959fe60ebf76b5474f8aa487af6afc4cd02cada5899f4e15e8"} Apr 23 18:00:41.272753 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:41.272710 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/model-serving-api-86f7b4b499-mm6kd" event={"ID":"87bc6443-61bb-45e7-8e05-7e0d04f3ba76","Type":"ContainerStarted","Data":"e97be7ccc381563f2e3398eecfd934df3173465143593637f3022de1ee80f707"} Apr 23 18:00:41.273156 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:41.272786 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/model-serving-api-86f7b4b499-mm6kd" Apr 23 18:00:41.290606 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:41.290548 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/model-serving-api-86f7b4b499-mm6kd" podStartSLOduration=1.100438794 podStartE2EDuration="2.290532814s" podCreationTimestamp="2026-04-23 18:00:39 +0000 UTC" firstStartedPulling="2026-04-23 18:00:39.470266575 +0000 UTC m=+503.258005814" lastFinishedPulling="2026-04-23 18:00:40.660360596 +0000 UTC m=+504.448099834" observedRunningTime="2026-04-23 18:00:41.289845817 +0000 UTC m=+505.077585090" watchObservedRunningTime="2026-04-23 18:00:41.290532814 +0000 UTC m=+505.078272068" Apr 23 18:00:52.279904 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:00:52.279867 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/model-serving-api-86f7b4b499-mm6kd" Apr 23 18:01:15.744484 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:15.744400 2565 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k"] Apr 23 18:01:15.747638 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:15.747615 2565 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k" Apr 23 18:01:15.750000 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:15.749971 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"success-200-isvc-e7d75-kube-rbac-proxy-sar-config\"" Apr 23 18:01:15.750541 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:15.750517 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"kube-root-ca.crt\"" Apr 23 18:01:15.751179 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:15.751153 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"openshift-service-ca.crt\"" Apr 23 18:01:15.751304 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:15.751153 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-jv9tx\"" Apr 23 18:01:15.751304 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:15.751184 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"success-200-isvc-e7d75-predictor-serving-cert\"" Apr 23 18:01:15.761509 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:15.761482 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k"] Apr 23 18:01:15.797387 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:15.797353 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5bwc\" (UniqueName: \"kubernetes.io/projected/77f89529-93b4-4590-bf68-4bc1e4d717bd-kube-api-access-z5bwc\") pod \"success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k\" (UID: \"77f89529-93b4-4590-bf68-4bc1e4d717bd\") " pod="kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k" Apr 23 18:01:15.797583 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:15.797408 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"success-200-isvc-e7d75-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/77f89529-93b4-4590-bf68-4bc1e4d717bd-success-200-isvc-e7d75-kube-rbac-proxy-sar-config\") pod \"success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k\" (UID: \"77f89529-93b4-4590-bf68-4bc1e4d717bd\") " pod="kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k" Apr 23 18:01:15.797583 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:15.797516 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/77f89529-93b4-4590-bf68-4bc1e4d717bd-proxy-tls\") pod \"success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k\" (UID: \"77f89529-93b4-4590-bf68-4bc1e4d717bd\") " pod="kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k" Apr 23 18:01:15.898797 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:15.898765 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/77f89529-93b4-4590-bf68-4bc1e4d717bd-proxy-tls\") pod \"success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k\" (UID: \"77f89529-93b4-4590-bf68-4bc1e4d717bd\") " pod="kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k" Apr 23 18:01:15.898998 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:15.898816 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-z5bwc\" (UniqueName: \"kubernetes.io/projected/77f89529-93b4-4590-bf68-4bc1e4d717bd-kube-api-access-z5bwc\") pod \"success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k\" (UID: \"77f89529-93b4-4590-bf68-4bc1e4d717bd\") " pod="kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k" Apr 23 18:01:15.898998 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:15.898864 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"success-200-isvc-e7d75-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/77f89529-93b4-4590-bf68-4bc1e4d717bd-success-200-isvc-e7d75-kube-rbac-proxy-sar-config\") pod \"success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k\" (UID: \"77f89529-93b4-4590-bf68-4bc1e4d717bd\") " pod="kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k" Apr 23 18:01:15.899620 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:15.899596 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"success-200-isvc-e7d75-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/77f89529-93b4-4590-bf68-4bc1e4d717bd-success-200-isvc-e7d75-kube-rbac-proxy-sar-config\") pod \"success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k\" (UID: \"77f89529-93b4-4590-bf68-4bc1e4d717bd\") " pod="kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k" Apr 23 18:01:15.901295 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:15.901269 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/77f89529-93b4-4590-bf68-4bc1e4d717bd-proxy-tls\") pod \"success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k\" (UID: \"77f89529-93b4-4590-bf68-4bc1e4d717bd\") " pod="kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k" Apr 23 18:01:15.908748 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:15.908691 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5bwc\" (UniqueName: \"kubernetes.io/projected/77f89529-93b4-4590-bf68-4bc1e4d717bd-kube-api-access-z5bwc\") pod \"success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k\" (UID: \"77f89529-93b4-4590-bf68-4bc1e4d717bd\") " pod="kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k" Apr 23 18:01:16.058605 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:16.058519 2565 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k" Apr 23 18:01:16.180904 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:16.180872 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k"] Apr 23 18:01:16.183527 ip-10-0-131-177 kubenswrapper[2565]: W0423 18:01:16.183493 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77f89529_93b4_4590_bf68_4bc1e4d717bd.slice/crio-1a179ebfd26d6a036112d5569837de3d3595ffdb62b5cad91b3b2bbdb8c59b08 WatchSource:0}: Error finding container 1a179ebfd26d6a036112d5569837de3d3595ffdb62b5cad91b3b2bbdb8c59b08: Status 404 returned error can't find the container with id 1a179ebfd26d6a036112d5569837de3d3595ffdb62b5cad91b3b2bbdb8c59b08 Apr 23 18:01:16.381315 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:16.381222 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k" event={"ID":"77f89529-93b4-4590-bf68-4bc1e4d717bd","Type":"ContainerStarted","Data":"1a179ebfd26d6a036112d5569837de3d3595ffdb62b5cad91b3b2bbdb8c59b08"} Apr 23 18:01:16.729577 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:16.729439 2565 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92"] Apr 23 18:01:16.734330 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:16.734299 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" Apr 23 18:01:16.737762 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:16.737266 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"isvc-sklearn-graph-2-predictor-serving-cert\"" Apr 23 18:01:16.737762 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:16.737592 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"isvc-sklearn-graph-2-kube-rbac-proxy-sar-config\"" Apr 23 18:01:16.744302 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:16.744265 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92"] Apr 23 18:01:16.805184 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:16.805098 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h86px\" (UniqueName: \"kubernetes.io/projected/96fb8faf-c4f3-4431-a460-c27f10b43e34-kube-api-access-h86px\") pod \"isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92\" (UID: \"96fb8faf-c4f3-4431-a460-c27f10b43e34\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" Apr 23 18:01:16.805184 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:16.805145 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/96fb8faf-c4f3-4431-a460-c27f10b43e34-kserve-provision-location\") pod \"isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92\" (UID: \"96fb8faf-c4f3-4431-a460-c27f10b43e34\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" Apr 23 18:01:16.805804 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:16.805236 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/96fb8faf-c4f3-4431-a460-c27f10b43e34-proxy-tls\") pod \"isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92\" (UID: \"96fb8faf-c4f3-4431-a460-c27f10b43e34\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" Apr 23 18:01:16.805804 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:16.805296 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"isvc-sklearn-graph-2-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/96fb8faf-c4f3-4431-a460-c27f10b43e34-isvc-sklearn-graph-2-kube-rbac-proxy-sar-config\") pod \"isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92\" (UID: \"96fb8faf-c4f3-4431-a460-c27f10b43e34\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" Apr 23 18:01:16.906772 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:16.906730 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h86px\" (UniqueName: \"kubernetes.io/projected/96fb8faf-c4f3-4431-a460-c27f10b43e34-kube-api-access-h86px\") pod \"isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92\" (UID: \"96fb8faf-c4f3-4431-a460-c27f10b43e34\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" Apr 23 18:01:16.907036 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:16.906784 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/96fb8faf-c4f3-4431-a460-c27f10b43e34-kserve-provision-location\") pod \"isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92\" (UID: \"96fb8faf-c4f3-4431-a460-c27f10b43e34\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" Apr 23 18:01:16.907036 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:16.906839 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/96fb8faf-c4f3-4431-a460-c27f10b43e34-proxy-tls\") pod \"isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92\" (UID: \"96fb8faf-c4f3-4431-a460-c27f10b43e34\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" Apr 23 18:01:16.907036 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:16.906866 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"isvc-sklearn-graph-2-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/96fb8faf-c4f3-4431-a460-c27f10b43e34-isvc-sklearn-graph-2-kube-rbac-proxy-sar-config\") pod \"isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92\" (UID: \"96fb8faf-c4f3-4431-a460-c27f10b43e34\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" Apr 23 18:01:16.907735 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:16.907621 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/96fb8faf-c4f3-4431-a460-c27f10b43e34-kserve-provision-location\") pod \"isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92\" (UID: \"96fb8faf-c4f3-4431-a460-c27f10b43e34\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" Apr 23 18:01:16.909745 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:16.909720 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"isvc-sklearn-graph-2-kube-rbac-proxy-sar-config\"" Apr 23 18:01:16.909917 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:16.909892 2565 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"kserve-ci-e2e-test\"/\"isvc-sklearn-graph-2-predictor-serving-cert\"" Apr 23 18:01:16.918650 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:16.918617 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"isvc-sklearn-graph-2-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/96fb8faf-c4f3-4431-a460-c27f10b43e34-isvc-sklearn-graph-2-kube-rbac-proxy-sar-config\") pod \"isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92\" (UID: \"96fb8faf-c4f3-4431-a460-c27f10b43e34\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" Apr 23 18:01:16.918787 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:16.918634 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h86px\" (UniqueName: \"kubernetes.io/projected/96fb8faf-c4f3-4431-a460-c27f10b43e34-kube-api-access-h86px\") pod \"isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92\" (UID: \"96fb8faf-c4f3-4431-a460-c27f10b43e34\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" Apr 23 18:01:16.921349 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:16.921299 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/96fb8faf-c4f3-4431-a460-c27f10b43e34-proxy-tls\") pod \"isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92\" (UID: \"96fb8faf-c4f3-4431-a460-c27f10b43e34\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" Apr 23 18:01:17.058314 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:17.058273 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" Apr 23 18:01:17.272704 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:17.272666 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92"] Apr 23 18:01:17.274519 ip-10-0-131-177 kubenswrapper[2565]: W0423 18:01:17.274479 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96fb8faf_c4f3_4431_a460_c27f10b43e34.slice/crio-6957f761a92ff725667ae14e7a3ae40a660c86fd58985a65a7f1e6f2f4d62634 WatchSource:0}: Error finding container 6957f761a92ff725667ae14e7a3ae40a660c86fd58985a65a7f1e6f2f4d62634: Status 404 returned error can't find the container with id 6957f761a92ff725667ae14e7a3ae40a660c86fd58985a65a7f1e6f2f4d62634 Apr 23 18:01:17.391595 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:17.388851 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" event={"ID":"96fb8faf-c4f3-4431-a460-c27f10b43e34","Type":"ContainerStarted","Data":"6957f761a92ff725667ae14e7a3ae40a660c86fd58985a65a7f1e6f2f4d62634"} Apr 23 18:01:29.443257 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:29.443154 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k" event={"ID":"77f89529-93b4-4590-bf68-4bc1e4d717bd","Type":"ContainerStarted","Data":"6e340c623d79e12fec08cef61c68efd341eacbb164207229bb95c0ced4c90ff9"} Apr 23 18:01:29.444585 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:29.444554 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" 
event={"ID":"96fb8faf-c4f3-4431-a460-c27f10b43e34","Type":"ContainerStarted","Data":"4761ceb5a68e5db9d7ee0c2ee40cd7101848a78f6ca54eb18f2180d0d76ecc56"} Apr 23 18:01:32.456611 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:32.456567 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k" event={"ID":"77f89529-93b4-4590-bf68-4bc1e4d717bd","Type":"ContainerStarted","Data":"e408207da5620e9ba867e2815723d7171472dfcf2e58286334c65213b34b519d"} Apr 23 18:01:32.457101 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:32.456737 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k" Apr 23 18:01:32.476017 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:32.475944 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k" podStartSLOduration=2.081151139 podStartE2EDuration="17.475927814s" podCreationTimestamp="2026-04-23 18:01:15 +0000 UTC" firstStartedPulling="2026-04-23 18:01:16.185369936 +0000 UTC m=+539.973109174" lastFinishedPulling="2026-04-23 18:01:31.5801466 +0000 UTC m=+555.367885849" observedRunningTime="2026-04-23 18:01:32.473404477 +0000 UTC m=+556.261143737" watchObservedRunningTime="2026-04-23 18:01:32.475927814 +0000 UTC m=+556.263667074" Apr 23 18:01:33.461469 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:33.461431 2565 generic.go:358] "Generic (PLEG): container finished" podID="96fb8faf-c4f3-4431-a460-c27f10b43e34" containerID="4761ceb5a68e5db9d7ee0c2ee40cd7101848a78f6ca54eb18f2180d0d76ecc56" exitCode=0 Apr 23 18:01:33.461867 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:33.461505 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" event={"ID":"96fb8faf-c4f3-4431-a460-c27f10b43e34","Type":"ContainerDied","Data":"4761ceb5a68e5db9d7ee0c2ee40cd7101848a78f6ca54eb18f2180d0d76ecc56"} Apr 23 18:01:33.462009 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:33.461991 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k" Apr 23 18:01:33.463538 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:33.463498 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k" podUID="77f89529-93b4-4590-bf68-4bc1e4d717bd" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.22:8080: connect: connection refused" Apr 23 18:01:34.465733 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:34.465683 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k" podUID="77f89529-93b4-4590-bf68-4bc1e4d717bd" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.22:8080: connect: connection refused" Apr 23 18:01:39.471166 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:39.471130 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k" Apr 23 18:01:39.471712 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:39.471678 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k" podUID="77f89529-93b4-4590-bf68-4bc1e4d717bd" 
containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.22:8080: connect: connection refused" Apr 23 18:01:42.494154 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:42.494119 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" event={"ID":"96fb8faf-c4f3-4431-a460-c27f10b43e34","Type":"ContainerStarted","Data":"b3401f8388ea08e314010bf9a4adc4049f3dd438ae404ec9c8df32b7c7d1bec7"} Apr 23 18:01:42.494154 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:42.494162 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" event={"ID":"96fb8faf-c4f3-4431-a460-c27f10b43e34","Type":"ContainerStarted","Data":"a7c2aef09eeee1fefca2f23a8d5f04958bd6af44cc1988c56da1c43ab16c948f"} Apr 23 18:01:42.494583 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:42.494365 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" Apr 23 18:01:42.514291 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:42.514225 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" podStartSLOduration=2.000890339 podStartE2EDuration="26.514204935s" podCreationTimestamp="2026-04-23 18:01:16 +0000 UTC" firstStartedPulling="2026-04-23 18:01:17.278237261 +0000 UTC m=+541.065976514" lastFinishedPulling="2026-04-23 18:01:41.791551868 +0000 UTC m=+565.579291110" observedRunningTime="2026-04-23 18:01:42.513120936 +0000 UTC m=+566.300860209" watchObservedRunningTime="2026-04-23 18:01:42.514204935 +0000 UTC m=+566.301944198" Apr 23 18:01:43.497727 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:43.497692 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" Apr 23 18:01:43.499117 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:43.499086 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" podUID="96fb8faf-c4f3-4431-a460-c27f10b43e34" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.23:8080: connect: connection refused" Apr 23 18:01:44.500702 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:44.500660 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" podUID="96fb8faf-c4f3-4431-a460-c27f10b43e34" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.23:8080: connect: connection refused" Apr 23 18:01:49.471759 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:49.471721 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k" podUID="77f89529-93b4-4590-bf68-4bc1e4d717bd" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.22:8080: connect: connection refused" Apr 23 18:01:49.506544 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:49.506507 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" Apr 23 18:01:49.507140 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:49.507113 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" 
podUID="96fb8faf-c4f3-4431-a460-c27f10b43e34" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.23:8080: connect: connection refused" Apr 23 18:01:59.472478 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:59.472437 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k" podUID="77f89529-93b4-4590-bf68-4bc1e4d717bd" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.22:8080: connect: connection refused" Apr 23 18:01:59.507114 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:01:59.507074 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" podUID="96fb8faf-c4f3-4431-a460-c27f10b43e34" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.23:8080: connect: connection refused" Apr 23 18:02:09.472241 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:09.472201 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k" podUID="77f89529-93b4-4590-bf68-4bc1e4d717bd" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.22:8080: connect: connection refused" Apr 23 18:02:09.507661 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:09.507621 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" podUID="96fb8faf-c4f3-4431-a460-c27f10b43e34" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.23:8080: connect: connection refused" Apr 23 18:02:16.808754 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:16.808723 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_7e3c596a27faede1f97b6bb0972592f6/kube-rbac-proxy-crio/4.log" Apr 23 18:02:16.809262 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:16.809180 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_7e3c596a27faede1f97b6bb0972592f6/kube-rbac-proxy-crio/4.log" Apr 23 18:02:19.472107 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:19.472073 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k" Apr 23 18:02:19.507813 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:19.507769 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" podUID="96fb8faf-c4f3-4431-a460-c27f10b43e34" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.23:8080: connect: connection refused" Apr 23 18:02:29.507867 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:29.507774 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" podUID="96fb8faf-c4f3-4431-a460-c27f10b43e34" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.23:8080: connect: connection refused" Apr 23 18:02:39.507495 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:39.507456 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" podUID="96fb8faf-c4f3-4431-a460-c27f10b43e34" containerName="kserve-container" 
probeResult="failure" output="dial tcp 10.134.0.23:8080: connect: connection refused" Apr 23 18:02:45.828738 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:45.828687 2565 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k"] Apr 23 18:02:45.829225 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:45.829105 2565 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k" podUID="77f89529-93b4-4590-bf68-4bc1e4d717bd" containerName="kserve-container" containerID="cri-o://6e340c623d79e12fec08cef61c68efd341eacbb164207229bb95c0ced4c90ff9" gracePeriod=30 Apr 23 18:02:45.829225 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:45.829176 2565 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k" podUID="77f89529-93b4-4590-bf68-4bc1e4d717bd" containerName="kube-rbac-proxy" containerID="cri-o://e408207da5620e9ba867e2815723d7171472dfcf2e58286334c65213b34b519d" gracePeriod=30 Apr 23 18:02:45.912439 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:45.912408 2565 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w"] Apr 23 18:02:45.921895 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:45.921866 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w" Apr 23 18:02:45.924947 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:45.924920 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"success-200-isvc-3d086-kube-rbac-proxy-sar-config\"" Apr 23 18:02:45.925107 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:45.924920 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"success-200-isvc-3d086-predictor-serving-cert\"" Apr 23 18:02:45.925378 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:45.925332 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w"] Apr 23 18:02:46.002153 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:46.002127 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q94c9\" (UniqueName: \"kubernetes.io/projected/0a9cc4a6-864f-450c-a9b2-3a626d274d3b-kube-api-access-q94c9\") pod \"success-200-isvc-3d086-predictor-67ccc5b954-l2q2w\" (UID: \"0a9cc4a6-864f-450c-a9b2-3a626d274d3b\") " pod="kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w" Apr 23 18:02:46.002297 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:46.002173 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"success-200-isvc-3d086-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/0a9cc4a6-864f-450c-a9b2-3a626d274d3b-success-200-isvc-3d086-kube-rbac-proxy-sar-config\") pod \"success-200-isvc-3d086-predictor-67ccc5b954-l2q2w\" (UID: \"0a9cc4a6-864f-450c-a9b2-3a626d274d3b\") " pod="kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w" Apr 23 18:02:46.002297 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:46.002257 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/0a9cc4a6-864f-450c-a9b2-3a626d274d3b-proxy-tls\") pod \"success-200-isvc-3d086-predictor-67ccc5b954-l2q2w\" (UID: \"0a9cc4a6-864f-450c-a9b2-3a626d274d3b\") " pod="kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w" Apr 23 18:02:46.102913 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:46.102824 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q94c9\" (UniqueName: \"kubernetes.io/projected/0a9cc4a6-864f-450c-a9b2-3a626d274d3b-kube-api-access-q94c9\") pod \"success-200-isvc-3d086-predictor-67ccc5b954-l2q2w\" (UID: \"0a9cc4a6-864f-450c-a9b2-3a626d274d3b\") " pod="kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w" Apr 23 18:02:46.102913 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:46.102874 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"success-200-isvc-3d086-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/0a9cc4a6-864f-450c-a9b2-3a626d274d3b-success-200-isvc-3d086-kube-rbac-proxy-sar-config\") pod \"success-200-isvc-3d086-predictor-67ccc5b954-l2q2w\" (UID: \"0a9cc4a6-864f-450c-a9b2-3a626d274d3b\") " pod="kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w" Apr 23 18:02:46.103168 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:46.102997 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0a9cc4a6-864f-450c-a9b2-3a626d274d3b-proxy-tls\") pod \"success-200-isvc-3d086-predictor-67ccc5b954-l2q2w\" (UID: \"0a9cc4a6-864f-450c-a9b2-3a626d274d3b\") " pod="kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w" Apr 23 18:02:46.103517 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:46.103492 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"success-200-isvc-3d086-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/0a9cc4a6-864f-450c-a9b2-3a626d274d3b-success-200-isvc-3d086-kube-rbac-proxy-sar-config\") pod \"success-200-isvc-3d086-predictor-67ccc5b954-l2q2w\" (UID: \"0a9cc4a6-864f-450c-a9b2-3a626d274d3b\") " pod="kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w" Apr 23 18:02:46.105558 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:46.105535 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0a9cc4a6-864f-450c-a9b2-3a626d274d3b-proxy-tls\") pod \"success-200-isvc-3d086-predictor-67ccc5b954-l2q2w\" (UID: \"0a9cc4a6-864f-450c-a9b2-3a626d274d3b\") " pod="kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w" Apr 23 18:02:46.111838 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:46.111814 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q94c9\" (UniqueName: \"kubernetes.io/projected/0a9cc4a6-864f-450c-a9b2-3a626d274d3b-kube-api-access-q94c9\") pod \"success-200-isvc-3d086-predictor-67ccc5b954-l2q2w\" (UID: \"0a9cc4a6-864f-450c-a9b2-3a626d274d3b\") " pod="kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w" Apr 23 18:02:46.235718 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:46.235687 2565 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w" Apr 23 18:02:46.355872 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:46.355848 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w"] Apr 23 18:02:46.357613 ip-10-0-131-177 kubenswrapper[2565]: W0423 18:02:46.357582 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a9cc4a6_864f_450c_a9b2_3a626d274d3b.slice/crio-4fe5fbcf3937d1e335be48aec9cae64408d8f82cd992ace73d9e4764b79cc0aa WatchSource:0}: Error finding container 4fe5fbcf3937d1e335be48aec9cae64408d8f82cd992ace73d9e4764b79cc0aa: Status 404 returned error can't find the container with id 4fe5fbcf3937d1e335be48aec9cae64408d8f82cd992ace73d9e4764b79cc0aa Apr 23 18:02:46.359331 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:46.359314 2565 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 23 18:02:46.697453 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:46.697335 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w" event={"ID":"0a9cc4a6-864f-450c-a9b2-3a626d274d3b","Type":"ContainerStarted","Data":"f86fde03d22e5fe2e73f281b102a019f59448d47634397c399aa910b645e39b4"} Apr 23 18:02:46.697453 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:46.697397 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w" event={"ID":"0a9cc4a6-864f-450c-a9b2-3a626d274d3b","Type":"ContainerStarted","Data":"7cf1a98933ba1a3fb13a9ad835db376453d743cd633f767e11cc6ee2018a6bfc"} Apr 23 18:02:46.697453 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:46.697414 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w" event={"ID":"0a9cc4a6-864f-450c-a9b2-3a626d274d3b","Type":"ContainerStarted","Data":"4fe5fbcf3937d1e335be48aec9cae64408d8f82cd992ace73d9e4764b79cc0aa"} Apr 23 18:02:46.697715 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:46.697460 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w" Apr 23 18:02:46.698865 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:46.698843 2565 generic.go:358] "Generic (PLEG): container finished" podID="77f89529-93b4-4590-bf68-4bc1e4d717bd" containerID="e408207da5620e9ba867e2815723d7171472dfcf2e58286334c65213b34b519d" exitCode=2 Apr 23 18:02:46.698991 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:46.698902 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k" event={"ID":"77f89529-93b4-4590-bf68-4bc1e4d717bd","Type":"ContainerDied","Data":"e408207da5620e9ba867e2815723d7171472dfcf2e58286334c65213b34b519d"} Apr 23 18:02:46.717083 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:46.717038 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w" podStartSLOduration=1.7170275419999999 podStartE2EDuration="1.717027542s" podCreationTimestamp="2026-04-23 18:02:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:02:46.71503546 +0000 UTC m=+630.502774722" 
watchObservedRunningTime="2026-04-23 18:02:46.717027542 +0000 UTC m=+630.504766835"
Apr 23 18:02:47.701760 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:47.701723 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w"
Apr 23 18:02:47.703136 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:47.703101 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w" podUID="0a9cc4a6-864f-450c-a9b2-3a626d274d3b" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.24:8080: connect: connection refused"
Apr 23 18:02:48.704852 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:48.704812 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w" podUID="0a9cc4a6-864f-450c-a9b2-3a626d274d3b" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.24:8080: connect: connection refused"
Apr 23 18:02:49.072476 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:49.072448 2565 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k"
Apr 23 18:02:49.122150 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:49.122115 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5bwc\" (UniqueName: \"kubernetes.io/projected/77f89529-93b4-4590-bf68-4bc1e4d717bd-kube-api-access-z5bwc\") pod \"77f89529-93b4-4590-bf68-4bc1e4d717bd\" (UID: \"77f89529-93b4-4590-bf68-4bc1e4d717bd\") "
Apr 23 18:02:49.122315 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:49.122173 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/77f89529-93b4-4590-bf68-4bc1e4d717bd-proxy-tls\") pod \"77f89529-93b4-4590-bf68-4bc1e4d717bd\" (UID: \"77f89529-93b4-4590-bf68-4bc1e4d717bd\") "
Apr 23 18:02:49.122315 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:49.122257 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"success-200-isvc-e7d75-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/77f89529-93b4-4590-bf68-4bc1e4d717bd-success-200-isvc-e7d75-kube-rbac-proxy-sar-config\") pod \"77f89529-93b4-4590-bf68-4bc1e4d717bd\" (UID: \"77f89529-93b4-4590-bf68-4bc1e4d717bd\") "
Apr 23 18:02:49.122634 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:49.122604 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77f89529-93b4-4590-bf68-4bc1e4d717bd-success-200-isvc-e7d75-kube-rbac-proxy-sar-config" (OuterVolumeSpecName: "success-200-isvc-e7d75-kube-rbac-proxy-sar-config") pod "77f89529-93b4-4590-bf68-4bc1e4d717bd" (UID: "77f89529-93b4-4590-bf68-4bc1e4d717bd"). InnerVolumeSpecName "success-200-isvc-e7d75-kube-rbac-proxy-sar-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 18:02:49.124383 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:49.124359 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77f89529-93b4-4590-bf68-4bc1e4d717bd-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "77f89529-93b4-4590-bf68-4bc1e4d717bd" (UID: "77f89529-93b4-4590-bf68-4bc1e4d717bd"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 18:02:49.124447 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:49.124409 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77f89529-93b4-4590-bf68-4bc1e4d717bd-kube-api-access-z5bwc" (OuterVolumeSpecName: "kube-api-access-z5bwc") pod "77f89529-93b4-4590-bf68-4bc1e4d717bd" (UID: "77f89529-93b4-4590-bf68-4bc1e4d717bd"). InnerVolumeSpecName "kube-api-access-z5bwc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 23 18:02:49.223310 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:49.223276 2565 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5bwc\" (UniqueName: \"kubernetes.io/projected/77f89529-93b4-4590-bf68-4bc1e4d717bd-kube-api-access-z5bwc\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 18:02:49.223310 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:49.223306 2565 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/77f89529-93b4-4590-bf68-4bc1e4d717bd-proxy-tls\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 18:02:49.223310 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:49.223317 2565 reconciler_common.go:299] "Volume detached for volume \"success-200-isvc-e7d75-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/77f89529-93b4-4590-bf68-4bc1e4d717bd-success-200-isvc-e7d75-kube-rbac-proxy-sar-config\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 18:02:49.508668 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:49.508643 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92"
Apr 23 18:02:49.714084 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:49.714049 2565 generic.go:358] "Generic (PLEG): container finished" podID="77f89529-93b4-4590-bf68-4bc1e4d717bd" containerID="6e340c623d79e12fec08cef61c68efd341eacbb164207229bb95c0ced4c90ff9" exitCode=0
Apr 23 18:02:49.714574 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:49.714129 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k" event={"ID":"77f89529-93b4-4590-bf68-4bc1e4d717bd","Type":"ContainerDied","Data":"6e340c623d79e12fec08cef61c68efd341eacbb164207229bb95c0ced4c90ff9"}
Apr 23 18:02:49.714574 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:49.714153 2565 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k"
Apr 23 18:02:49.714574 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:49.714168 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k" event={"ID":"77f89529-93b4-4590-bf68-4bc1e4d717bd","Type":"ContainerDied","Data":"1a179ebfd26d6a036112d5569837de3d3595ffdb62b5cad91b3b2bbdb8c59b08"}
Apr 23 18:02:49.714574 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:49.714189 2565 scope.go:117] "RemoveContainer" containerID="e408207da5620e9ba867e2815723d7171472dfcf2e58286334c65213b34b519d"
Apr 23 18:02:49.722239 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:49.722135 2565 scope.go:117] "RemoveContainer" containerID="6e340c623d79e12fec08cef61c68efd341eacbb164207229bb95c0ced4c90ff9"
Apr 23 18:02:49.731614 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:49.731596 2565 scope.go:117] "RemoveContainer" containerID="e408207da5620e9ba867e2815723d7171472dfcf2e58286334c65213b34b519d"
Apr 23 18:02:49.731851 ip-10-0-131-177 kubenswrapper[2565]: E0423 18:02:49.731830 2565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e408207da5620e9ba867e2815723d7171472dfcf2e58286334c65213b34b519d\": container with ID starting with e408207da5620e9ba867e2815723d7171472dfcf2e58286334c65213b34b519d not found: ID does not exist" containerID="e408207da5620e9ba867e2815723d7171472dfcf2e58286334c65213b34b519d"
Apr 23 18:02:49.731897 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:49.731859 2565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e408207da5620e9ba867e2815723d7171472dfcf2e58286334c65213b34b519d"} err="failed to get container status \"e408207da5620e9ba867e2815723d7171472dfcf2e58286334c65213b34b519d\": rpc error: code = NotFound desc = could not find container \"e408207da5620e9ba867e2815723d7171472dfcf2e58286334c65213b34b519d\": container with ID starting with e408207da5620e9ba867e2815723d7171472dfcf2e58286334c65213b34b519d not found: ID does not exist"
Apr 23 18:02:49.731897 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:49.731876 2565 scope.go:117] "RemoveContainer" containerID="6e340c623d79e12fec08cef61c68efd341eacbb164207229bb95c0ced4c90ff9"
Apr 23 18:02:49.732215 ip-10-0-131-177 kubenswrapper[2565]: E0423 18:02:49.732195 2565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e340c623d79e12fec08cef61c68efd341eacbb164207229bb95c0ced4c90ff9\": container with ID starting with 6e340c623d79e12fec08cef61c68efd341eacbb164207229bb95c0ced4c90ff9 not found: ID does not exist" containerID="6e340c623d79e12fec08cef61c68efd341eacbb164207229bb95c0ced4c90ff9"
Apr 23 18:02:49.732289 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:49.732220 2565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e340c623d79e12fec08cef61c68efd341eacbb164207229bb95c0ced4c90ff9"} err="failed to get container status \"6e340c623d79e12fec08cef61c68efd341eacbb164207229bb95c0ced4c90ff9\": rpc error: code = NotFound desc = could not find container \"6e340c623d79e12fec08cef61c68efd341eacbb164207229bb95c0ced4c90ff9\": container with ID starting with 6e340c623d79e12fec08cef61c68efd341eacbb164207229bb95c0ced4c90ff9 not found: ID does not exist"
Apr 23 18:02:49.736867 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:49.736848 2565 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k"]
Apr 23 18:02:49.740312 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:49.740291 2565 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-e7d75-predictor-7dcb967d8d-2js4k"]
Apr 23 18:02:50.911365 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:50.911327 2565 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77f89529-93b4-4590-bf68-4bc1e4d717bd" path="/var/lib/kubelet/pods/77f89529-93b4-4590-bf68-4bc1e4d717bd/volumes"
Apr 23 18:02:53.709703 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:53.709676 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w"
Apr 23 18:02:53.710251 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:02:53.710226 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w" podUID="0a9cc4a6-864f-450c-a9b2-3a626d274d3b" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.24:8080: connect: connection refused"
Apr 23 18:03:03.710822 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:03.710781 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w" podUID="0a9cc4a6-864f-450c-a9b2-3a626d274d3b" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.24:8080: connect: connection refused"
Apr 23 18:03:13.710215 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:13.710169 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w" podUID="0a9cc4a6-864f-450c-a9b2-3a626d274d3b" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.24:8080: connect: connection refused"
Apr 23 18:03:23.710196 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:23.710157 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w" podUID="0a9cc4a6-864f-450c-a9b2-3a626d274d3b" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.24:8080: connect: connection refused"
Apr 23 18:03:25.801545 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:25.801507 2565 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92"]
Apr 23 18:03:25.802668 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:25.802609 2565 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" podUID="96fb8faf-c4f3-4431-a460-c27f10b43e34" containerName="kserve-container" containerID="cri-o://a7c2aef09eeee1fefca2f23a8d5f04958bd6af44cc1988c56da1c43ab16c948f" gracePeriod=30
Apr 23 18:03:25.803785 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:25.803193 2565 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" podUID="96fb8faf-c4f3-4431-a460-c27f10b43e34" containerName="kube-rbac-proxy" containerID="cri-o://b3401f8388ea08e314010bf9a4adc4049f3dd438ae404ec9c8df32b7c7d1bec7" gracePeriod=30
Apr 23 18:03:25.852109 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:25.852070 2565 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls"]
Apr 23 18:03:25.852564 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:25.852545 2565 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="77f89529-93b4-4590-bf68-4bc1e4d717bd" containerName="kserve-container"
Apr 23 18:03:25.852564 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:25.852565 2565 state_mem.go:107] "Deleted CPUSet assignment" podUID="77f89529-93b4-4590-bf68-4bc1e4d717bd" containerName="kserve-container"
Apr 23 18:03:25.852747 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:25.852587 2565 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="77f89529-93b4-4590-bf68-4bc1e4d717bd" containerName="kube-rbac-proxy"
Apr 23 18:03:25.852747 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:25.852596 2565 state_mem.go:107] "Deleted CPUSet assignment" podUID="77f89529-93b4-4590-bf68-4bc1e4d717bd" containerName="kube-rbac-proxy"
Apr 23 18:03:25.852747 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:25.852673 2565 memory_manager.go:356] "RemoveStaleState removing state" podUID="77f89529-93b4-4590-bf68-4bc1e4d717bd" containerName="kserve-container"
Apr 23 18:03:25.852747 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:25.852689 2565 memory_manager.go:356] "RemoveStaleState removing state" podUID="77f89529-93b4-4590-bf68-4bc1e4d717bd" containerName="kube-rbac-proxy"
Apr 23 18:03:25.857357 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:25.857336 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls"
Apr 23 18:03:25.860584 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:25.860535 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"success-200-isvc-0065f-predictor-serving-cert\""
Apr 23 18:03:25.860721 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:25.860589 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"success-200-isvc-0065f-kube-rbac-proxy-sar-config\""
Apr 23 18:03:25.868688 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:25.868664 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls"]
Apr 23 18:03:25.937984 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:25.937924 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"success-200-isvc-0065f-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/59ee2c69-d1b8-4c6f-a69b-20e2cf2de714-success-200-isvc-0065f-kube-rbac-proxy-sar-config\") pod \"success-200-isvc-0065f-predictor-76669b4f4-cd5ls\" (UID: \"59ee2c69-d1b8-4c6f-a69b-20e2cf2de714\") " pod="kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls"
Apr 23 18:03:25.938185 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:25.937990 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbmjj\" (UniqueName: \"kubernetes.io/projected/59ee2c69-d1b8-4c6f-a69b-20e2cf2de714-kube-api-access-zbmjj\") pod \"success-200-isvc-0065f-predictor-76669b4f4-cd5ls\" (UID: \"59ee2c69-d1b8-4c6f-a69b-20e2cf2de714\") " pod="kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls"
Apr 23 18:03:25.938185 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:25.938035 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/59ee2c69-d1b8-4c6f-a69b-20e2cf2de714-proxy-tls\") pod \"success-200-isvc-0065f-predictor-76669b4f4-cd5ls\" (UID: \"59ee2c69-d1b8-4c6f-a69b-20e2cf2de714\") " pod="kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls"
Apr 23 18:03:26.039182 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:26.039142 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/59ee2c69-d1b8-4c6f-a69b-20e2cf2de714-proxy-tls\") pod \"success-200-isvc-0065f-predictor-76669b4f4-cd5ls\" (UID: \"59ee2c69-d1b8-4c6f-a69b-20e2cf2de714\") " pod="kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls"
Apr 23 18:03:26.039351 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:26.039261 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"success-200-isvc-0065f-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/59ee2c69-d1b8-4c6f-a69b-20e2cf2de714-success-200-isvc-0065f-kube-rbac-proxy-sar-config\") pod \"success-200-isvc-0065f-predictor-76669b4f4-cd5ls\" (UID: \"59ee2c69-d1b8-4c6f-a69b-20e2cf2de714\") " pod="kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls"
Apr 23 18:03:26.039351 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:26.039290 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zbmjj\" (UniqueName: \"kubernetes.io/projected/59ee2c69-d1b8-4c6f-a69b-20e2cf2de714-kube-api-access-zbmjj\") pod \"success-200-isvc-0065f-predictor-76669b4f4-cd5ls\" (UID: \"59ee2c69-d1b8-4c6f-a69b-20e2cf2de714\") " pod="kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls"
Apr 23 18:03:26.039351 ip-10-0-131-177 kubenswrapper[2565]: E0423 18:03:26.039314 2565 secret.go:189] Couldn't get secret kserve-ci-e2e-test/success-200-isvc-0065f-predictor-serving-cert: secret "success-200-isvc-0065f-predictor-serving-cert" not found
Apr 23 18:03:26.039495 ip-10-0-131-177 kubenswrapper[2565]: E0423 18:03:26.039388 2565 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59ee2c69-d1b8-4c6f-a69b-20e2cf2de714-proxy-tls podName:59ee2c69-d1b8-4c6f-a69b-20e2cf2de714 nodeName:}" failed. No retries permitted until 2026-04-23 18:03:26.53936986 +0000 UTC m=+670.327109103 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/59ee2c69-d1b8-4c6f-a69b-20e2cf2de714-proxy-tls") pod "success-200-isvc-0065f-predictor-76669b4f4-cd5ls" (UID: "59ee2c69-d1b8-4c6f-a69b-20e2cf2de714") : secret "success-200-isvc-0065f-predictor-serving-cert" not found
Apr 23 18:03:26.040017 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:26.039996 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"success-200-isvc-0065f-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/59ee2c69-d1b8-4c6f-a69b-20e2cf2de714-success-200-isvc-0065f-kube-rbac-proxy-sar-config\") pod \"success-200-isvc-0065f-predictor-76669b4f4-cd5ls\" (UID: \"59ee2c69-d1b8-4c6f-a69b-20e2cf2de714\") " pod="kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls"
Apr 23 18:03:26.048418 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:26.048380 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbmjj\" (UniqueName: \"kubernetes.io/projected/59ee2c69-d1b8-4c6f-a69b-20e2cf2de714-kube-api-access-zbmjj\") pod \"success-200-isvc-0065f-predictor-76669b4f4-cd5ls\" (UID: \"59ee2c69-d1b8-4c6f-a69b-20e2cf2de714\") " pod="kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls"
Apr 23 18:03:26.544107 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:26.544063 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/59ee2c69-d1b8-4c6f-a69b-20e2cf2de714-proxy-tls\") pod \"success-200-isvc-0065f-predictor-76669b4f4-cd5ls\" (UID: \"59ee2c69-d1b8-4c6f-a69b-20e2cf2de714\") " pod="kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls"
Apr 23 18:03:26.546499 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:26.546457 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/59ee2c69-d1b8-4c6f-a69b-20e2cf2de714-proxy-tls\") pod \"success-200-isvc-0065f-predictor-76669b4f4-cd5ls\" (UID: \"59ee2c69-d1b8-4c6f-a69b-20e2cf2de714\") " pod="kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls"
Apr 23 18:03:26.768171 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:26.768130 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls"
Apr 23 18:03:26.832407 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:26.832308 2565 generic.go:358] "Generic (PLEG): container finished" podID="96fb8faf-c4f3-4431-a460-c27f10b43e34" containerID="b3401f8388ea08e314010bf9a4adc4049f3dd438ae404ec9c8df32b7c7d1bec7" exitCode=2
Apr 23 18:03:26.832407 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:26.832360 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" event={"ID":"96fb8faf-c4f3-4431-a460-c27f10b43e34","Type":"ContainerDied","Data":"b3401f8388ea08e314010bf9a4adc4049f3dd438ae404ec9c8df32b7c7d1bec7"}
Apr 23 18:03:26.893259 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:26.893235 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls"]
Apr 23 18:03:26.895411 ip-10-0-131-177 kubenswrapper[2565]: W0423 18:03:26.895379 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59ee2c69_d1b8_4c6f_a69b_20e2cf2de714.slice/crio-b1ec26b8e87fd109d7c61ff0741f084dff5479f9a9838fc57f5c16a629ea7193 WatchSource:0}: Error finding container b1ec26b8e87fd109d7c61ff0741f084dff5479f9a9838fc57f5c16a629ea7193: Status 404 returned error can't find the container with id b1ec26b8e87fd109d7c61ff0741f084dff5479f9a9838fc57f5c16a629ea7193
Apr 23 18:03:27.838025 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:27.837985 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls" event={"ID":"59ee2c69-d1b8-4c6f-a69b-20e2cf2de714","Type":"ContainerStarted","Data":"aa44ab94939ade8e652de9639bfc4f1df5072e77852a32aaf27dd24b75685b70"}
Apr 23 18:03:27.838025 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:27.838029 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls" event={"ID":"59ee2c69-d1b8-4c6f-a69b-20e2cf2de714","Type":"ContainerStarted","Data":"185e5d4af2adca15f92dc3886cfd53e191fdce6ee1acdb575b341b74071d48a8"}
Apr 23 18:03:27.838559 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:27.838041 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls" event={"ID":"59ee2c69-d1b8-4c6f-a69b-20e2cf2de714","Type":"ContainerStarted","Data":"b1ec26b8e87fd109d7c61ff0741f084dff5479f9a9838fc57f5c16a629ea7193"}
Apr 23 18:03:27.838559 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:27.838155 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls"
Apr 23 18:03:27.856540 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:27.856491 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls" podStartSLOduration=2.856476821 podStartE2EDuration="2.856476821s" podCreationTimestamp="2026-04-23 18:03:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:03:27.854647675 +0000 UTC m=+671.642386935" watchObservedRunningTime="2026-04-23 18:03:27.856476821 +0000 UTC m=+671.644216081"
Apr 23 18:03:28.841683 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:28.841646 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls"
Apr 23 18:03:28.843003 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:28.842972 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls" podUID="59ee2c69-d1b8-4c6f-a69b-20e2cf2de714" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.25:8080: connect: connection refused"
Apr 23 18:03:29.501135 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:29.501092 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" podUID="96fb8faf-c4f3-4431-a460-c27f10b43e34" containerName="kube-rbac-proxy" probeResult="failure" output="Get \"https://10.134.0.23:8643/healthz\": dial tcp 10.134.0.23:8643: connect: connection refused"
Apr 23 18:03:29.508014 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:29.507983 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" podUID="96fb8faf-c4f3-4431-a460-c27f10b43e34" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.23:8080: connect: connection refused"
Apr 23 18:03:29.845106 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:29.845023 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls" podUID="59ee2c69-d1b8-4c6f-a69b-20e2cf2de714" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.25:8080: connect: connection refused"
Apr 23 18:03:30.444572 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:30.444550 2565 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92"
Apr 23 18:03:30.580993 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:30.580921 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"isvc-sklearn-graph-2-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/96fb8faf-c4f3-4431-a460-c27f10b43e34-isvc-sklearn-graph-2-kube-rbac-proxy-sar-config\") pod \"96fb8faf-c4f3-4431-a460-c27f10b43e34\" (UID: \"96fb8faf-c4f3-4431-a460-c27f10b43e34\") "
Apr 23 18:03:30.580993 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:30.581003 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/96fb8faf-c4f3-4431-a460-c27f10b43e34-proxy-tls\") pod \"96fb8faf-c4f3-4431-a460-c27f10b43e34\" (UID: \"96fb8faf-c4f3-4431-a460-c27f10b43e34\") "
Apr 23 18:03:30.581264 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:30.581060 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h86px\" (UniqueName: \"kubernetes.io/projected/96fb8faf-c4f3-4431-a460-c27f10b43e34-kube-api-access-h86px\") pod \"96fb8faf-c4f3-4431-a460-c27f10b43e34\" (UID: \"96fb8faf-c4f3-4431-a460-c27f10b43e34\") "
Apr 23 18:03:30.581264 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:30.581092 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/96fb8faf-c4f3-4431-a460-c27f10b43e34-kserve-provision-location\") pod \"96fb8faf-c4f3-4431-a460-c27f10b43e34\" (UID: \"96fb8faf-c4f3-4431-a460-c27f10b43e34\") "
Apr 23 18:03:30.581425 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:30.581395 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96fb8faf-c4f3-4431-a460-c27f10b43e34-isvc-sklearn-graph-2-kube-rbac-proxy-sar-config" (OuterVolumeSpecName: "isvc-sklearn-graph-2-kube-rbac-proxy-sar-config") pod "96fb8faf-c4f3-4431-a460-c27f10b43e34" (UID: "96fb8faf-c4f3-4431-a460-c27f10b43e34"). InnerVolumeSpecName "isvc-sklearn-graph-2-kube-rbac-proxy-sar-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 18:03:30.581498 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:30.581476 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96fb8faf-c4f3-4431-a460-c27f10b43e34-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "96fb8faf-c4f3-4431-a460-c27f10b43e34" (UID: "96fb8faf-c4f3-4431-a460-c27f10b43e34"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Apr 23 18:03:30.583224 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:30.583197 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96fb8faf-c4f3-4431-a460-c27f10b43e34-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "96fb8faf-c4f3-4431-a460-c27f10b43e34" (UID: "96fb8faf-c4f3-4431-a460-c27f10b43e34"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 18:03:30.583298 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:30.583251 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96fb8faf-c4f3-4431-a460-c27f10b43e34-kube-api-access-h86px" (OuterVolumeSpecName: "kube-api-access-h86px") pod "96fb8faf-c4f3-4431-a460-c27f10b43e34" (UID: "96fb8faf-c4f3-4431-a460-c27f10b43e34"). InnerVolumeSpecName "kube-api-access-h86px". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 23 18:03:30.682507 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:30.682466 2565 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/96fb8faf-c4f3-4431-a460-c27f10b43e34-kserve-provision-location\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 18:03:30.682507 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:30.682504 2565 reconciler_common.go:299] "Volume detached for volume \"isvc-sklearn-graph-2-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/96fb8faf-c4f3-4431-a460-c27f10b43e34-isvc-sklearn-graph-2-kube-rbac-proxy-sar-config\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 18:03:30.682507 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:30.682516 2565 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/96fb8faf-c4f3-4431-a460-c27f10b43e34-proxy-tls\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 18:03:30.682740 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:30.682527 2565 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h86px\" (UniqueName: \"kubernetes.io/projected/96fb8faf-c4f3-4431-a460-c27f10b43e34-kube-api-access-h86px\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 18:03:30.850520 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:30.850418 2565 generic.go:358] "Generic (PLEG): container finished" podID="96fb8faf-c4f3-4431-a460-c27f10b43e34" containerID="a7c2aef09eeee1fefca2f23a8d5f04958bd6af44cc1988c56da1c43ab16c948f" exitCode=0
Apr 23 18:03:30.850939 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:30.850509 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" event={"ID":"96fb8faf-c4f3-4431-a460-c27f10b43e34","Type":"ContainerDied","Data":"a7c2aef09eeee1fefca2f23a8d5f04958bd6af44cc1988c56da1c43ab16c948f"}
Apr 23 18:03:30.850939 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:30.850546 2565 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92"
Apr 23 18:03:30.850939 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:30.850564 2565 scope.go:117] "RemoveContainer" containerID="b3401f8388ea08e314010bf9a4adc4049f3dd438ae404ec9c8df32b7c7d1bec7"
Apr 23 18:03:30.850939 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:30.850554 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92" event={"ID":"96fb8faf-c4f3-4431-a460-c27f10b43e34","Type":"ContainerDied","Data":"6957f761a92ff725667ae14e7a3ae40a660c86fd58985a65a7f1e6f2f4d62634"}
Apr 23 18:03:30.858912 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:30.858894 2565 scope.go:117] "RemoveContainer" containerID="a7c2aef09eeee1fefca2f23a8d5f04958bd6af44cc1988c56da1c43ab16c948f"
Apr 23 18:03:30.865942 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:30.865927 2565 scope.go:117] "RemoveContainer" containerID="4761ceb5a68e5db9d7ee0c2ee40cd7101848a78f6ca54eb18f2180d0d76ecc56"
Apr 23 18:03:30.872608 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:30.872589 2565 scope.go:117] "RemoveContainer" containerID="b3401f8388ea08e314010bf9a4adc4049f3dd438ae404ec9c8df32b7c7d1bec7"
Apr 23 18:03:30.872867 ip-10-0-131-177 kubenswrapper[2565]: E0423 18:03:30.872847 2565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3401f8388ea08e314010bf9a4adc4049f3dd438ae404ec9c8df32b7c7d1bec7\": container with ID starting with b3401f8388ea08e314010bf9a4adc4049f3dd438ae404ec9c8df32b7c7d1bec7 not found: ID does not exist" containerID="b3401f8388ea08e314010bf9a4adc4049f3dd438ae404ec9c8df32b7c7d1bec7"
Apr 23 18:03:30.873025 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:30.872878 2565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3401f8388ea08e314010bf9a4adc4049f3dd438ae404ec9c8df32b7c7d1bec7"} err="failed to get container status \"b3401f8388ea08e314010bf9a4adc4049f3dd438ae404ec9c8df32b7c7d1bec7\": rpc error: code = NotFound desc = could not find container \"b3401f8388ea08e314010bf9a4adc4049f3dd438ae404ec9c8df32b7c7d1bec7\": container with ID starting with b3401f8388ea08e314010bf9a4adc4049f3dd438ae404ec9c8df32b7c7d1bec7 not found: ID does not exist"
Apr 23 18:03:30.873025 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:30.872905 2565 scope.go:117] "RemoveContainer" containerID="a7c2aef09eeee1fefca2f23a8d5f04958bd6af44cc1988c56da1c43ab16c948f"
Apr 23 18:03:30.873700 ip-10-0-131-177 kubenswrapper[2565]: E0423 18:03:30.873674 2565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7c2aef09eeee1fefca2f23a8d5f04958bd6af44cc1988c56da1c43ab16c948f\": container with ID starting with a7c2aef09eeee1fefca2f23a8d5f04958bd6af44cc1988c56da1c43ab16c948f not found: ID does not exist" containerID="a7c2aef09eeee1fefca2f23a8d5f04958bd6af44cc1988c56da1c43ab16c948f"
Apr 23 18:03:30.873775 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:30.873708 2565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7c2aef09eeee1fefca2f23a8d5f04958bd6af44cc1988c56da1c43ab16c948f"} err="failed to get container status \"a7c2aef09eeee1fefca2f23a8d5f04958bd6af44cc1988c56da1c43ab16c948f\": rpc error: code = NotFound desc = could not find container \"a7c2aef09eeee1fefca2f23a8d5f04958bd6af44cc1988c56da1c43ab16c948f\": container with ID starting with a7c2aef09eeee1fefca2f23a8d5f04958bd6af44cc1988c56da1c43ab16c948f not found: ID does not exist"
Apr 23 18:03:30.873775 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:30.873730 2565 scope.go:117] "RemoveContainer" containerID="4761ceb5a68e5db9d7ee0c2ee40cd7101848a78f6ca54eb18f2180d0d76ecc56"
Apr 23 18:03:30.873999 ip-10-0-131-177 kubenswrapper[2565]: E0423 18:03:30.873980 2565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4761ceb5a68e5db9d7ee0c2ee40cd7101848a78f6ca54eb18f2180d0d76ecc56\": container with ID starting with 4761ceb5a68e5db9d7ee0c2ee40cd7101848a78f6ca54eb18f2180d0d76ecc56 not found: ID does not exist" containerID="4761ceb5a68e5db9d7ee0c2ee40cd7101848a78f6ca54eb18f2180d0d76ecc56"
Apr 23 18:03:30.874065 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:30.874010 2565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4761ceb5a68e5db9d7ee0c2ee40cd7101848a78f6ca54eb18f2180d0d76ecc56"} err="failed to get container status \"4761ceb5a68e5db9d7ee0c2ee40cd7101848a78f6ca54eb18f2180d0d76ecc56\": rpc error: code = NotFound desc = could not find container \"4761ceb5a68e5db9d7ee0c2ee40cd7101848a78f6ca54eb18f2180d0d76ecc56\": container with ID starting with 4761ceb5a68e5db9d7ee0c2ee40cd7101848a78f6ca54eb18f2180d0d76ecc56 not found: ID does not exist"
Apr 23 18:03:30.875257 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:30.875239 2565 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92"]
Apr 23 18:03:30.878891 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:30.878867 2565 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-57f87c4ccb-wbq92"]
Apr 23 18:03:30.912040 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:30.912011 2565 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96fb8faf-c4f3-4431-a460-c27f10b43e34" path="/var/lib/kubelet/pods/96fb8faf-c4f3-4431-a460-c27f10b43e34/volumes"
Apr 23 18:03:33.710749 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:33.710719 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w"
Apr 23 18:03:34.849398 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:34.849371 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls"
Apr 23 18:03:34.849946 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:34.849921 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls" podUID="59ee2c69-d1b8-4c6f-a69b-20e2cf2de714" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.25:8080: connect: connection refused"
Apr 23 18:03:44.849918 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:44.849869 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls" podUID="59ee2c69-d1b8-4c6f-a69b-20e2cf2de714" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.25:8080: connect: connection refused"
Apr 23 18:03:54.850252 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:03:54.850208 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls" podUID="59ee2c69-d1b8-4c6f-a69b-20e2cf2de714" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.25:8080: connect: connection refused"
Apr 23 18:04:04.850157 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:04:04.850074 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls" podUID="59ee2c69-d1b8-4c6f-a69b-20e2cf2de714" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.25:8080: connect: connection refused"
Apr 23 18:04:14.850723 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:04:14.850692 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls"
Apr 23 18:06:28.733085 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:28.733049 2565 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-78fbd5ddc7-b64lb"]
Apr 23 18:06:53.751752 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:53.751706 2565 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-console/console-78fbd5ddc7-b64lb" podUID="6cdea478-ef91-4fff-8369-c5e45afbb677" containerName="console" containerID="cri-o://9d9c1693681ed61518eb73c3a35946eda873a5ba0b3dfae219f3758c671e44ac" gracePeriod=15
Apr 23 18:06:53.993617 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:53.993595 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-78fbd5ddc7-b64lb_6cdea478-ef91-4fff-8369-c5e45afbb677/console/0.log"
Apr 23 18:06:53.993733 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:53.993657 2565 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-78fbd5ddc7-b64lb"
Apr 23 18:06:54.069398 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:54.069301 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6cdea478-ef91-4fff-8369-c5e45afbb677-console-config\") pod \"6cdea478-ef91-4fff-8369-c5e45afbb677\" (UID: \"6cdea478-ef91-4fff-8369-c5e45afbb677\") "
Apr 23 18:06:54.069398 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:54.069387 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6cdea478-ef91-4fff-8369-c5e45afbb677-service-ca\") pod \"6cdea478-ef91-4fff-8369-c5e45afbb677\" (UID: \"6cdea478-ef91-4fff-8369-c5e45afbb677\") "
Apr 23 18:06:54.069608 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:54.069424 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6cdea478-ef91-4fff-8369-c5e45afbb677-console-serving-cert\") pod \"6cdea478-ef91-4fff-8369-c5e45afbb677\" (UID: \"6cdea478-ef91-4fff-8369-c5e45afbb677\") "
Apr 23 18:06:54.069608 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:54.069469 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv6nz\" (UniqueName: \"kubernetes.io/projected/6cdea478-ef91-4fff-8369-c5e45afbb677-kube-api-access-fv6nz\") pod \"6cdea478-ef91-4fff-8369-c5e45afbb677\" (UID: \"6cdea478-ef91-4fff-8369-c5e45afbb677\") "
Apr 23 18:06:54.069608 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:54.069488 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cdea478-ef91-4fff-8369-c5e45afbb677-trusted-ca-bundle\") pod \"6cdea478-ef91-4fff-8369-c5e45afbb677\" (UID: \"6cdea478-ef91-4fff-8369-c5e45afbb677\") "
Apr 23 18:06:54.069608 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:54.069516 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6cdea478-ef91-4fff-8369-c5e45afbb677-console-oauth-config\") pod \"6cdea478-ef91-4fff-8369-c5e45afbb677\" (UID: \"6cdea478-ef91-4fff-8369-c5e45afbb677\") "
Apr 23 18:06:54.069608 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:54.069544 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6cdea478-ef91-4fff-8369-c5e45afbb677-oauth-serving-cert\") pod \"6cdea478-ef91-4fff-8369-c5e45afbb677\" (UID: \"6cdea478-ef91-4fff-8369-c5e45afbb677\") "
Apr 23 18:06:54.069859 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:54.069816 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cdea478-ef91-4fff-8369-c5e45afbb677-console-config" (OuterVolumeSpecName: "console-config") pod "6cdea478-ef91-4fff-8369-c5e45afbb677" (UID: "6cdea478-ef91-4fff-8369-c5e45afbb677"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 18:06:54.069859 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:54.069826 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cdea478-ef91-4fff-8369-c5e45afbb677-service-ca" (OuterVolumeSpecName: "service-ca") pod "6cdea478-ef91-4fff-8369-c5e45afbb677" (UID: "6cdea478-ef91-4fff-8369-c5e45afbb677"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 18:06:54.069949 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:54.069876 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cdea478-ef91-4fff-8369-c5e45afbb677-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6cdea478-ef91-4fff-8369-c5e45afbb677" (UID: "6cdea478-ef91-4fff-8369-c5e45afbb677"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 18:06:54.070115 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:54.070093 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cdea478-ef91-4fff-8369-c5e45afbb677-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6cdea478-ef91-4fff-8369-c5e45afbb677" (UID: "6cdea478-ef91-4fff-8369-c5e45afbb677"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 18:06:54.071808 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:54.071783 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cdea478-ef91-4fff-8369-c5e45afbb677-kube-api-access-fv6nz" (OuterVolumeSpecName: "kube-api-access-fv6nz") pod "6cdea478-ef91-4fff-8369-c5e45afbb677" (UID: "6cdea478-ef91-4fff-8369-c5e45afbb677"). InnerVolumeSpecName "kube-api-access-fv6nz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 23 18:06:54.071808 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:54.071788 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cdea478-ef91-4fff-8369-c5e45afbb677-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6cdea478-ef91-4fff-8369-c5e45afbb677" (UID: "6cdea478-ef91-4fff-8369-c5e45afbb677"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 18:06:54.071984 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:54.071858 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cdea478-ef91-4fff-8369-c5e45afbb677-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6cdea478-ef91-4fff-8369-c5e45afbb677" (UID: "6cdea478-ef91-4fff-8369-c5e45afbb677"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 18:06:54.170408 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:54.170375 2565 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fv6nz\" (UniqueName: \"kubernetes.io/projected/6cdea478-ef91-4fff-8369-c5e45afbb677-kube-api-access-fv6nz\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 18:06:54.170408 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:54.170402 2565 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cdea478-ef91-4fff-8369-c5e45afbb677-trusted-ca-bundle\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 18:06:54.170408 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:54.170412 2565 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6cdea478-ef91-4fff-8369-c5e45afbb677-console-oauth-config\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 18:06:54.170613 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:54.170421 2565 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6cdea478-ef91-4fff-8369-c5e45afbb677-oauth-serving-cert\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 18:06:54.170613 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:54.170432 2565 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6cdea478-ef91-4fff-8369-c5e45afbb677-console-config\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 18:06:54.170613 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:54.170440 2565 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6cdea478-ef91-4fff-8369-c5e45afbb677-service-ca\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 18:06:54.170613 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:54.170448 2565 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6cdea478-ef91-4fff-8369-c5e45afbb677-console-serving-cert\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 18:06:54.504472 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:54.504446 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-78fbd5ddc7-b64lb_6cdea478-ef91-4fff-8369-c5e45afbb677/console/0.log"
Apr 23 18:06:54.504640 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:54.504488 2565 generic.go:358] "Generic (PLEG): container finished" podID="6cdea478-ef91-4fff-8369-c5e45afbb677" containerID="9d9c1693681ed61518eb73c3a35946eda873a5ba0b3dfae219f3758c671e44ac" exitCode=2
Apr 23 18:06:54.504640 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:54.504521 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-78fbd5ddc7-b64lb" event={"ID":"6cdea478-ef91-4fff-8369-c5e45afbb677","Type":"ContainerDied","Data":"9d9c1693681ed61518eb73c3a35946eda873a5ba0b3dfae219f3758c671e44ac"}
Apr 23 18:06:54.504640 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:54.504561 2565 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-78fbd5ddc7-b64lb"
Apr 23 18:06:54.504640 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:54.504573 2565 scope.go:117] "RemoveContainer" containerID="9d9c1693681ed61518eb73c3a35946eda873a5ba0b3dfae219f3758c671e44ac"
Apr 23 18:06:54.504823 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:54.504563 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-78fbd5ddc7-b64lb" event={"ID":"6cdea478-ef91-4fff-8369-c5e45afbb677","Type":"ContainerDied","Data":"6c5b414d01fb1caa7bfb0534297ead6da51db46966d861d01f8fec9af1d72921"}
Apr 23 18:06:54.512685 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:54.512644 2565 scope.go:117] "RemoveContainer" containerID="9d9c1693681ed61518eb73c3a35946eda873a5ba0b3dfae219f3758c671e44ac"
Apr 23 18:06:54.512912 ip-10-0-131-177 kubenswrapper[2565]: E0423 18:06:54.512888 2565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d9c1693681ed61518eb73c3a35946eda873a5ba0b3dfae219f3758c671e44ac\": container with ID starting with 9d9c1693681ed61518eb73c3a35946eda873a5ba0b3dfae219f3758c671e44ac not found: ID does not exist" containerID="9d9c1693681ed61518eb73c3a35946eda873a5ba0b3dfae219f3758c671e44ac"
Apr 23 18:06:54.513027 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:54.512919 2565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d9c1693681ed61518eb73c3a35946eda873a5ba0b3dfae219f3758c671e44ac"} err="failed to get container status \"9d9c1693681ed61518eb73c3a35946eda873a5ba0b3dfae219f3758c671e44ac\": rpc error: code = NotFound desc = could not find container \"9d9c1693681ed61518eb73c3a35946eda873a5ba0b3dfae219f3758c671e44ac\": container with ID starting with 9d9c1693681ed61518eb73c3a35946eda873a5ba0b3dfae219f3758c671e44ac not found: ID does not exist"
Apr 23 18:06:54.526933 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:54.526910 2565 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-78fbd5ddc7-b64lb"]
Apr 23 18:06:54.530081 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:54.530062 2565 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-78fbd5ddc7-b64lb"]
Apr 23 18:06:54.911492 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:06:54.911419 2565 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cdea478-ef91-4fff-8369-c5e45afbb677" path="/var/lib/kubelet/pods/6cdea478-ef91-4fff-8369-c5e45afbb677/volumes"
Apr 23 18:07:16.834657 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:07:16.834581 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_7e3c596a27faede1f97b6bb0972592f6/kube-rbac-proxy-crio/4.log"
Apr 23 18:07:16.835693 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:07:16.835674 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_7e3c596a27faede1f97b6bb0972592f6/kube-rbac-proxy-crio/4.log"
Apr 23 18:12:00.661542 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:00.661504 2565 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w"]
Apr 23 18:12:00.662039 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:00.661844 2565 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w" podUID="0a9cc4a6-864f-450c-a9b2-3a626d274d3b" containerName="kserve-container" containerID="cri-o://7cf1a98933ba1a3fb13a9ad835db376453d743cd633f767e11cc6ee2018a6bfc" gracePeriod=30
Apr 23 18:12:00.662039 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:00.661888 2565 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w" podUID="0a9cc4a6-864f-450c-a9b2-3a626d274d3b" containerName="kube-rbac-proxy" containerID="cri-o://f86fde03d22e5fe2e73f281b102a019f59448d47634397c399aa910b645e39b4" gracePeriod=30
Apr 23 18:12:00.740157 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:00.740119 2565 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8"]
Apr 23 18:12:00.740509 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:00.740493 2565 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6cdea478-ef91-4fff-8369-c5e45afbb677" containerName="console"
Apr 23 18:12:00.740557 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:00.740511 2565 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cdea478-ef91-4fff-8369-c5e45afbb677" containerName="console"
Apr 23 18:12:00.740557 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:00.740525 2565 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="96fb8faf-c4f3-4431-a460-c27f10b43e34" containerName="storage-initializer"
Apr 23 18:12:00.740557 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:00.740531 2565 state_mem.go:107] "Deleted CPUSet assignment" podUID="96fb8faf-c4f3-4431-a460-c27f10b43e34" containerName="storage-initializer"
Apr 23 18:12:00.740557 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:00.740541 2565 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="96fb8faf-c4f3-4431-a460-c27f10b43e34" containerName="kserve-container"
Apr 23 18:12:00.740557 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:00.740546 2565 state_mem.go:107] "Deleted CPUSet assignment" podUID="96fb8faf-c4f3-4431-a460-c27f10b43e34" containerName="kserve-container"
Apr 23 18:12:00.740557 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:00.740553 2565 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="96fb8faf-c4f3-4431-a460-c27f10b43e34" containerName="kube-rbac-proxy"
Apr 23 18:12:00.740557 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:00.740558 2565 state_mem.go:107] "Deleted CPUSet assignment" podUID="96fb8faf-c4f3-4431-a460-c27f10b43e34" containerName="kube-rbac-proxy"
Apr 23 18:12:00.740785 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:00.740603 2565 memory_manager.go:356] "RemoveStaleState removing state" podUID="96fb8faf-c4f3-4431-a460-c27f10b43e34" containerName="kserve-container"
Apr 23 18:12:00.740785 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:00.740611 2565 memory_manager.go:356] "RemoveStaleState removing state" podUID="6cdea478-ef91-4fff-8369-c5e45afbb677" containerName="console"
Apr 23 18:12:00.740785 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:00.740620 2565 memory_manager.go:356] "RemoveStaleState removing state" podUID="96fb8faf-c4f3-4431-a460-c27f10b43e34" containerName="kube-rbac-proxy"
Apr 23 18:12:00.743579 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:00.743562 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8"
Apr 23 18:12:00.745981 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:00.745940 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"success-200-isvc-226e9-predictor-serving-cert\""
Apr 23 18:12:00.746108 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:00.746021 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"success-200-isvc-226e9-kube-rbac-proxy-sar-config\""
Apr 23 18:12:00.760678 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:00.760640 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8"]
Apr 23 18:12:00.864134 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:00.864094 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pdx7\" (UniqueName: \"kubernetes.io/projected/498d0cae-4e8f-4ad9-9039-ff9d5f34e74a-kube-api-access-7pdx7\") pod \"success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8\" (UID: \"498d0cae-4e8f-4ad9-9039-ff9d5f34e74a\") " pod="kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8"
Apr 23 18:12:00.864302 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:00.864163 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/498d0cae-4e8f-4ad9-9039-ff9d5f34e74a-proxy-tls\") pod \"success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8\" (UID: \"498d0cae-4e8f-4ad9-9039-ff9d5f34e74a\") " pod="kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8"
Apr 23 18:12:00.864302 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:00.864201 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"success-200-isvc-226e9-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/498d0cae-4e8f-4ad9-9039-ff9d5f34e74a-success-200-isvc-226e9-kube-rbac-proxy-sar-config\") pod \"success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8\" (UID: \"498d0cae-4e8f-4ad9-9039-ff9d5f34e74a\") " pod="kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8"
Apr 23 18:12:00.965163 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:00.965053 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/498d0cae-4e8f-4ad9-9039-ff9d5f34e74a-proxy-tls\") pod \"success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8\" (UID: \"498d0cae-4e8f-4ad9-9039-ff9d5f34e74a\") " pod="kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8"
Apr 23 18:12:00.965163 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:00.965131 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"success-200-isvc-226e9-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/498d0cae-4e8f-4ad9-9039-ff9d5f34e74a-success-200-isvc-226e9-kube-rbac-proxy-sar-config\") pod \"success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8\" (UID: \"498d0cae-4e8f-4ad9-9039-ff9d5f34e74a\") " pod="kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8"
Apr 23 18:12:00.965398 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:00.965200 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7pdx7\" (UniqueName: \"kubernetes.io/projected/498d0cae-4e8f-4ad9-9039-ff9d5f34e74a-kube-api-access-7pdx7\") pod \"success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8\" (UID: \"498d0cae-4e8f-4ad9-9039-ff9d5f34e74a\") " pod="kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8"
Apr 23 18:12:00.965789 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:00.965765 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"success-200-isvc-226e9-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/498d0cae-4e8f-4ad9-9039-ff9d5f34e74a-success-200-isvc-226e9-kube-rbac-proxy-sar-config\") pod \"success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8\" (UID: \"498d0cae-4e8f-4ad9-9039-ff9d5f34e74a\") " pod="kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8"
Apr 23 18:12:00.967749 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:00.967728 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/498d0cae-4e8f-4ad9-9039-ff9d5f34e74a-proxy-tls\") pod \"success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8\" (UID: \"498d0cae-4e8f-4ad9-9039-ff9d5f34e74a\") " pod="kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8"
Apr 23 18:12:00.974107 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:00.974081 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pdx7\" (UniqueName: \"kubernetes.io/projected/498d0cae-4e8f-4ad9-9039-ff9d5f34e74a-kube-api-access-7pdx7\") pod \"success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8\" (UID: \"498d0cae-4e8f-4ad9-9039-ff9d5f34e74a\") " pod="kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8"
Apr 23 18:12:01.053293 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:01.053249 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8"
Apr 23 18:12:01.179557 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:01.179529 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8"]
Apr 23 18:12:01.182210 ip-10-0-131-177 kubenswrapper[2565]: W0423 18:12:01.182183 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod498d0cae_4e8f_4ad9_9039_ff9d5f34e74a.slice/crio-ff4fccc1317723c58afef73eaf1cf0a3d8d64aa4f232643c0ec13be52d1ed4e5 WatchSource:0}: Error finding container ff4fccc1317723c58afef73eaf1cf0a3d8d64aa4f232643c0ec13be52d1ed4e5: Status 404 returned error can't find the container with id ff4fccc1317723c58afef73eaf1cf0a3d8d64aa4f232643c0ec13be52d1ed4e5
Apr 23 18:12:01.184036 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:01.184012 2565 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 23 18:12:01.467935 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:01.467898 2565 generic.go:358] "Generic (PLEG): container finished" podID="0a9cc4a6-864f-450c-a9b2-3a626d274d3b" containerID="f86fde03d22e5fe2e73f281b102a019f59448d47634397c399aa910b645e39b4" exitCode=2
Apr 23 18:12:01.468129 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:01.468000 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w" event={"ID":"0a9cc4a6-864f-450c-a9b2-3a626d274d3b","Type":"ContainerDied","Data":"f86fde03d22e5fe2e73f281b102a019f59448d47634397c399aa910b645e39b4"}
Apr 23 18:12:01.469655 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:01.469629 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8" event={"ID":"498d0cae-4e8f-4ad9-9039-ff9d5f34e74a","Type":"ContainerStarted","Data":"db0b195f27aa6075f0a6249cf5cc631538dc33c30aeafd96639c4f75108294e3"}
Apr 23 18:12:01.469776 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:01.469661 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8" event={"ID":"498d0cae-4e8f-4ad9-9039-ff9d5f34e74a","Type":"ContainerStarted","Data":"41ca5767376b082de0048065be00cc11db0b782ed530fd2ac8c2c1215a32785e"}
Apr 23 18:12:01.469776 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:01.469674 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8" event={"ID":"498d0cae-4e8f-4ad9-9039-ff9d5f34e74a","Type":"ContainerStarted","Data":"ff4fccc1317723c58afef73eaf1cf0a3d8d64aa4f232643c0ec13be52d1ed4e5"}
Apr 23 18:12:01.469776 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:01.469763 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8"
Apr 23 18:12:01.490060 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:01.490019 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8" podStartSLOduration=1.4900047029999999 podStartE2EDuration="1.490004703s" podCreationTimestamp="2026-04-23 18:12:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:12:01.487875155 +0000 UTC m=+1185.275614436" watchObservedRunningTime="2026-04-23 18:12:01.490004703 +0000 UTC m=+1185.277743962"
Apr 23 18:12:02.473336 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:02.473302 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8"
Apr 23 18:12:02.474778 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:02.474747 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8" podUID="498d0cae-4e8f-4ad9-9039-ff9d5f34e74a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.26:8080: connect: connection refused"
Apr 23 18:12:03.476702 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:03.476660 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8" podUID="498d0cae-4e8f-4ad9-9039-ff9d5f34e74a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.26:8080: connect: connection refused"
Apr 23 18:12:03.705381 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:03.705332 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w" podUID="0a9cc4a6-864f-450c-a9b2-3a626d274d3b" containerName="kube-rbac-proxy" probeResult="failure" output="Get \"https://10.134.0.24:8643/healthz\": dial tcp 10.134.0.24:8643: connect: connection refused"
Apr 23 18:12:03.710705 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:03.710673 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w" podUID="0a9cc4a6-864f-450c-a9b2-3a626d274d3b" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.24:8080: connect: connection refused"
Apr 23 18:12:04.109360 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:04.109338 2565 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w"
Apr 23 18:12:04.290571 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:04.290537 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q94c9\" (UniqueName: \"kubernetes.io/projected/0a9cc4a6-864f-450c-a9b2-3a626d274d3b-kube-api-access-q94c9\") pod \"0a9cc4a6-864f-450c-a9b2-3a626d274d3b\" (UID: \"0a9cc4a6-864f-450c-a9b2-3a626d274d3b\") "
Apr 23 18:12:04.290724 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:04.290579 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0a9cc4a6-864f-450c-a9b2-3a626d274d3b-proxy-tls\") pod \"0a9cc4a6-864f-450c-a9b2-3a626d274d3b\" (UID: \"0a9cc4a6-864f-450c-a9b2-3a626d274d3b\") "
Apr 23 18:12:04.290724 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:04.290621 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"success-200-isvc-3d086-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/0a9cc4a6-864f-450c-a9b2-3a626d274d3b-success-200-isvc-3d086-kube-rbac-proxy-sar-config\") pod \"0a9cc4a6-864f-450c-a9b2-3a626d274d3b\" (UID: \"0a9cc4a6-864f-450c-a9b2-3a626d274d3b\") "
Apr 23 18:12:04.291107 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:04.291076 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a9cc4a6-864f-450c-a9b2-3a626d274d3b-success-200-isvc-3d086-kube-rbac-proxy-sar-config" (OuterVolumeSpecName: "success-200-isvc-3d086-kube-rbac-proxy-sar-config") pod "0a9cc4a6-864f-450c-a9b2-3a626d274d3b" (UID: "0a9cc4a6-864f-450c-a9b2-3a626d274d3b"). InnerVolumeSpecName "success-200-isvc-3d086-kube-rbac-proxy-sar-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 18:12:04.292602 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:04.292579 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a9cc4a6-864f-450c-a9b2-3a626d274d3b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0a9cc4a6-864f-450c-a9b2-3a626d274d3b" (UID: "0a9cc4a6-864f-450c-a9b2-3a626d274d3b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 18:12:04.292674 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:04.292596 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a9cc4a6-864f-450c-a9b2-3a626d274d3b-kube-api-access-q94c9" (OuterVolumeSpecName: "kube-api-access-q94c9") pod "0a9cc4a6-864f-450c-a9b2-3a626d274d3b" (UID: "0a9cc4a6-864f-450c-a9b2-3a626d274d3b"). InnerVolumeSpecName "kube-api-access-q94c9".
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 18:12:04.391943 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:04.391900 2565 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q94c9\" (UniqueName: \"kubernetes.io/projected/0a9cc4a6-864f-450c-a9b2-3a626d274d3b-kube-api-access-q94c9\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\"" Apr 23 18:12:04.391943 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:04.391935 2565 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0a9cc4a6-864f-450c-a9b2-3a626d274d3b-proxy-tls\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\"" Apr 23 18:12:04.391943 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:04.391950 2565 reconciler_common.go:299] "Volume detached for volume \"success-200-isvc-3d086-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/0a9cc4a6-864f-450c-a9b2-3a626d274d3b-success-200-isvc-3d086-kube-rbac-proxy-sar-config\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\"" Apr 23 18:12:04.481482 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:04.481443 2565 generic.go:358] "Generic (PLEG): container finished" podID="0a9cc4a6-864f-450c-a9b2-3a626d274d3b" containerID="7cf1a98933ba1a3fb13a9ad835db376453d743cd633f767e11cc6ee2018a6bfc" exitCode=0 Apr 23 18:12:04.481944 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:04.481529 2565 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w" Apr 23 18:12:04.481944 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:04.481528 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w" event={"ID":"0a9cc4a6-864f-450c-a9b2-3a626d274d3b","Type":"ContainerDied","Data":"7cf1a98933ba1a3fb13a9ad835db376453d743cd633f767e11cc6ee2018a6bfc"} Apr 23 18:12:04.481944 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:04.481575 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w" event={"ID":"0a9cc4a6-864f-450c-a9b2-3a626d274d3b","Type":"ContainerDied","Data":"4fe5fbcf3937d1e335be48aec9cae64408d8f82cd992ace73d9e4764b79cc0aa"} Apr 23 18:12:04.481944 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:04.481594 2565 scope.go:117] "RemoveContainer" containerID="f86fde03d22e5fe2e73f281b102a019f59448d47634397c399aa910b645e39b4" Apr 23 18:12:04.490679 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:04.490664 2565 scope.go:117] "RemoveContainer" containerID="7cf1a98933ba1a3fb13a9ad835db376453d743cd633f767e11cc6ee2018a6bfc" Apr 23 18:12:04.497609 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:04.497594 2565 scope.go:117] "RemoveContainer" containerID="f86fde03d22e5fe2e73f281b102a019f59448d47634397c399aa910b645e39b4" Apr 23 18:12:04.497850 ip-10-0-131-177 kubenswrapper[2565]: E0423 18:12:04.497831 2565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f86fde03d22e5fe2e73f281b102a019f59448d47634397c399aa910b645e39b4\": container with ID starting with f86fde03d22e5fe2e73f281b102a019f59448d47634397c399aa910b645e39b4 not found: ID does not exist" containerID="f86fde03d22e5fe2e73f281b102a019f59448d47634397c399aa910b645e39b4" Apr 23 18:12:04.497915 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:04.497863 2565 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f86fde03d22e5fe2e73f281b102a019f59448d47634397c399aa910b645e39b4"} err="failed to get container status \"f86fde03d22e5fe2e73f281b102a019f59448d47634397c399aa910b645e39b4\": rpc error: code = NotFound desc = could not find container \"f86fde03d22e5fe2e73f281b102a019f59448d47634397c399aa910b645e39b4\": container with ID starting with f86fde03d22e5fe2e73f281b102a019f59448d47634397c399aa910b645e39b4 not found: ID does not exist" Apr 23 18:12:04.497915 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:04.497888 2565 scope.go:117] "RemoveContainer" containerID="7cf1a98933ba1a3fb13a9ad835db376453d743cd633f767e11cc6ee2018a6bfc" Apr 23 18:12:04.498150 ip-10-0-131-177 kubenswrapper[2565]: E0423 18:12:04.498133 2565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cf1a98933ba1a3fb13a9ad835db376453d743cd633f767e11cc6ee2018a6bfc\": container with ID starting with 7cf1a98933ba1a3fb13a9ad835db376453d743cd633f767e11cc6ee2018a6bfc not found: ID does not exist" containerID="7cf1a98933ba1a3fb13a9ad835db376453d743cd633f767e11cc6ee2018a6bfc" Apr 23 18:12:04.498189 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:04.498156 2565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cf1a98933ba1a3fb13a9ad835db376453d743cd633f767e11cc6ee2018a6bfc"} err="failed to get container status \"7cf1a98933ba1a3fb13a9ad835db376453d743cd633f767e11cc6ee2018a6bfc\": rpc error: code = NotFound desc = could not find container \"7cf1a98933ba1a3fb13a9ad835db376453d743cd633f767e11cc6ee2018a6bfc\": container with ID starting with 7cf1a98933ba1a3fb13a9ad835db376453d743cd633f767e11cc6ee2018a6bfc not found: ID does not exist" Apr 23 18:12:04.503673 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:04.503651 2565 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w"] Apr 23 18:12:04.507933 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:04.507912 2565 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-3d086-predictor-67ccc5b954-l2q2w"] Apr 23 18:12:04.910753 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:04.910720 2565 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a9cc4a6-864f-450c-a9b2-3a626d274d3b" path="/var/lib/kubelet/pods/0a9cc4a6-864f-450c-a9b2-3a626d274d3b/volumes" Apr 23 18:12:08.481911 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:08.481882 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8" Apr 23 18:12:08.482460 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:08.482430 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8" podUID="498d0cae-4e8f-4ad9-9039-ff9d5f34e74a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.26:8080: connect: connection refused" Apr 23 18:12:16.856012 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:16.855980 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_7e3c596a27faede1f97b6bb0972592f6/kube-rbac-proxy-crio/4.log" Apr 23 18:12:16.856406 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:16.856248 2565 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_7e3c596a27faede1f97b6bb0972592f6/kube-rbac-proxy-crio/4.log" Apr 23 18:12:18.482525 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:18.482481 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8" podUID="498d0cae-4e8f-4ad9-9039-ff9d5f34e74a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.26:8080: connect: connection refused" Apr 23 18:12:28.483429 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:28.483389 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8" podUID="498d0cae-4e8f-4ad9-9039-ff9d5f34e74a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.26:8080: connect: connection refused" Apr 23 18:12:38.482635 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:38.482594 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8" podUID="498d0cae-4e8f-4ad9-9039-ff9d5f34e74a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.26:8080: connect: connection refused" Apr 23 18:12:40.687272 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:40.687241 2565 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls"] Apr 23 18:12:40.687742 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:40.687505 2565 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls" podUID="59ee2c69-d1b8-4c6f-a69b-20e2cf2de714" containerName="kserve-container" containerID="cri-o://185e5d4af2adca15f92dc3886cfd53e191fdce6ee1acdb575b341b74071d48a8" gracePeriod=30 Apr 23 18:12:40.687742 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:40.687555 2565 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls" podUID="59ee2c69-d1b8-4c6f-a69b-20e2cf2de714" containerName="kube-rbac-proxy" containerID="cri-o://aa44ab94939ade8e652de9639bfc4f1df5072e77852a32aaf27dd24b75685b70" gracePeriod=30 Apr 23 18:12:40.730840 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:40.730811 2565 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89"] Apr 23 18:12:40.731200 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:40.731185 2565 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0a9cc4a6-864f-450c-a9b2-3a626d274d3b" containerName="kserve-container" Apr 23 18:12:40.731267 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:40.731201 2565 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a9cc4a6-864f-450c-a9b2-3a626d274d3b" containerName="kserve-container" Apr 23 18:12:40.731267 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:40.731217 2565 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0a9cc4a6-864f-450c-a9b2-3a626d274d3b" containerName="kube-rbac-proxy" Apr 23 18:12:40.731267 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:40.731222 2565 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a9cc4a6-864f-450c-a9b2-3a626d274d3b" containerName="kube-rbac-proxy" Apr 23 18:12:40.731416 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:40.731270 2565 
memory_manager.go:356] "RemoveStaleState removing state" podUID="0a9cc4a6-864f-450c-a9b2-3a626d274d3b" containerName="kserve-container" Apr 23 18:12:40.731416 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:40.731278 2565 memory_manager.go:356] "RemoveStaleState removing state" podUID="0a9cc4a6-864f-450c-a9b2-3a626d274d3b" containerName="kube-rbac-proxy" Apr 23 18:12:40.734084 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:40.734068 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89" Apr 23 18:12:40.736592 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:40.736574 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"success-200-isvc-d864a-predictor-serving-cert\"" Apr 23 18:12:40.736837 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:40.736819 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"success-200-isvc-d864a-kube-rbac-proxy-sar-config\"" Apr 23 18:12:40.745470 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:40.745446 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89"] Apr 23 18:12:40.769801 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:40.769776 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"success-200-isvc-d864a-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/ea5c6196-c9a6-4e90-92bd-0fc565afcb05-success-200-isvc-d864a-kube-rbac-proxy-sar-config\") pod \"success-200-isvc-d864a-predictor-7d759dc8c4-8pc89\" (UID: \"ea5c6196-c9a6-4e90-92bd-0fc565afcb05\") " pod="kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89" Apr 23 18:12:40.769943 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:40.769817 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsk82\" (UniqueName: \"kubernetes.io/projected/ea5c6196-c9a6-4e90-92bd-0fc565afcb05-kube-api-access-rsk82\") pod \"success-200-isvc-d864a-predictor-7d759dc8c4-8pc89\" (UID: \"ea5c6196-c9a6-4e90-92bd-0fc565afcb05\") " pod="kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89" Apr 23 18:12:40.769943 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:40.769841 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ea5c6196-c9a6-4e90-92bd-0fc565afcb05-proxy-tls\") pod \"success-200-isvc-d864a-predictor-7d759dc8c4-8pc89\" (UID: \"ea5c6196-c9a6-4e90-92bd-0fc565afcb05\") " pod="kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89" Apr 23 18:12:40.870596 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:40.870555 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rsk82\" (UniqueName: \"kubernetes.io/projected/ea5c6196-c9a6-4e90-92bd-0fc565afcb05-kube-api-access-rsk82\") pod \"success-200-isvc-d864a-predictor-7d759dc8c4-8pc89\" (UID: \"ea5c6196-c9a6-4e90-92bd-0fc565afcb05\") " pod="kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89" Apr 23 18:12:40.870778 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:40.870602 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ea5c6196-c9a6-4e90-92bd-0fc565afcb05-proxy-tls\") pod 
\"success-200-isvc-d864a-predictor-7d759dc8c4-8pc89\" (UID: \"ea5c6196-c9a6-4e90-92bd-0fc565afcb05\") " pod="kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89" Apr 23 18:12:40.870778 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:40.870692 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"success-200-isvc-d864a-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/ea5c6196-c9a6-4e90-92bd-0fc565afcb05-success-200-isvc-d864a-kube-rbac-proxy-sar-config\") pod \"success-200-isvc-d864a-predictor-7d759dc8c4-8pc89\" (UID: \"ea5c6196-c9a6-4e90-92bd-0fc565afcb05\") " pod="kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89" Apr 23 18:12:40.870891 ip-10-0-131-177 kubenswrapper[2565]: E0423 18:12:40.870772 2565 secret.go:189] Couldn't get secret kserve-ci-e2e-test/success-200-isvc-d864a-predictor-serving-cert: secret "success-200-isvc-d864a-predictor-serving-cert" not found Apr 23 18:12:40.870891 ip-10-0-131-177 kubenswrapper[2565]: E0423 18:12:40.870849 2565 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea5c6196-c9a6-4e90-92bd-0fc565afcb05-proxy-tls podName:ea5c6196-c9a6-4e90-92bd-0fc565afcb05 nodeName:}" failed. No retries permitted until 2026-04-23 18:12:41.370832863 +0000 UTC m=+1225.158572108 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/ea5c6196-c9a6-4e90-92bd-0fc565afcb05-proxy-tls") pod "success-200-isvc-d864a-predictor-7d759dc8c4-8pc89" (UID: "ea5c6196-c9a6-4e90-92bd-0fc565afcb05") : secret "success-200-isvc-d864a-predictor-serving-cert" not found Apr 23 18:12:40.871362 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:40.871339 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"success-200-isvc-d864a-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/ea5c6196-c9a6-4e90-92bd-0fc565afcb05-success-200-isvc-d864a-kube-rbac-proxy-sar-config\") pod \"success-200-isvc-d864a-predictor-7d759dc8c4-8pc89\" (UID: \"ea5c6196-c9a6-4e90-92bd-0fc565afcb05\") " pod="kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89" Apr 23 18:12:40.879521 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:40.879499 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsk82\" (UniqueName: \"kubernetes.io/projected/ea5c6196-c9a6-4e90-92bd-0fc565afcb05-kube-api-access-rsk82\") pod \"success-200-isvc-d864a-predictor-7d759dc8c4-8pc89\" (UID: \"ea5c6196-c9a6-4e90-92bd-0fc565afcb05\") " pod="kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89" Apr 23 18:12:41.375471 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:41.375431 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ea5c6196-c9a6-4e90-92bd-0fc565afcb05-proxy-tls\") pod \"success-200-isvc-d864a-predictor-7d759dc8c4-8pc89\" (UID: \"ea5c6196-c9a6-4e90-92bd-0fc565afcb05\") " pod="kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89" Apr 23 18:12:41.375648 ip-10-0-131-177 kubenswrapper[2565]: E0423 18:12:41.375580 2565 secret.go:189] Couldn't get secret kserve-ci-e2e-test/success-200-isvc-d864a-predictor-serving-cert: secret "success-200-isvc-d864a-predictor-serving-cert" not found Apr 23 18:12:41.375648 ip-10-0-131-177 kubenswrapper[2565]: E0423 18:12:41.375645 2565 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/ea5c6196-c9a6-4e90-92bd-0fc565afcb05-proxy-tls podName:ea5c6196-c9a6-4e90-92bd-0fc565afcb05 nodeName:}" failed. No retries permitted until 2026-04-23 18:12:42.375627052 +0000 UTC m=+1226.163366294 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/ea5c6196-c9a6-4e90-92bd-0fc565afcb05-proxy-tls") pod "success-200-isvc-d864a-predictor-7d759dc8c4-8pc89" (UID: "ea5c6196-c9a6-4e90-92bd-0fc565afcb05") : secret "success-200-isvc-d864a-predictor-serving-cert" not found Apr 23 18:12:41.600634 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:41.600602 2565 generic.go:358] "Generic (PLEG): container finished" podID="59ee2c69-d1b8-4c6f-a69b-20e2cf2de714" containerID="aa44ab94939ade8e652de9639bfc4f1df5072e77852a32aaf27dd24b75685b70" exitCode=2 Apr 23 18:12:41.600789 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:41.600673 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls" event={"ID":"59ee2c69-d1b8-4c6f-a69b-20e2cf2de714","Type":"ContainerDied","Data":"aa44ab94939ade8e652de9639bfc4f1df5072e77852a32aaf27dd24b75685b70"} Apr 23 18:12:42.384882 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:42.384839 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ea5c6196-c9a6-4e90-92bd-0fc565afcb05-proxy-tls\") pod \"success-200-isvc-d864a-predictor-7d759dc8c4-8pc89\" (UID: \"ea5c6196-c9a6-4e90-92bd-0fc565afcb05\") " pod="kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89" Apr 23 18:12:42.387193 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:42.387172 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ea5c6196-c9a6-4e90-92bd-0fc565afcb05-proxy-tls\") pod \"success-200-isvc-d864a-predictor-7d759dc8c4-8pc89\" (UID: \"ea5c6196-c9a6-4e90-92bd-0fc565afcb05\") " pod="kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89" Apr 23 18:12:42.546302 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:42.546264 2565 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89" Apr 23 18:12:42.670354 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:42.670329 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89"] Apr 23 18:12:42.672258 ip-10-0-131-177 kubenswrapper[2565]: W0423 18:12:42.672219 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea5c6196_c9a6_4e90_92bd_0fc565afcb05.slice/crio-6afa1bd5fd62c81c20a684f7dff329f0a54d20f137fc4fa29792d122676d1677 WatchSource:0}: Error finding container 6afa1bd5fd62c81c20a684f7dff329f0a54d20f137fc4fa29792d122676d1677: Status 404 returned error can't find the container with id 6afa1bd5fd62c81c20a684f7dff329f0a54d20f137fc4fa29792d122676d1677 Apr 23 18:12:43.614546 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:43.614511 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89" event={"ID":"ea5c6196-c9a6-4e90-92bd-0fc565afcb05","Type":"ContainerStarted","Data":"d26db36aeb7ab2013937a470964f56485d8faee25f303ff1135f75dd1496ad41"} Apr 23 18:12:43.614546 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:43.614548 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89" event={"ID":"ea5c6196-c9a6-4e90-92bd-0fc565afcb05","Type":"ContainerStarted","Data":"0b61d55a0889f1634e4fc7940bf4c0d6079d529322d0205181b0f32bb46b192f"} Apr 23 18:12:43.614985 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:43.614559 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89" event={"ID":"ea5c6196-c9a6-4e90-92bd-0fc565afcb05","Type":"ContainerStarted","Data":"6afa1bd5fd62c81c20a684f7dff329f0a54d20f137fc4fa29792d122676d1677"} Apr 23 18:12:43.614985 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:43.614658 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89" Apr 23 18:12:43.634285 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:43.634238 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89" podStartSLOduration=3.634222402 podStartE2EDuration="3.634222402s" podCreationTimestamp="2026-04-23 18:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:12:43.633166654 +0000 UTC m=+1227.420905938" watchObservedRunningTime="2026-04-23 18:12:43.634222402 +0000 UTC m=+1227.421961662" Apr 23 18:12:44.136133 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:44.136109 2565 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls" Apr 23 18:12:44.200837 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:44.200763 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/59ee2c69-d1b8-4c6f-a69b-20e2cf2de714-proxy-tls\") pod \"59ee2c69-d1b8-4c6f-a69b-20e2cf2de714\" (UID: \"59ee2c69-d1b8-4c6f-a69b-20e2cf2de714\") " Apr 23 18:12:44.200837 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:44.200801 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"success-200-isvc-0065f-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/59ee2c69-d1b8-4c6f-a69b-20e2cf2de714-success-200-isvc-0065f-kube-rbac-proxy-sar-config\") pod \"59ee2c69-d1b8-4c6f-a69b-20e2cf2de714\" (UID: \"59ee2c69-d1b8-4c6f-a69b-20e2cf2de714\") " Apr 23 18:12:44.200837 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:44.200827 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbmjj\" (UniqueName: \"kubernetes.io/projected/59ee2c69-d1b8-4c6f-a69b-20e2cf2de714-kube-api-access-zbmjj\") pod \"59ee2c69-d1b8-4c6f-a69b-20e2cf2de714\" (UID: \"59ee2c69-d1b8-4c6f-a69b-20e2cf2de714\") " Apr 23 18:12:44.201216 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:44.201189 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59ee2c69-d1b8-4c6f-a69b-20e2cf2de714-success-200-isvc-0065f-kube-rbac-proxy-sar-config" (OuterVolumeSpecName: "success-200-isvc-0065f-kube-rbac-proxy-sar-config") pod "59ee2c69-d1b8-4c6f-a69b-20e2cf2de714" (UID: "59ee2c69-d1b8-4c6f-a69b-20e2cf2de714"). InnerVolumeSpecName "success-200-isvc-0065f-kube-rbac-proxy-sar-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:12:44.202970 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:44.202916 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59ee2c69-d1b8-4c6f-a69b-20e2cf2de714-kube-api-access-zbmjj" (OuterVolumeSpecName: "kube-api-access-zbmjj") pod "59ee2c69-d1b8-4c6f-a69b-20e2cf2de714" (UID: "59ee2c69-d1b8-4c6f-a69b-20e2cf2de714"). InnerVolumeSpecName "kube-api-access-zbmjj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 18:12:44.203072 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:44.202949 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59ee2c69-d1b8-4c6f-a69b-20e2cf2de714-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "59ee2c69-d1b8-4c6f-a69b-20e2cf2de714" (UID: "59ee2c69-d1b8-4c6f-a69b-20e2cf2de714"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 18:12:44.302350 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:44.302321 2565 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/59ee2c69-d1b8-4c6f-a69b-20e2cf2de714-proxy-tls\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\"" Apr 23 18:12:44.302350 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:44.302347 2565 reconciler_common.go:299] "Volume detached for volume \"success-200-isvc-0065f-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/59ee2c69-d1b8-4c6f-a69b-20e2cf2de714-success-200-isvc-0065f-kube-rbac-proxy-sar-config\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\"" Apr 23 18:12:44.302515 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:44.302358 2565 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zbmjj\" (UniqueName: \"kubernetes.io/projected/59ee2c69-d1b8-4c6f-a69b-20e2cf2de714-kube-api-access-zbmjj\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\"" Apr 23 18:12:44.618467 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:44.618433 2565 generic.go:358] "Generic (PLEG): container finished" podID="59ee2c69-d1b8-4c6f-a69b-20e2cf2de714" containerID="185e5d4af2adca15f92dc3886cfd53e191fdce6ee1acdb575b341b74071d48a8" exitCode=0 Apr 23 18:12:44.618931 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:44.618525 2565 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls" Apr 23 18:12:44.618931 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:44.618522 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls" event={"ID":"59ee2c69-d1b8-4c6f-a69b-20e2cf2de714","Type":"ContainerDied","Data":"185e5d4af2adca15f92dc3886cfd53e191fdce6ee1acdb575b341b74071d48a8"} Apr 23 18:12:44.618931 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:44.618656 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls" event={"ID":"59ee2c69-d1b8-4c6f-a69b-20e2cf2de714","Type":"ContainerDied","Data":"b1ec26b8e87fd109d7c61ff0741f084dff5479f9a9838fc57f5c16a629ea7193"} Apr 23 18:12:44.618931 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:44.618674 2565 scope.go:117] "RemoveContainer" containerID="aa44ab94939ade8e652de9639bfc4f1df5072e77852a32aaf27dd24b75685b70" Apr 23 18:12:44.619190 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:44.619160 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89" Apr 23 18:12:44.620764 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:44.620737 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89" podUID="ea5c6196-c9a6-4e90-92bd-0fc565afcb05" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.27:8080: connect: connection refused" Apr 23 18:12:44.627548 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:44.627535 2565 scope.go:117] "RemoveContainer" containerID="185e5d4af2adca15f92dc3886cfd53e191fdce6ee1acdb575b341b74071d48a8" Apr 23 18:12:44.634910 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:44.634889 2565 scope.go:117] "RemoveContainer" containerID="aa44ab94939ade8e652de9639bfc4f1df5072e77852a32aaf27dd24b75685b70" Apr 23 18:12:44.635146 ip-10-0-131-177 
kubenswrapper[2565]: E0423 18:12:44.635130 2565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa44ab94939ade8e652de9639bfc4f1df5072e77852a32aaf27dd24b75685b70\": container with ID starting with aa44ab94939ade8e652de9639bfc4f1df5072e77852a32aaf27dd24b75685b70 not found: ID does not exist" containerID="aa44ab94939ade8e652de9639bfc4f1df5072e77852a32aaf27dd24b75685b70" Apr 23 18:12:44.635194 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:44.635156 2565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa44ab94939ade8e652de9639bfc4f1df5072e77852a32aaf27dd24b75685b70"} err="failed to get container status \"aa44ab94939ade8e652de9639bfc4f1df5072e77852a32aaf27dd24b75685b70\": rpc error: code = NotFound desc = could not find container \"aa44ab94939ade8e652de9639bfc4f1df5072e77852a32aaf27dd24b75685b70\": container with ID starting with aa44ab94939ade8e652de9639bfc4f1df5072e77852a32aaf27dd24b75685b70 not found: ID does not exist" Apr 23 18:12:44.635194 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:44.635173 2565 scope.go:117] "RemoveContainer" containerID="185e5d4af2adca15f92dc3886cfd53e191fdce6ee1acdb575b341b74071d48a8" Apr 23 18:12:44.635358 ip-10-0-131-177 kubenswrapper[2565]: E0423 18:12:44.635343 2565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"185e5d4af2adca15f92dc3886cfd53e191fdce6ee1acdb575b341b74071d48a8\": container with ID starting with 185e5d4af2adca15f92dc3886cfd53e191fdce6ee1acdb575b341b74071d48a8 not found: ID does not exist" containerID="185e5d4af2adca15f92dc3886cfd53e191fdce6ee1acdb575b341b74071d48a8" Apr 23 18:12:44.635406 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:44.635362 2565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"185e5d4af2adca15f92dc3886cfd53e191fdce6ee1acdb575b341b74071d48a8"} err="failed to get container status \"185e5d4af2adca15f92dc3886cfd53e191fdce6ee1acdb575b341b74071d48a8\": rpc error: code = NotFound desc = could not find container \"185e5d4af2adca15f92dc3886cfd53e191fdce6ee1acdb575b341b74071d48a8\": container with ID starting with 185e5d4af2adca15f92dc3886cfd53e191fdce6ee1acdb575b341b74071d48a8 not found: ID does not exist" Apr 23 18:12:44.642541 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:44.642520 2565 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls"] Apr 23 18:12:44.646736 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:44.646716 2565 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-0065f-predictor-76669b4f4-cd5ls"] Apr 23 18:12:44.911102 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:44.911029 2565 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59ee2c69-d1b8-4c6f-a69b-20e2cf2de714" path="/var/lib/kubelet/pods/59ee2c69-d1b8-4c6f-a69b-20e2cf2de714/volumes" Apr 23 18:12:45.622411 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:45.622364 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89" podUID="ea5c6196-c9a6-4e90-92bd-0fc565afcb05" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.27:8080: connect: connection refused" Apr 23 18:12:48.483131 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:48.483102 2565 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8" Apr 23 18:12:50.627977 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:50.627927 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89" Apr 23 18:12:50.628371 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:12:50.628349 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89" podUID="ea5c6196-c9a6-4e90-92bd-0fc565afcb05" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.27:8080: connect: connection refused" Apr 23 18:13:00.628907 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:00.628813 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89" podUID="ea5c6196-c9a6-4e90-92bd-0fc565afcb05" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.27:8080: connect: connection refused" Apr 23 18:13:10.629099 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:10.629055 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89" podUID="ea5c6196-c9a6-4e90-92bd-0fc565afcb05" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.27:8080: connect: connection refused" Apr 23 18:13:10.974808 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:10.974717 2565 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8"] Apr 23 18:13:10.975120 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:10.975085 2565 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8" podUID="498d0cae-4e8f-4ad9-9039-ff9d5f34e74a" containerName="kserve-container" containerID="cri-o://41ca5767376b082de0048065be00cc11db0b782ed530fd2ac8c2c1215a32785e" gracePeriod=30 Apr 23 18:13:10.975295 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:10.975140 2565 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8" podUID="498d0cae-4e8f-4ad9-9039-ff9d5f34e74a" containerName="kube-rbac-proxy" containerID="cri-o://db0b195f27aa6075f0a6249cf5cc631538dc33c30aeafd96639c4f75108294e3" gracePeriod=30 Apr 23 18:13:11.001133 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:11.001097 2565 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs"] Apr 23 18:13:11.001490 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:11.001470 2565 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="59ee2c69-d1b8-4c6f-a69b-20e2cf2de714" containerName="kube-rbac-proxy" Apr 23 18:13:11.001490 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:11.001488 2565 state_mem.go:107] "Deleted CPUSet assignment" podUID="59ee2c69-d1b8-4c6f-a69b-20e2cf2de714" containerName="kube-rbac-proxy" Apr 23 18:13:11.001637 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:11.001508 2565 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="59ee2c69-d1b8-4c6f-a69b-20e2cf2de714" containerName="kserve-container" Apr 23 18:13:11.001637 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:11.001514 2565 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="59ee2c69-d1b8-4c6f-a69b-20e2cf2de714" containerName="kserve-container" Apr 23 18:13:11.001637 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:11.001577 2565 memory_manager.go:356] "RemoveStaleState removing state" podUID="59ee2c69-d1b8-4c6f-a69b-20e2cf2de714" containerName="kserve-container" Apr 23 18:13:11.001637 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:11.001587 2565 memory_manager.go:356] "RemoveStaleState removing state" podUID="59ee2c69-d1b8-4c6f-a69b-20e2cf2de714" containerName="kube-rbac-proxy" Apr 23 18:13:11.004860 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:11.004834 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs" Apr 23 18:13:11.007437 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:11.007406 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"success-200-isvc-5c6fb-predictor-serving-cert\"" Apr 23 18:13:11.007842 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:11.007488 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"success-200-isvc-5c6fb-kube-rbac-proxy-sar-config\"" Apr 23 18:13:11.015556 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:11.015491 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs"] Apr 23 18:13:11.121518 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:11.121484 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aeb4642d-f56a-442e-9c6e-e2354973f7cb-proxy-tls\") pod \"success-200-isvc-5c6fb-predictor-85d68895bd-htcgs\" (UID: \"aeb4642d-f56a-442e-9c6e-e2354973f7cb\") " pod="kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs" Apr 23 18:13:11.121697 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:11.121527 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"success-200-isvc-5c6fb-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/aeb4642d-f56a-442e-9c6e-e2354973f7cb-success-200-isvc-5c6fb-kube-rbac-proxy-sar-config\") pod \"success-200-isvc-5c6fb-predictor-85d68895bd-htcgs\" (UID: \"aeb4642d-f56a-442e-9c6e-e2354973f7cb\") " pod="kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs" Apr 23 18:13:11.121697 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:11.121612 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xv6z\" (UniqueName: \"kubernetes.io/projected/aeb4642d-f56a-442e-9c6e-e2354973f7cb-kube-api-access-5xv6z\") pod \"success-200-isvc-5c6fb-predictor-85d68895bd-htcgs\" (UID: \"aeb4642d-f56a-442e-9c6e-e2354973f7cb\") " pod="kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs" Apr 23 18:13:11.222897 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:11.222856 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aeb4642d-f56a-442e-9c6e-e2354973f7cb-proxy-tls\") pod \"success-200-isvc-5c6fb-predictor-85d68895bd-htcgs\" (UID: \"aeb4642d-f56a-442e-9c6e-e2354973f7cb\") " pod="kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs" Apr 23 18:13:11.223106 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:11.222905 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"success-200-isvc-5c6fb-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/aeb4642d-f56a-442e-9c6e-e2354973f7cb-success-200-isvc-5c6fb-kube-rbac-proxy-sar-config\") pod \"success-200-isvc-5c6fb-predictor-85d68895bd-htcgs\" (UID: \"aeb4642d-f56a-442e-9c6e-e2354973f7cb\") " pod="kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs" Apr 23 18:13:11.223106 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:11.223005 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5xv6z\" (UniqueName: \"kubernetes.io/projected/aeb4642d-f56a-442e-9c6e-e2354973f7cb-kube-api-access-5xv6z\") pod \"success-200-isvc-5c6fb-predictor-85d68895bd-htcgs\" (UID: \"aeb4642d-f56a-442e-9c6e-e2354973f7cb\") " pod="kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs" Apr 23 18:13:11.223610 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:11.223576 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"success-200-isvc-5c6fb-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/aeb4642d-f56a-442e-9c6e-e2354973f7cb-success-200-isvc-5c6fb-kube-rbac-proxy-sar-config\") pod \"success-200-isvc-5c6fb-predictor-85d68895bd-htcgs\" (UID: \"aeb4642d-f56a-442e-9c6e-e2354973f7cb\") " pod="kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs" Apr 23 18:13:11.225787 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:11.225713 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aeb4642d-f56a-442e-9c6e-e2354973f7cb-proxy-tls\") pod \"success-200-isvc-5c6fb-predictor-85d68895bd-htcgs\" (UID: \"aeb4642d-f56a-442e-9c6e-e2354973f7cb\") " pod="kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs" Apr 23 18:13:11.233301 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:11.233275 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xv6z\" (UniqueName: \"kubernetes.io/projected/aeb4642d-f56a-442e-9c6e-e2354973f7cb-kube-api-access-5xv6z\") pod \"success-200-isvc-5c6fb-predictor-85d68895bd-htcgs\" (UID: \"aeb4642d-f56a-442e-9c6e-e2354973f7cb\") " pod="kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs" Apr 23 18:13:11.319189 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:11.319150 2565 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs" Apr 23 18:13:11.452520 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:11.452491 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs"] Apr 23 18:13:11.454868 ip-10-0-131-177 kubenswrapper[2565]: W0423 18:13:11.454841 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaeb4642d_f56a_442e_9c6e_e2354973f7cb.slice/crio-75866fc0480b9bb3a82ff8129164def5a43f3e8d3633f5bd8051d74e103b4dd0 WatchSource:0}: Error finding container 75866fc0480b9bb3a82ff8129164def5a43f3e8d3633f5bd8051d74e103b4dd0: Status 404 returned error can't find the container with id 75866fc0480b9bb3a82ff8129164def5a43f3e8d3633f5bd8051d74e103b4dd0 Apr 23 18:13:11.705595 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:11.705557 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs" event={"ID":"aeb4642d-f56a-442e-9c6e-e2354973f7cb","Type":"ContainerStarted","Data":"2c1a6112e2f7c64458260cff05ee270a18d202a69d04850dfbeee935cb63c0c0"} Apr 23 18:13:11.705595 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:11.705600 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs" event={"ID":"aeb4642d-f56a-442e-9c6e-e2354973f7cb","Type":"ContainerStarted","Data":"e0774fc4fadd9ce48be594ad65455c9d233c9d3758350de70fa21d884527820f"} Apr 23 18:13:11.706115 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:11.705615 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs" event={"ID":"aeb4642d-f56a-442e-9c6e-e2354973f7cb","Type":"ContainerStarted","Data":"75866fc0480b9bb3a82ff8129164def5a43f3e8d3633f5bd8051d74e103b4dd0"} Apr 23 18:13:11.706115 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:11.705771 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs" Apr 23 18:13:11.706115 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:11.705810 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs" Apr 23 18:13:11.707528 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:11.707493 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs" podUID="aeb4642d-f56a-442e-9c6e-e2354973f7cb" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.28:8080: connect: connection refused" Apr 23 18:13:11.707528 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:11.707514 2565 generic.go:358] "Generic (PLEG): container finished" podID="498d0cae-4e8f-4ad9-9039-ff9d5f34e74a" containerID="db0b195f27aa6075f0a6249cf5cc631538dc33c30aeafd96639c4f75108294e3" exitCode=2 Apr 23 18:13:11.707730 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:11.707550 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8" event={"ID":"498d0cae-4e8f-4ad9-9039-ff9d5f34e74a","Type":"ContainerDied","Data":"db0b195f27aa6075f0a6249cf5cc631538dc33c30aeafd96639c4f75108294e3"} Apr 23 18:13:11.724915 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:11.724861 2565 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs" podStartSLOduration=1.7248468799999999 podStartE2EDuration="1.72484688s" podCreationTimestamp="2026-04-23 18:13:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:13:11.7240852 +0000 UTC m=+1255.511824471" watchObservedRunningTime="2026-04-23 18:13:11.72484688 +0000 UTC m=+1255.512586139" Apr 23 18:13:12.710856 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:12.710808 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs" podUID="aeb4642d-f56a-442e-9c6e-e2354973f7cb" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.28:8080: connect: connection refused" Apr 23 18:13:13.477662 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:13.477621 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8" podUID="498d0cae-4e8f-4ad9-9039-ff9d5f34e74a" containerName="kube-rbac-proxy" probeResult="failure" output="Get \"https://10.134.0.26:8643/healthz\": dial tcp 10.134.0.26:8643: connect: connection refused" Apr 23 18:13:14.319279 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:14.319254 2565 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8" Apr 23 18:13:14.454585 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:14.454497 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"success-200-isvc-226e9-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/498d0cae-4e8f-4ad9-9039-ff9d5f34e74a-success-200-isvc-226e9-kube-rbac-proxy-sar-config\") pod \"498d0cae-4e8f-4ad9-9039-ff9d5f34e74a\" (UID: \"498d0cae-4e8f-4ad9-9039-ff9d5f34e74a\") " Apr 23 18:13:14.454733 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:14.454599 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pdx7\" (UniqueName: \"kubernetes.io/projected/498d0cae-4e8f-4ad9-9039-ff9d5f34e74a-kube-api-access-7pdx7\") pod \"498d0cae-4e8f-4ad9-9039-ff9d5f34e74a\" (UID: \"498d0cae-4e8f-4ad9-9039-ff9d5f34e74a\") " Apr 23 18:13:14.454733 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:14.454652 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/498d0cae-4e8f-4ad9-9039-ff9d5f34e74a-proxy-tls\") pod \"498d0cae-4e8f-4ad9-9039-ff9d5f34e74a\" (UID: \"498d0cae-4e8f-4ad9-9039-ff9d5f34e74a\") " Apr 23 18:13:14.455026 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:14.454759 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/498d0cae-4e8f-4ad9-9039-ff9d5f34e74a-success-200-isvc-226e9-kube-rbac-proxy-sar-config" (OuterVolumeSpecName: "success-200-isvc-226e9-kube-rbac-proxy-sar-config") pod "498d0cae-4e8f-4ad9-9039-ff9d5f34e74a" (UID: "498d0cae-4e8f-4ad9-9039-ff9d5f34e74a"). InnerVolumeSpecName "success-200-isvc-226e9-kube-rbac-proxy-sar-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:13:14.455026 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:14.454943 2565 reconciler_common.go:299] "Volume detached for volume \"success-200-isvc-226e9-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/498d0cae-4e8f-4ad9-9039-ff9d5f34e74a-success-200-isvc-226e9-kube-rbac-proxy-sar-config\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\"" Apr 23 18:13:14.456693 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:14.456672 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/498d0cae-4e8f-4ad9-9039-ff9d5f34e74a-kube-api-access-7pdx7" (OuterVolumeSpecName: "kube-api-access-7pdx7") pod "498d0cae-4e8f-4ad9-9039-ff9d5f34e74a" (UID: "498d0cae-4e8f-4ad9-9039-ff9d5f34e74a"). InnerVolumeSpecName "kube-api-access-7pdx7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 18:13:14.456693 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:14.456686 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498d0cae-4e8f-4ad9-9039-ff9d5f34e74a-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "498d0cae-4e8f-4ad9-9039-ff9d5f34e74a" (UID: "498d0cae-4e8f-4ad9-9039-ff9d5f34e74a"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 18:13:14.556066 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:14.556025 2565 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7pdx7\" (UniqueName: \"kubernetes.io/projected/498d0cae-4e8f-4ad9-9039-ff9d5f34e74a-kube-api-access-7pdx7\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\"" Apr 23 18:13:14.556066 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:14.556062 2565 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/498d0cae-4e8f-4ad9-9039-ff9d5f34e74a-proxy-tls\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\"" Apr 23 18:13:14.717816 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:14.717713 2565 generic.go:358] "Generic (PLEG): container finished" podID="498d0cae-4e8f-4ad9-9039-ff9d5f34e74a" containerID="41ca5767376b082de0048065be00cc11db0b782ed530fd2ac8c2c1215a32785e" exitCode=0 Apr 23 18:13:14.717816 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:14.717805 2565 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8"
Apr 23 18:13:14.717816 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:14.717800 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8" event={"ID":"498d0cae-4e8f-4ad9-9039-ff9d5f34e74a","Type":"ContainerDied","Data":"41ca5767376b082de0048065be00cc11db0b782ed530fd2ac8c2c1215a32785e"}
Apr 23 18:13:14.718153 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:14.717849 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8" event={"ID":"498d0cae-4e8f-4ad9-9039-ff9d5f34e74a","Type":"ContainerDied","Data":"ff4fccc1317723c58afef73eaf1cf0a3d8d64aa4f232643c0ec13be52d1ed4e5"}
Apr 23 18:13:14.718153 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:14.717869 2565 scope.go:117] "RemoveContainer" containerID="db0b195f27aa6075f0a6249cf5cc631538dc33c30aeafd96639c4f75108294e3"
Apr 23 18:13:14.725968 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:14.725909 2565 scope.go:117] "RemoveContainer" containerID="41ca5767376b082de0048065be00cc11db0b782ed530fd2ac8c2c1215a32785e"
Apr 23 18:13:14.733784 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:14.733761 2565 scope.go:117] "RemoveContainer" containerID="db0b195f27aa6075f0a6249cf5cc631538dc33c30aeafd96639c4f75108294e3"
Apr 23 18:13:14.734052 ip-10-0-131-177 kubenswrapper[2565]: E0423 18:13:14.734033 2565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db0b195f27aa6075f0a6249cf5cc631538dc33c30aeafd96639c4f75108294e3\": container with ID starting with db0b195f27aa6075f0a6249cf5cc631538dc33c30aeafd96639c4f75108294e3 not found: ID does not exist" containerID="db0b195f27aa6075f0a6249cf5cc631538dc33c30aeafd96639c4f75108294e3"
Apr 23 18:13:14.734128 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:14.734061 2565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db0b195f27aa6075f0a6249cf5cc631538dc33c30aeafd96639c4f75108294e3"} err="failed to get container status \"db0b195f27aa6075f0a6249cf5cc631538dc33c30aeafd96639c4f75108294e3\": rpc error: code = NotFound desc = could not find container \"db0b195f27aa6075f0a6249cf5cc631538dc33c30aeafd96639c4f75108294e3\": container with ID starting with db0b195f27aa6075f0a6249cf5cc631538dc33c30aeafd96639c4f75108294e3 not found: ID does not exist"
Apr 23 18:13:14.734128 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:14.734080 2565 scope.go:117] "RemoveContainer" containerID="41ca5767376b082de0048065be00cc11db0b782ed530fd2ac8c2c1215a32785e"
Apr 23 18:13:14.734325 ip-10-0-131-177 kubenswrapper[2565]: E0423 18:13:14.734308 2565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41ca5767376b082de0048065be00cc11db0b782ed530fd2ac8c2c1215a32785e\": container with ID starting with 41ca5767376b082de0048065be00cc11db0b782ed530fd2ac8c2c1215a32785e not found: ID does not exist" containerID="41ca5767376b082de0048065be00cc11db0b782ed530fd2ac8c2c1215a32785e"
Apr 23 18:13:14.734366 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:14.734331 2565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41ca5767376b082de0048065be00cc11db0b782ed530fd2ac8c2c1215a32785e"} err="failed to get container status \"41ca5767376b082de0048065be00cc11db0b782ed530fd2ac8c2c1215a32785e\": rpc error: code = NotFound desc = could not find container \"41ca5767376b082de0048065be00cc11db0b782ed530fd2ac8c2c1215a32785e\": container with ID starting with 41ca5767376b082de0048065be00cc11db0b782ed530fd2ac8c2c1215a32785e not found: ID does not exist"
Apr 23 18:13:14.741598 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:14.741576 2565 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8"]
Apr 23 18:13:14.745646 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:14.745620 2565 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-226e9-predictor-5c7d88cfd8-hpjr8"]
Apr 23 18:13:14.911559 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:14.911520 2565 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="498d0cae-4e8f-4ad9-9039-ff9d5f34e74a" path="/var/lib/kubelet/pods/498d0cae-4e8f-4ad9-9039-ff9d5f34e74a/volumes"
Apr 23 18:13:17.715592 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:17.715563 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs"
Apr 23 18:13:17.716173 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:17.716147 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs" podUID="aeb4642d-f56a-442e-9c6e-e2354973f7cb" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.28:8080: connect: connection refused"
Apr 23 18:13:20.628922 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:20.628872 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89" podUID="ea5c6196-c9a6-4e90-92bd-0fc565afcb05" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.27:8080: connect: connection refused"
Apr 23 18:13:27.716537 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:27.716494 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs" podUID="aeb4642d-f56a-442e-9c6e-e2354973f7cb" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.28:8080: connect: connection refused"
Apr 23 18:13:30.629201 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:30.629173 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89"
Apr 23 18:13:37.716352 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:37.716316 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs" podUID="aeb4642d-f56a-442e-9c6e-e2354973f7cb" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.28:8080: connect: connection refused"
Apr 23 18:13:47.717042 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:47.716984 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs" podUID="aeb4642d-f56a-442e-9c6e-e2354973f7cb" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.28:8080: connect: connection refused"
Apr 23 18:13:57.717987 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:13:57.717538 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs"
Apr 23 18:14:00.944688 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:00.944646 2565 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89"]
Apr 23 18:14:00.945131 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:00.945045 2565 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89" podUID="ea5c6196-c9a6-4e90-92bd-0fc565afcb05" containerName="kserve-container" containerID="cri-o://0b61d55a0889f1634e4fc7940bf4c0d6079d529322d0205181b0f32bb46b192f" gracePeriod=30
Apr 23 18:14:00.945336 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:00.945291 2565 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89" podUID="ea5c6196-c9a6-4e90-92bd-0fc565afcb05" containerName="kube-rbac-proxy" containerID="cri-o://d26db36aeb7ab2013937a470964f56485d8faee25f303ff1135f75dd1496ad41" gracePeriod=30
Apr 23 18:14:00.968399 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:00.968367 2565 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65"]
Apr 23 18:14:00.968694 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:00.968682 2565 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="498d0cae-4e8f-4ad9-9039-ff9d5f34e74a" containerName="kube-rbac-proxy"
Apr 23 18:14:00.968778 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:00.968696 2565 state_mem.go:107] "Deleted CPUSet assignment" podUID="498d0cae-4e8f-4ad9-9039-ff9d5f34e74a" containerName="kube-rbac-proxy"
Apr 23 18:14:00.968778 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:00.968713 2565 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="498d0cae-4e8f-4ad9-9039-ff9d5f34e74a" containerName="kserve-container"
Apr 23 18:14:00.968778 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:00.968718 2565 state_mem.go:107] "Deleted CPUSet assignment" podUID="498d0cae-4e8f-4ad9-9039-ff9d5f34e74a" containerName="kserve-container"
Apr 23 18:14:00.968778 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:00.968773 2565 memory_manager.go:356] "RemoveStaleState removing state" podUID="498d0cae-4e8f-4ad9-9039-ff9d5f34e74a" containerName="kube-rbac-proxy"
Apr 23 18:14:00.968907 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:00.968783 2565 memory_manager.go:356] "RemoveStaleState removing state" podUID="498d0cae-4e8f-4ad9-9039-ff9d5f34e74a" containerName="kserve-container"
Apr 23 18:14:00.971833 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:00.971816 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65"
Apr 23 18:14:00.974353 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:00.974328 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"success-200-isvc-e1fac-kube-rbac-proxy-sar-config\""
Apr 23 18:14:00.976194 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:00.976173 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"success-200-isvc-e1fac-predictor-serving-cert\""
Apr 23 18:14:00.981579 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:00.981553 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65"]
Apr 23 18:14:01.032515 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:01.032479 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3f993db0-82b6-4d8a-ac9d-056859ab0898-proxy-tls\") pod \"success-200-isvc-e1fac-predictor-b698b876d-t4s65\" (UID: \"3f993db0-82b6-4d8a-ac9d-056859ab0898\") " pod="kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65"
Apr 23 18:14:01.032697 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:01.032594 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsc6q\" (UniqueName: \"kubernetes.io/projected/3f993db0-82b6-4d8a-ac9d-056859ab0898-kube-api-access-hsc6q\") pod \"success-200-isvc-e1fac-predictor-b698b876d-t4s65\" (UID: \"3f993db0-82b6-4d8a-ac9d-056859ab0898\") " pod="kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65"
Apr 23 18:14:01.032835 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:01.032812 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"success-200-isvc-e1fac-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/3f993db0-82b6-4d8a-ac9d-056859ab0898-success-200-isvc-e1fac-kube-rbac-proxy-sar-config\") pod \"success-200-isvc-e1fac-predictor-b698b876d-t4s65\" (UID: \"3f993db0-82b6-4d8a-ac9d-056859ab0898\") " pod="kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65"
Apr 23 18:14:01.133860 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:01.133824 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hsc6q\" (UniqueName: \"kubernetes.io/projected/3f993db0-82b6-4d8a-ac9d-056859ab0898-kube-api-access-hsc6q\") pod \"success-200-isvc-e1fac-predictor-b698b876d-t4s65\" (UID: \"3f993db0-82b6-4d8a-ac9d-056859ab0898\") " pod="kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65"
Apr 23 18:14:01.134040 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:01.133893 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"success-200-isvc-e1fac-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/3f993db0-82b6-4d8a-ac9d-056859ab0898-success-200-isvc-e1fac-kube-rbac-proxy-sar-config\") pod \"success-200-isvc-e1fac-predictor-b698b876d-t4s65\" (UID: \"3f993db0-82b6-4d8a-ac9d-056859ab0898\") " pod="kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65"
Apr 23 18:14:01.134040 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:01.133943 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3f993db0-82b6-4d8a-ac9d-056859ab0898-proxy-tls\") pod \"success-200-isvc-e1fac-predictor-b698b876d-t4s65\" (UID: \"3f993db0-82b6-4d8a-ac9d-056859ab0898\") " pod="kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65"
Apr 23 18:14:01.134584 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:01.134557 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"success-200-isvc-e1fac-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/3f993db0-82b6-4d8a-ac9d-056859ab0898-success-200-isvc-e1fac-kube-rbac-proxy-sar-config\") pod \"success-200-isvc-e1fac-predictor-b698b876d-t4s65\" (UID: \"3f993db0-82b6-4d8a-ac9d-056859ab0898\") " pod="kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65"
Apr 23 18:14:01.136393 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:01.136369 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3f993db0-82b6-4d8a-ac9d-056859ab0898-proxy-tls\") pod \"success-200-isvc-e1fac-predictor-b698b876d-t4s65\" (UID: \"3f993db0-82b6-4d8a-ac9d-056859ab0898\") " pod="kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65"
Apr 23 18:14:01.142774 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:01.142753 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsc6q\" (UniqueName: \"kubernetes.io/projected/3f993db0-82b6-4d8a-ac9d-056859ab0898-kube-api-access-hsc6q\") pod \"success-200-isvc-e1fac-predictor-b698b876d-t4s65\" (UID: \"3f993db0-82b6-4d8a-ac9d-056859ab0898\") " pod="kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65"
Apr 23 18:14:01.283942 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:01.283898 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65"
Apr 23 18:14:01.417693 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:01.417655 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65"]
Apr 23 18:14:01.421126 ip-10-0-131-177 kubenswrapper[2565]: W0423 18:14:01.421097 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f993db0_82b6_4d8a_ac9d_056859ab0898.slice/crio-9dbb5c993829a7e6dad0d1c5297fcb91370f32cf5cc4e65ed593093eb7c77945 WatchSource:0}: Error finding container 9dbb5c993829a7e6dad0d1c5297fcb91370f32cf5cc4e65ed593093eb7c77945: Status 404 returned error can't find the container with id 9dbb5c993829a7e6dad0d1c5297fcb91370f32cf5cc4e65ed593093eb7c77945
Apr 23 18:14:01.869511 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:01.869479 2565 generic.go:358] "Generic (PLEG): container finished" podID="ea5c6196-c9a6-4e90-92bd-0fc565afcb05" containerID="d26db36aeb7ab2013937a470964f56485d8faee25f303ff1135f75dd1496ad41" exitCode=2
Apr 23 18:14:01.869692 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:01.869544 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89" event={"ID":"ea5c6196-c9a6-4e90-92bd-0fc565afcb05","Type":"ContainerDied","Data":"d26db36aeb7ab2013937a470964f56485d8faee25f303ff1135f75dd1496ad41"}
Apr 23 18:14:01.871182 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:01.871159 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65" event={"ID":"3f993db0-82b6-4d8a-ac9d-056859ab0898","Type":"ContainerStarted","Data":"fa8a38855823703c93a9de5c50cf9988be337a73f248999b149bc138ffa98be9"}
Apr 23 18:14:01.871301 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:01.871186 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65" event={"ID":"3f993db0-82b6-4d8a-ac9d-056859ab0898","Type":"ContainerStarted","Data":"db22da9201aaca8481a3cd9bcb589158d89ec1838b44e8cb0e712f7d179c994f"}
Apr 23 18:14:01.871301 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:01.871196 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65" event={"ID":"3f993db0-82b6-4d8a-ac9d-056859ab0898","Type":"ContainerStarted","Data":"9dbb5c993829a7e6dad0d1c5297fcb91370f32cf5cc4e65ed593093eb7c77945"}
Apr 23 18:14:01.871301 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:01.871295 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65"
Apr 23 18:14:01.892155 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:01.892118 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65" podStartSLOduration=1.892106111 podStartE2EDuration="1.892106111s" podCreationTimestamp="2026-04-23 18:14:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:14:01.889370313 +0000 UTC m=+1305.677109572" watchObservedRunningTime="2026-04-23 18:14:01.892106111 +0000 UTC m=+1305.679845428"
Apr 23 18:14:02.873966 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:02.873924 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65"
Apr 23 18:14:02.875216 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:02.875186 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65" podUID="3f993db0-82b6-4d8a-ac9d-056859ab0898" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.29:8080: connect: connection refused"
Apr 23 18:14:03.878365 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:03.878338 2565 generic.go:358] "Generic (PLEG): container finished" podID="ea5c6196-c9a6-4e90-92bd-0fc565afcb05" containerID="0b61d55a0889f1634e4fc7940bf4c0d6079d529322d0205181b0f32bb46b192f" exitCode=0
Apr 23 18:14:03.878745 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:03.878412 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89" event={"ID":"ea5c6196-c9a6-4e90-92bd-0fc565afcb05","Type":"ContainerDied","Data":"0b61d55a0889f1634e4fc7940bf4c0d6079d529322d0205181b0f32bb46b192f"}
Apr 23 18:14:03.878745 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:03.878447 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89" event={"ID":"ea5c6196-c9a6-4e90-92bd-0fc565afcb05","Type":"ContainerDied","Data":"6afa1bd5fd62c81c20a684f7dff329f0a54d20f137fc4fa29792d122676d1677"}
Apr 23 18:14:03.878745 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:03.878460 2565 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6afa1bd5fd62c81c20a684f7dff329f0a54d20f137fc4fa29792d122676d1677"
Apr 23 18:14:03.878913 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:03.878775 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65" podUID="3f993db0-82b6-4d8a-ac9d-056859ab0898" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.29:8080: connect: connection refused"
Apr 23 18:14:03.889263 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:03.889222 2565 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89"
Apr 23 18:14:03.958150 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:03.958126 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsk82\" (UniqueName: \"kubernetes.io/projected/ea5c6196-c9a6-4e90-92bd-0fc565afcb05-kube-api-access-rsk82\") pod \"ea5c6196-c9a6-4e90-92bd-0fc565afcb05\" (UID: \"ea5c6196-c9a6-4e90-92bd-0fc565afcb05\") "
Apr 23 18:14:03.958255 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:03.958173 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"success-200-isvc-d864a-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/ea5c6196-c9a6-4e90-92bd-0fc565afcb05-success-200-isvc-d864a-kube-rbac-proxy-sar-config\") pod \"ea5c6196-c9a6-4e90-92bd-0fc565afcb05\" (UID: \"ea5c6196-c9a6-4e90-92bd-0fc565afcb05\") "
Apr 23 18:14:03.958255 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:03.958201 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ea5c6196-c9a6-4e90-92bd-0fc565afcb05-proxy-tls\") pod \"ea5c6196-c9a6-4e90-92bd-0fc565afcb05\" (UID: \"ea5c6196-c9a6-4e90-92bd-0fc565afcb05\") "
Apr 23 18:14:03.958550 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:03.958518 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea5c6196-c9a6-4e90-92bd-0fc565afcb05-success-200-isvc-d864a-kube-rbac-proxy-sar-config" (OuterVolumeSpecName: "success-200-isvc-d864a-kube-rbac-proxy-sar-config") pod "ea5c6196-c9a6-4e90-92bd-0fc565afcb05" (UID: "ea5c6196-c9a6-4e90-92bd-0fc565afcb05"). InnerVolumeSpecName "success-200-isvc-d864a-kube-rbac-proxy-sar-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 18:14:03.960241 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:03.960216 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea5c6196-c9a6-4e90-92bd-0fc565afcb05-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "ea5c6196-c9a6-4e90-92bd-0fc565afcb05" (UID: "ea5c6196-c9a6-4e90-92bd-0fc565afcb05"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 18:14:03.960241 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:03.960227 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea5c6196-c9a6-4e90-92bd-0fc565afcb05-kube-api-access-rsk82" (OuterVolumeSpecName: "kube-api-access-rsk82") pod "ea5c6196-c9a6-4e90-92bd-0fc565afcb05" (UID: "ea5c6196-c9a6-4e90-92bd-0fc565afcb05"). InnerVolumeSpecName "kube-api-access-rsk82". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 23 18:14:04.059571 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:04.059543 2565 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rsk82\" (UniqueName: \"kubernetes.io/projected/ea5c6196-c9a6-4e90-92bd-0fc565afcb05-kube-api-access-rsk82\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 18:14:04.059571 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:04.059571 2565 reconciler_common.go:299] "Volume detached for volume \"success-200-isvc-d864a-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/ea5c6196-c9a6-4e90-92bd-0fc565afcb05-success-200-isvc-d864a-kube-rbac-proxy-sar-config\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 18:14:04.059756 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:04.059604 2565 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ea5c6196-c9a6-4e90-92bd-0fc565afcb05-proxy-tls\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 18:14:04.881597 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:04.881564 2565 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89"
Apr 23 18:14:04.904730 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:04.904705 2565 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89"]
Apr 23 18:14:04.911425 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:04.911403 2565 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-d864a-predictor-7d759dc8c4-8pc89"]
Apr 23 18:14:06.911878 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:06.911838 2565 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea5c6196-c9a6-4e90-92bd-0fc565afcb05" path="/var/lib/kubelet/pods/ea5c6196-c9a6-4e90-92bd-0fc565afcb05/volumes"
Apr 23 18:14:08.883274 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:08.883247 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65"
Apr 23 18:14:08.883777 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:08.883753 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65" podUID="3f993db0-82b6-4d8a-ac9d-056859ab0898" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.29:8080: connect: connection refused"
Apr 23 18:14:18.884236 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:18.884189 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65" podUID="3f993db0-82b6-4d8a-ac9d-056859ab0898" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.29:8080: connect: connection refused"
Apr 23 18:14:28.884097 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:28.884014 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65" podUID="3f993db0-82b6-4d8a-ac9d-056859ab0898" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.29:8080: connect: connection refused"
Apr 23 18:14:38.884299 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:38.884255 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65" podUID="3f993db0-82b6-4d8a-ac9d-056859ab0898" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.29:8080: connect: connection refused"
Apr 23 18:14:48.884696 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:14:48.884666 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65"
Apr 23 18:17:16.877096 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:17:16.877067 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_7e3c596a27faede1f97b6bb0972592f6/kube-rbac-proxy-crio/4.log"
Apr 23 18:17:16.878090 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:17:16.878073 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_7e3c596a27faede1f97b6bb0972592f6/kube-rbac-proxy-crio/4.log"
Apr 23 18:19:16.914713 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:19:16.914643 2565 scope.go:117] "RemoveContainer" containerID="0b61d55a0889f1634e4fc7940bf4c0d6079d529322d0205181b0f32bb46b192f"
Apr 23 18:19:16.922244 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:19:16.922216 2565 scope.go:117] "RemoveContainer" containerID="d26db36aeb7ab2013937a470964f56485d8faee25f303ff1135f75dd1496ad41"
Apr 23 18:22:16.897504 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:16.897474 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_7e3c596a27faede1f97b6bb0972592f6/kube-rbac-proxy-crio/4.log"
Apr 23 18:22:16.899255 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:16.899234 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_7e3c596a27faede1f97b6bb0972592f6/kube-rbac-proxy-crio/4.log"
Apr 23 18:22:25.967390 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:25.967352 2565 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs"]
Apr 23 18:22:25.967990 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:25.967728 2565 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs" podUID="aeb4642d-f56a-442e-9c6e-e2354973f7cb" containerName="kserve-container" containerID="cri-o://e0774fc4fadd9ce48be594ad65455c9d233c9d3758350de70fa21d884527820f" gracePeriod=30
Apr 23 18:22:25.967990 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:25.967800 2565 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs" podUID="aeb4642d-f56a-442e-9c6e-e2354973f7cb" containerName="kube-rbac-proxy" containerID="cri-o://2c1a6112e2f7c64458260cff05ee270a18d202a69d04850dfbeee935cb63c0c0" gracePeriod=30
Apr 23 18:22:26.055260 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:26.055225 2565 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq"]
Apr 23 18:22:26.055560 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:26.055549 2565 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ea5c6196-c9a6-4e90-92bd-0fc565afcb05" containerName="kube-rbac-proxy"
Apr 23 18:22:26.055605 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:26.055562 2565 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea5c6196-c9a6-4e90-92bd-0fc565afcb05" containerName="kube-rbac-proxy"
Apr 23 18:22:26.055605 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:26.055573 2565 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ea5c6196-c9a6-4e90-92bd-0fc565afcb05" containerName="kserve-container"
Apr 23 18:22:26.055605 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:26.055579 2565 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea5c6196-c9a6-4e90-92bd-0fc565afcb05" containerName="kserve-container"
Apr 23 18:22:26.055704 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:26.055636 2565 memory_manager.go:356] "RemoveStaleState removing state" podUID="ea5c6196-c9a6-4e90-92bd-0fc565afcb05" containerName="kube-rbac-proxy"
Apr 23 18:22:26.055704 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:26.055645 2565 memory_manager.go:356] "RemoveStaleState removing state" podUID="ea5c6196-c9a6-4e90-92bd-0fc565afcb05" containerName="kserve-container"
Apr 23 18:22:26.058813 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:26.058787 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq"
Apr 23 18:22:26.061302 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:26.061278 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"success-200-isvc-a93f3-predictor-serving-cert\""
Apr 23 18:22:26.061428 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:26.061340 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"success-200-isvc-a93f3-kube-rbac-proxy-sar-config\""
Apr 23 18:22:26.077500 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:26.077468 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq"]
Apr 23 18:22:26.115414 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:26.115386 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f12765e8-eacd-45c5-99b4-6c3db88f3198-proxy-tls\") pod \"success-200-isvc-a93f3-predictor-5c444dc64-lkwsq\" (UID: \"f12765e8-eacd-45c5-99b4-6c3db88f3198\") " pod="kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq"
Apr 23 18:22:26.115589 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:26.115442 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"success-200-isvc-a93f3-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/f12765e8-eacd-45c5-99b4-6c3db88f3198-success-200-isvc-a93f3-kube-rbac-proxy-sar-config\") pod \"success-200-isvc-a93f3-predictor-5c444dc64-lkwsq\" (UID: \"f12765e8-eacd-45c5-99b4-6c3db88f3198\") " pod="kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq"
Apr 23 18:22:26.115589 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:26.115501 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqx5k\" (UniqueName: \"kubernetes.io/projected/f12765e8-eacd-45c5-99b4-6c3db88f3198-kube-api-access-sqx5k\") pod \"success-200-isvc-a93f3-predictor-5c444dc64-lkwsq\" (UID: \"f12765e8-eacd-45c5-99b4-6c3db88f3198\") " pod="kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq"
Apr 23 18:22:26.216295 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:26.216257 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f12765e8-eacd-45c5-99b4-6c3db88f3198-proxy-tls\") pod \"success-200-isvc-a93f3-predictor-5c444dc64-lkwsq\" (UID: \"f12765e8-eacd-45c5-99b4-6c3db88f3198\") " pod="kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq"
Apr 23 18:22:26.216482 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:26.216333 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"success-200-isvc-a93f3-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/f12765e8-eacd-45c5-99b4-6c3db88f3198-success-200-isvc-a93f3-kube-rbac-proxy-sar-config\") pod \"success-200-isvc-a93f3-predictor-5c444dc64-lkwsq\" (UID: \"f12765e8-eacd-45c5-99b4-6c3db88f3198\") " pod="kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq"
Apr 23 18:22:26.216482 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:26.216379 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sqx5k\" (UniqueName: \"kubernetes.io/projected/f12765e8-eacd-45c5-99b4-6c3db88f3198-kube-api-access-sqx5k\") pod \"success-200-isvc-a93f3-predictor-5c444dc64-lkwsq\" (UID: \"f12765e8-eacd-45c5-99b4-6c3db88f3198\") " pod="kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq"
Apr 23 18:22:26.216482 ip-10-0-131-177 kubenswrapper[2565]: E0423 18:22:26.216411 2565 secret.go:189] Couldn't get secret kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-serving-cert: secret "success-200-isvc-a93f3-predictor-serving-cert" not found
Apr 23 18:22:26.216482 ip-10-0-131-177 kubenswrapper[2565]: E0423 18:22:26.216480 2565 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f12765e8-eacd-45c5-99b4-6c3db88f3198-proxy-tls podName:f12765e8-eacd-45c5-99b4-6c3db88f3198 nodeName:}" failed. No retries permitted until 2026-04-23 18:22:26.716457568 +0000 UTC m=+1810.504196821 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/f12765e8-eacd-45c5-99b4-6c3db88f3198-proxy-tls") pod "success-200-isvc-a93f3-predictor-5c444dc64-lkwsq" (UID: "f12765e8-eacd-45c5-99b4-6c3db88f3198") : secret "success-200-isvc-a93f3-predictor-serving-cert" not found
Apr 23 18:22:26.217086 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:26.217064 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"success-200-isvc-a93f3-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/f12765e8-eacd-45c5-99b4-6c3db88f3198-success-200-isvc-a93f3-kube-rbac-proxy-sar-config\") pod \"success-200-isvc-a93f3-predictor-5c444dc64-lkwsq\" (UID: \"f12765e8-eacd-45c5-99b4-6c3db88f3198\") " pod="kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq"
Apr 23 18:22:26.225848 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:26.225786 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqx5k\" (UniqueName: \"kubernetes.io/projected/f12765e8-eacd-45c5-99b4-6c3db88f3198-kube-api-access-sqx5k\") pod \"success-200-isvc-a93f3-predictor-5c444dc64-lkwsq\" (UID: \"f12765e8-eacd-45c5-99b4-6c3db88f3198\") " pod="kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq"
Apr 23 18:22:26.499083 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:26.499040 2565 generic.go:358] "Generic (PLEG): container finished" podID="aeb4642d-f56a-442e-9c6e-e2354973f7cb" containerID="2c1a6112e2f7c64458260cff05ee270a18d202a69d04850dfbeee935cb63c0c0" exitCode=2
Apr 23 18:22:26.499225 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:26.499113 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs" event={"ID":"aeb4642d-f56a-442e-9c6e-e2354973f7cb","Type":"ContainerDied","Data":"2c1a6112e2f7c64458260cff05ee270a18d202a69d04850dfbeee935cb63c0c0"}
Apr 23 18:22:26.719101 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:26.719057 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f12765e8-eacd-45c5-99b4-6c3db88f3198-proxy-tls\") pod \"success-200-isvc-a93f3-predictor-5c444dc64-lkwsq\" (UID: \"f12765e8-eacd-45c5-99b4-6c3db88f3198\") " pod="kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq"
Apr 23 18:22:26.721468 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:26.721441 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f12765e8-eacd-45c5-99b4-6c3db88f3198-proxy-tls\") pod \"success-200-isvc-a93f3-predictor-5c444dc64-lkwsq\" (UID: \"f12765e8-eacd-45c5-99b4-6c3db88f3198\") " pod="kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq"
Apr 23 18:22:26.969521 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:26.969432 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq"
Apr 23 18:22:27.089621 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:27.089588 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq"]
Apr 23 18:22:27.093414 ip-10-0-131-177 kubenswrapper[2565]: W0423 18:22:27.093386 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf12765e8_eacd_45c5_99b4_6c3db88f3198.slice/crio-35a4e72a62ea738db6c663fed1325bef266d5d982f0cc2a141225584ebfeb7fa WatchSource:0}: Error finding container 35a4e72a62ea738db6c663fed1325bef266d5d982f0cc2a141225584ebfeb7fa: Status 404 returned error can't find the container with id 35a4e72a62ea738db6c663fed1325bef266d5d982f0cc2a141225584ebfeb7fa
Apr 23 18:22:27.095221 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:27.095204 2565 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 23 18:22:27.504565 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:27.504528 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq" event={"ID":"f12765e8-eacd-45c5-99b4-6c3db88f3198","Type":"ContainerStarted","Data":"e625068db842474bbff0ca3f856ed5fa7a587a3b7cc2b250849ebf04856ef575"}
Apr 23 18:22:27.504565 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:27.504563 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq" event={"ID":"f12765e8-eacd-45c5-99b4-6c3db88f3198","Type":"ContainerStarted","Data":"c6ede4ad1e2d3973239f318a2fb4ecb281bde95d2e39b16b00a5427896269fbc"}
Apr 23 18:22:27.504565 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:27.504572 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq" event={"ID":"f12765e8-eacd-45c5-99b4-6c3db88f3198","Type":"ContainerStarted","Data":"35a4e72a62ea738db6c663fed1325bef266d5d982f0cc2a141225584ebfeb7fa"}
Apr 23 18:22:27.504862 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:27.504634 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq"
Apr 23 18:22:27.525116 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:27.525064 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq" podStartSLOduration=1.52504756 podStartE2EDuration="1.52504756s" podCreationTimestamp="2026-04-23 18:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:22:27.52316888 +0000 UTC m=+1811.310908139" watchObservedRunningTime="2026-04-23 18:22:27.52504756 +0000 UTC m=+1811.312786821"
Apr 23 18:22:27.710916 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:27.710880 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs" podUID="aeb4642d-f56a-442e-9c6e-e2354973f7cb" containerName="kube-rbac-proxy" probeResult="failure" output="Get \"https://10.134.0.28:8643/healthz\": dial tcp 10.134.0.28:8643: connect: connection refused"
Apr 23 18:22:27.716267 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:27.716235 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs" podUID="aeb4642d-f56a-442e-9c6e-e2354973f7cb" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.28:8080: connect: connection refused"
Apr 23 18:22:28.507791 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:28.507758 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq"
Apr 23 18:22:28.509133 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:28.509099 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq" podUID="f12765e8-eacd-45c5-99b4-6c3db88f3198" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.30:8080: connect: connection refused"
Apr 23 18:22:29.317232 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:29.317211 2565 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs"
Apr 23 18:22:29.342767 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:29.342742 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"success-200-isvc-5c6fb-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/aeb4642d-f56a-442e-9c6e-e2354973f7cb-success-200-isvc-5c6fb-kube-rbac-proxy-sar-config\") pod \"aeb4642d-f56a-442e-9c6e-e2354973f7cb\" (UID: \"aeb4642d-f56a-442e-9c6e-e2354973f7cb\") "
Apr 23 18:22:29.342923 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:29.342780 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xv6z\" (UniqueName: \"kubernetes.io/projected/aeb4642d-f56a-442e-9c6e-e2354973f7cb-kube-api-access-5xv6z\") pod \"aeb4642d-f56a-442e-9c6e-e2354973f7cb\" (UID: \"aeb4642d-f56a-442e-9c6e-e2354973f7cb\") "
Apr 23 18:22:29.342923 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:29.342825 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aeb4642d-f56a-442e-9c6e-e2354973f7cb-proxy-tls\") pod \"aeb4642d-f56a-442e-9c6e-e2354973f7cb\" (UID: \"aeb4642d-f56a-442e-9c6e-e2354973f7cb\") "
Apr 23 18:22:29.343137 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:29.343115 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeb4642d-f56a-442e-9c6e-e2354973f7cb-success-200-isvc-5c6fb-kube-rbac-proxy-sar-config" (OuterVolumeSpecName: "success-200-isvc-5c6fb-kube-rbac-proxy-sar-config") pod "aeb4642d-f56a-442e-9c6e-e2354973f7cb" (UID: "aeb4642d-f56a-442e-9c6e-e2354973f7cb"). InnerVolumeSpecName "success-200-isvc-5c6fb-kube-rbac-proxy-sar-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 18:22:29.345018 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:29.344981 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeb4642d-f56a-442e-9c6e-e2354973f7cb-kube-api-access-5xv6z" (OuterVolumeSpecName: "kube-api-access-5xv6z") pod "aeb4642d-f56a-442e-9c6e-e2354973f7cb" (UID: "aeb4642d-f56a-442e-9c6e-e2354973f7cb"). InnerVolumeSpecName "kube-api-access-5xv6z". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 23 18:22:29.345256 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:29.345229 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeb4642d-f56a-442e-9c6e-e2354973f7cb-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "aeb4642d-f56a-442e-9c6e-e2354973f7cb" (UID: "aeb4642d-f56a-442e-9c6e-e2354973f7cb"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 18:22:29.444011 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:29.443900 2565 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aeb4642d-f56a-442e-9c6e-e2354973f7cb-proxy-tls\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 18:22:29.444011 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:29.443941 2565 reconciler_common.go:299] "Volume detached for volume \"success-200-isvc-5c6fb-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/aeb4642d-f56a-442e-9c6e-e2354973f7cb-success-200-isvc-5c6fb-kube-rbac-proxy-sar-config\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 18:22:29.444011 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:29.443974 2565 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5xv6z\" (UniqueName: \"kubernetes.io/projected/aeb4642d-f56a-442e-9c6e-e2354973f7cb-kube-api-access-5xv6z\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 18:22:29.512279 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:29.512243 2565 generic.go:358] "Generic (PLEG): container finished" podID="aeb4642d-f56a-442e-9c6e-e2354973f7cb" containerID="e0774fc4fadd9ce48be594ad65455c9d233c9d3758350de70fa21d884527820f" exitCode=0
Apr 23 18:22:29.512700 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:29.512327 2565 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs"
Apr 23 18:22:29.512700 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:29.512330 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs" event={"ID":"aeb4642d-f56a-442e-9c6e-e2354973f7cb","Type":"ContainerDied","Data":"e0774fc4fadd9ce48be594ad65455c9d233c9d3758350de70fa21d884527820f"}
Apr 23 18:22:29.512700 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:29.512376 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs" event={"ID":"aeb4642d-f56a-442e-9c6e-e2354973f7cb","Type":"ContainerDied","Data":"75866fc0480b9bb3a82ff8129164def5a43f3e8d3633f5bd8051d74e103b4dd0"}
Apr 23 18:22:29.512700 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:29.512402 2565 scope.go:117] "RemoveContainer" containerID="2c1a6112e2f7c64458260cff05ee270a18d202a69d04850dfbeee935cb63c0c0"
Apr 23 18:22:29.513044 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:29.513019 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq" podUID="f12765e8-eacd-45c5-99b4-6c3db88f3198" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.30:8080: connect: connection refused"
Apr 23 18:22:29.520403 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:29.520386 2565 scope.go:117] "RemoveContainer" containerID="e0774fc4fadd9ce48be594ad65455c9d233c9d3758350de70fa21d884527820f"
Apr 23 18:22:29.529837 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:29.529813 2565 scope.go:117] "RemoveContainer" containerID="2c1a6112e2f7c64458260cff05ee270a18d202a69d04850dfbeee935cb63c0c0"
Apr 23 18:22:29.530339 ip-10-0-131-177 kubenswrapper[2565]: E0423 18:22:29.530319 2565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c1a6112e2f7c64458260cff05ee270a18d202a69d04850dfbeee935cb63c0c0\": container with ID starting with 2c1a6112e2f7c64458260cff05ee270a18d202a69d04850dfbeee935cb63c0c0 not found: ID does not exist" containerID="2c1a6112e2f7c64458260cff05ee270a18d202a69d04850dfbeee935cb63c0c0"
Apr 23 18:22:29.530439 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:29.530347 2565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c1a6112e2f7c64458260cff05ee270a18d202a69d04850dfbeee935cb63c0c0"} err="failed to get container status \"2c1a6112e2f7c64458260cff05ee270a18d202a69d04850dfbeee935cb63c0c0\": rpc error: code = NotFound desc = could not find container \"2c1a6112e2f7c64458260cff05ee270a18d202a69d04850dfbeee935cb63c0c0\": container with ID starting with 2c1a6112e2f7c64458260cff05ee270a18d202a69d04850dfbeee935cb63c0c0 not found: ID does not exist"
Apr 23 18:22:29.530439 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:29.530366 2565 scope.go:117] "RemoveContainer" containerID="e0774fc4fadd9ce48be594ad65455c9d233c9d3758350de70fa21d884527820f"
Apr 23 18:22:29.530630 ip-10-0-131-177 kubenswrapper[2565]: E0423 18:22:29.530612 2565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0774fc4fadd9ce48be594ad65455c9d233c9d3758350de70fa21d884527820f\": container with ID starting with e0774fc4fadd9ce48be594ad65455c9d233c9d3758350de70fa21d884527820f not found: ID does not exist" containerID="e0774fc4fadd9ce48be594ad65455c9d233c9d3758350de70fa21d884527820f"
Apr 23 18:22:29.530686 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:29.530638 2565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0774fc4fadd9ce48be594ad65455c9d233c9d3758350de70fa21d884527820f"} err="failed to get container status \"e0774fc4fadd9ce48be594ad65455c9d233c9d3758350de70fa21d884527820f\": rpc error: code = NotFound desc = could not find container \"e0774fc4fadd9ce48be594ad65455c9d233c9d3758350de70fa21d884527820f\": container with ID starting with e0774fc4fadd9ce48be594ad65455c9d233c9d3758350de70fa21d884527820f not found: ID does not exist"
Apr 23 18:22:29.534433 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:29.534411 2565 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs"]
Apr 23 18:22:29.542539 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:29.541543 2565 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-5c6fb-predictor-85d68895bd-htcgs"]
Apr 23 18:22:30.911106 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:30.911070 2565 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aeb4642d-f56a-442e-9c6e-e2354973f7cb" path="/var/lib/kubelet/pods/aeb4642d-f56a-442e-9c6e-e2354973f7cb/volumes"
Apr 23 18:22:34.517674 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:34.517649 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq"
Apr 23 18:22:34.518254 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:34.518227 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq" podUID="f12765e8-eacd-45c5-99b4-6c3db88f3198" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.30:8080: connect: connection refused"
Apr 23 18:22:44.518474 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:44.518434 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq" podUID="f12765e8-eacd-45c5-99b4-6c3db88f3198" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.30:8080: connect: connection refused"
Apr 23 18:22:54.518548 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:22:54.518502 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq" podUID="f12765e8-eacd-45c5-99b4-6c3db88f3198" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.30:8080: connect: connection refused"
Apr 23 18:23:04.518625 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:04.518579 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq" podUID="f12765e8-eacd-45c5-99b4-6c3db88f3198" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.30:8080: connect: connection refused"
Apr 23 18:23:14.518863 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:14.518826 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq"
Apr 23 18:23:15.773906 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:15.773873 2565 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65"]
Apr 23 18:23:15.774376 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:15.774260 2565 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65" podUID="3f993db0-82b6-4d8a-ac9d-056859ab0898" containerName="kserve-container" containerID="cri-o://db22da9201aaca8481a3cd9bcb589158d89ec1838b44e8cb0e712f7d179c994f" gracePeriod=30
Apr 23 18:23:15.774376 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:15.774300 2565 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65" podUID="3f993db0-82b6-4d8a-ac9d-056859ab0898" containerName="kube-rbac-proxy" containerID="cri-o://fa8a38855823703c93a9de5c50cf9988be337a73f248999b149bc138ffa98be9" gracePeriod=30
Apr 23 18:23:15.811293 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:15.811264 2565 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5"]
Apr 23 18:23:15.811586 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:15.811574 2565 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aeb4642d-f56a-442e-9c6e-e2354973f7cb" containerName="kube-rbac-proxy"
Apr 23 18:23:15.811633 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:15.811587 2565 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeb4642d-f56a-442e-9c6e-e2354973f7cb" containerName="kube-rbac-proxy"
Apr 23 18:23:15.811633 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:15.811605 2565 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aeb4642d-f56a-442e-9c6e-e2354973f7cb" containerName="kserve-container"
Apr 23 18:23:15.811633 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:15.811610 2565 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeb4642d-f56a-442e-9c6e-e2354973f7cb" containerName="kserve-container"
Apr 23 18:23:15.811725 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:15.811669 2565 memory_manager.go:356] "RemoveStaleState removing state" podUID="aeb4642d-f56a-442e-9c6e-e2354973f7cb" containerName="kserve-container"
Apr 23 18:23:15.811725 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:15.811677 2565 memory_manager.go:356] "RemoveStaleState removing state" podUID="aeb4642d-f56a-442e-9c6e-e2354973f7cb" containerName="kube-rbac-proxy"
Apr 23 18:23:15.814644 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:15.814622 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5"
Apr 23 18:23:15.817608 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:15.817557 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"success-200-isvc-87aee-predictor-serving-cert\""
Apr 23 18:23:15.817873 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:15.817852 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"success-200-isvc-87aee-kube-rbac-proxy-sar-config\""
Apr 23 18:23:15.828912 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:15.828862 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5"]
Apr 23 18:23:15.830996 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:15.830299 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j68dm\" (UniqueName: \"kubernetes.io/projected/5799e9c5-f75b-4d91-b7b2-41b837c8f47a-kube-api-access-j68dm\") pod \"success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5\" (UID: \"5799e9c5-f75b-4d91-b7b2-41b837c8f47a\") " pod="kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5"
Apr 23 18:23:15.830996 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:15.830367 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5799e9c5-f75b-4d91-b7b2-41b837c8f47a-proxy-tls\") pod \"success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5\" (UID: \"5799e9c5-f75b-4d91-b7b2-41b837c8f47a\") " pod="kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5"
Apr 23 18:23:15.830996 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:15.830480 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"success-200-isvc-87aee-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/5799e9c5-f75b-4d91-b7b2-41b837c8f47a-success-200-isvc-87aee-kube-rbac-proxy-sar-config\") pod \"success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5\" (UID: \"5799e9c5-f75b-4d91-b7b2-41b837c8f47a\") " pod="kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5"
Apr 23 18:23:15.931359 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:15.931324 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j68dm\" (UniqueName: \"kubernetes.io/projected/5799e9c5-f75b-4d91-b7b2-41b837c8f47a-kube-api-access-j68dm\") pod \"success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5\" (UID: \"5799e9c5-f75b-4d91-b7b2-41b837c8f47a\") " pod="kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5"
Apr 23 18:23:15.931549 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:15.931368 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5799e9c5-f75b-4d91-b7b2-41b837c8f47a-proxy-tls\") pod \"success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5\" (UID: \"5799e9c5-f75b-4d91-b7b2-41b837c8f47a\") " pod="kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5"
Apr 23 18:23:15.931549 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:15.931396 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"success-200-isvc-87aee-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/5799e9c5-f75b-4d91-b7b2-41b837c8f47a-success-200-isvc-87aee-kube-rbac-proxy-sar-config\") pod \"success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5\" (UID: \"5799e9c5-f75b-4d91-b7b2-41b837c8f47a\") " pod="kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5"
Apr 23 18:23:15.931549 ip-10-0-131-177 kubenswrapper[2565]: E0423 18:23:15.931528 2565 secret.go:189] Couldn't get secret kserve-ci-e2e-test/success-200-isvc-87aee-predictor-serving-cert: secret "success-200-isvc-87aee-predictor-serving-cert" not found
Apr 23 18:23:15.931771 ip-10-0-131-177 kubenswrapper[2565]: E0423 18:23:15.931629 2565 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5799e9c5-f75b-4d91-b7b2-41b837c8f47a-proxy-tls podName:5799e9c5-f75b-4d91-b7b2-41b837c8f47a nodeName:}" failed. No retries permitted until 2026-04-23 18:23:16.431604616 +0000 UTC m=+1860.219343865 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/5799e9c5-f75b-4d91-b7b2-41b837c8f47a-proxy-tls") pod "success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5" (UID: "5799e9c5-f75b-4d91-b7b2-41b837c8f47a") : secret "success-200-isvc-87aee-predictor-serving-cert" not found
Apr 23 18:23:15.932061 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:15.932042 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"success-200-isvc-87aee-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/5799e9c5-f75b-4d91-b7b2-41b837c8f47a-success-200-isvc-87aee-kube-rbac-proxy-sar-config\") pod \"success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5\" (UID: \"5799e9c5-f75b-4d91-b7b2-41b837c8f47a\") " pod="kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5"
Apr 23 18:23:15.942543 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:15.942520 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j68dm\" (UniqueName: \"kubernetes.io/projected/5799e9c5-f75b-4d91-b7b2-41b837c8f47a-kube-api-access-j68dm\") pod \"success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5\" (UID: \"5799e9c5-f75b-4d91-b7b2-41b837c8f47a\") " pod="kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5"
Apr 23 18:23:16.435851 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:16.435816 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5799e9c5-f75b-4d91-b7b2-41b837c8f47a-proxy-tls\") pod \"success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5\" (UID: \"5799e9c5-f75b-4d91-b7b2-41b837c8f47a\") " pod="kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5"
Apr 23 18:23:16.438283 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:16.438252 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5799e9c5-f75b-4d91-b7b2-41b837c8f47a-proxy-tls\") pod \"success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5\" (UID: \"5799e9c5-f75b-4d91-b7b2-41b837c8f47a\") " pod="kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5"
Apr 23 18:23:16.671952 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:16.671916 2565 generic.go:358] "Generic (PLEG): container finished" podID="3f993db0-82b6-4d8a-ac9d-056859ab0898" containerID="fa8a38855823703c93a9de5c50cf9988be337a73f248999b149bc138ffa98be9" exitCode=2
Apr 23 18:23:16.672137 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:16.671986 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65" event={"ID":"3f993db0-82b6-4d8a-ac9d-056859ab0898","Type":"ContainerDied","Data":"fa8a38855823703c93a9de5c50cf9988be337a73f248999b149bc138ffa98be9"}
Apr 23 18:23:16.727074 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:16.726990 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5"
Apr 23 18:23:16.848417 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:16.848386 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5"]
Apr 23 18:23:16.852046 ip-10-0-131-177 kubenswrapper[2565]: W0423 18:23:16.852019 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5799e9c5_f75b_4d91_b7b2_41b837c8f47a.slice/crio-9a72751194221bdea828cac8136097ed6cb31d3c73ce9577818c5a687b945ad1 WatchSource:0}: Error finding container 9a72751194221bdea828cac8136097ed6cb31d3c73ce9577818c5a687b945ad1: Status 404 returned error can't find the container with id 9a72751194221bdea828cac8136097ed6cb31d3c73ce9577818c5a687b945ad1
Apr 23 18:23:17.676377 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:17.676339 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5" event={"ID":"5799e9c5-f75b-4d91-b7b2-41b837c8f47a","Type":"ContainerStarted","Data":"c40d042ea61f45ac0eb8c2691b530094baf2ca06617d23e027680430d7e61ca8"}
Apr 23 18:23:17.676377 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:17.676378 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5" event={"ID":"5799e9c5-f75b-4d91-b7b2-41b837c8f47a","Type":"ContainerStarted","Data":"352ea2af801b3798578483d7b494770e255527bb03aff704c43ad1a3437ff04b"}
Apr 23 18:23:17.676588 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:17.676389 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5" event={"ID":"5799e9c5-f75b-4d91-b7b2-41b837c8f47a","Type":"ContainerStarted","Data":"9a72751194221bdea828cac8136097ed6cb31d3c73ce9577818c5a687b945ad1"}
Apr 23 18:23:17.676588 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:17.676433 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5"
Apr 23 18:23:17.696797 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:17.696752 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5" podStartSLOduration=2.69673886 podStartE2EDuration="2.69673886s" podCreationTimestamp="2026-04-23 18:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:23:17.696004443 +0000 UTC m=+1861.483743705" watchObservedRunningTime="2026-04-23 18:23:17.69673886 +0000 UTC m=+1861.484478119"
Apr 23 18:23:18.679471 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:18.679433 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5"
Apr 23 18:23:18.680728 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:18.680702 2565 prober.go:120] "Probe failed" probeType="Readiness"
pod="kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5" podUID="5799e9c5-f75b-4d91-b7b2-41b837c8f47a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.31:8080: connect: connection refused" Apr 23 18:23:18.879308 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:18.879267 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65" podUID="3f993db0-82b6-4d8a-ac9d-056859ab0898" containerName="kube-rbac-proxy" probeResult="failure" output="Get \"https://10.134.0.29:8643/healthz\": dial tcp 10.134.0.29:8643: connect: connection refused" Apr 23 18:23:18.884608 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:18.884568 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65" podUID="3f993db0-82b6-4d8a-ac9d-056859ab0898" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.29:8080: connect: connection refused" Apr 23 18:23:19.016925 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:19.016900 2565 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65" Apr 23 18:23:19.159364 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:19.159325 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsc6q\" (UniqueName: \"kubernetes.io/projected/3f993db0-82b6-4d8a-ac9d-056859ab0898-kube-api-access-hsc6q\") pod \"3f993db0-82b6-4d8a-ac9d-056859ab0898\" (UID: \"3f993db0-82b6-4d8a-ac9d-056859ab0898\") " Apr 23 18:23:19.159529 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:19.159372 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3f993db0-82b6-4d8a-ac9d-056859ab0898-proxy-tls\") pod \"3f993db0-82b6-4d8a-ac9d-056859ab0898\" (UID: \"3f993db0-82b6-4d8a-ac9d-056859ab0898\") " Apr 23 18:23:19.159529 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:19.159408 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"success-200-isvc-e1fac-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/3f993db0-82b6-4d8a-ac9d-056859ab0898-success-200-isvc-e1fac-kube-rbac-proxy-sar-config\") pod \"3f993db0-82b6-4d8a-ac9d-056859ab0898\" (UID: \"3f993db0-82b6-4d8a-ac9d-056859ab0898\") " Apr 23 18:23:19.159856 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:19.159821 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f993db0-82b6-4d8a-ac9d-056859ab0898-success-200-isvc-e1fac-kube-rbac-proxy-sar-config" (OuterVolumeSpecName: "success-200-isvc-e1fac-kube-rbac-proxy-sar-config") pod "3f993db0-82b6-4d8a-ac9d-056859ab0898" (UID: "3f993db0-82b6-4d8a-ac9d-056859ab0898"). InnerVolumeSpecName "success-200-isvc-e1fac-kube-rbac-proxy-sar-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:23:19.161405 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:19.161375 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f993db0-82b6-4d8a-ac9d-056859ab0898-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "3f993db0-82b6-4d8a-ac9d-056859ab0898" (UID: "3f993db0-82b6-4d8a-ac9d-056859ab0898"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 18:23:19.161519 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:19.161460 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f993db0-82b6-4d8a-ac9d-056859ab0898-kube-api-access-hsc6q" (OuterVolumeSpecName: "kube-api-access-hsc6q") pod "3f993db0-82b6-4d8a-ac9d-056859ab0898" (UID: "3f993db0-82b6-4d8a-ac9d-056859ab0898"). InnerVolumeSpecName "kube-api-access-hsc6q". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 18:23:19.260192 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:19.260143 2565 reconciler_common.go:299] "Volume detached for volume \"success-200-isvc-e1fac-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/3f993db0-82b6-4d8a-ac9d-056859ab0898-success-200-isvc-e1fac-kube-rbac-proxy-sar-config\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\"" Apr 23 18:23:19.260192 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:19.260192 2565 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hsc6q\" (UniqueName: \"kubernetes.io/projected/3f993db0-82b6-4d8a-ac9d-056859ab0898-kube-api-access-hsc6q\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\"" Apr 23 18:23:19.260397 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:19.260209 2565 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3f993db0-82b6-4d8a-ac9d-056859ab0898-proxy-tls\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\"" Apr 23 18:23:19.683607 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:19.683526 2565 generic.go:358] "Generic (PLEG): container finished" podID="3f993db0-82b6-4d8a-ac9d-056859ab0898" containerID="db22da9201aaca8481a3cd9bcb589158d89ec1838b44e8cb0e712f7d179c994f" exitCode=0 Apr 23 18:23:19.684035 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:19.683606 2565 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65" Apr 23 18:23:19.684035 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:19.683614 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65" event={"ID":"3f993db0-82b6-4d8a-ac9d-056859ab0898","Type":"ContainerDied","Data":"db22da9201aaca8481a3cd9bcb589158d89ec1838b44e8cb0e712f7d179c994f"} Apr 23 18:23:19.684035 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:19.683655 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65" event={"ID":"3f993db0-82b6-4d8a-ac9d-056859ab0898","Type":"ContainerDied","Data":"9dbb5c993829a7e6dad0d1c5297fcb91370f32cf5cc4e65ed593093eb7c77945"} Apr 23 18:23:19.684035 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:19.683674 2565 scope.go:117] "RemoveContainer" containerID="fa8a38855823703c93a9de5c50cf9988be337a73f248999b149bc138ffa98be9" Apr 23 18:23:19.684381 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:19.684350 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5" podUID="5799e9c5-f75b-4d91-b7b2-41b837c8f47a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.31:8080: connect: connection refused" Apr 23 18:23:19.692160 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:19.692138 2565 scope.go:117] "RemoveContainer" containerID="db22da9201aaca8481a3cd9bcb589158d89ec1838b44e8cb0e712f7d179c994f" Apr 23 18:23:19.699440 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:19.699424 2565 scope.go:117] "RemoveContainer" containerID="fa8a38855823703c93a9de5c50cf9988be337a73f248999b149bc138ffa98be9" Apr 23 18:23:19.699660 ip-10-0-131-177 kubenswrapper[2565]: E0423 18:23:19.699644 2565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa8a38855823703c93a9de5c50cf9988be337a73f248999b149bc138ffa98be9\": container with ID starting with fa8a38855823703c93a9de5c50cf9988be337a73f248999b149bc138ffa98be9 not found: ID does not exist" containerID="fa8a38855823703c93a9de5c50cf9988be337a73f248999b149bc138ffa98be9" Apr 23 18:23:19.699706 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:19.699668 2565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa8a38855823703c93a9de5c50cf9988be337a73f248999b149bc138ffa98be9"} err="failed to get container status \"fa8a38855823703c93a9de5c50cf9988be337a73f248999b149bc138ffa98be9\": rpc error: code = NotFound desc = could not find container \"fa8a38855823703c93a9de5c50cf9988be337a73f248999b149bc138ffa98be9\": container with ID starting with fa8a38855823703c93a9de5c50cf9988be337a73f248999b149bc138ffa98be9 not found: ID does not exist" Apr 23 18:23:19.699706 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:19.699683 2565 scope.go:117] "RemoveContainer" containerID="db22da9201aaca8481a3cd9bcb589158d89ec1838b44e8cb0e712f7d179c994f" Apr 23 18:23:19.699901 ip-10-0-131-177 kubenswrapper[2565]: E0423 18:23:19.699883 2565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db22da9201aaca8481a3cd9bcb589158d89ec1838b44e8cb0e712f7d179c994f\": container with ID starting with db22da9201aaca8481a3cd9bcb589158d89ec1838b44e8cb0e712f7d179c994f not found: ID does not exist" 
containerID="db22da9201aaca8481a3cd9bcb589158d89ec1838b44e8cb0e712f7d179c994f" Apr 23 18:23:19.699941 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:19.699910 2565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db22da9201aaca8481a3cd9bcb589158d89ec1838b44e8cb0e712f7d179c994f"} err="failed to get container status \"db22da9201aaca8481a3cd9bcb589158d89ec1838b44e8cb0e712f7d179c994f\": rpc error: code = NotFound desc = could not find container \"db22da9201aaca8481a3cd9bcb589158d89ec1838b44e8cb0e712f7d179c994f\": container with ID starting with db22da9201aaca8481a3cd9bcb589158d89ec1838b44e8cb0e712f7d179c994f not found: ID does not exist" Apr 23 18:23:19.708144 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:19.708122 2565 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65"] Apr 23 18:23:19.712207 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:19.712181 2565 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-e1fac-predictor-b698b876d-t4s65"] Apr 23 18:23:20.911602 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:20.911565 2565 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f993db0-82b6-4d8a-ac9d-056859ab0898" path="/var/lib/kubelet/pods/3f993db0-82b6-4d8a-ac9d-056859ab0898/volumes" Apr 23 18:23:24.687707 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:24.687677 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5" Apr 23 18:23:24.688215 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:24.688183 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5" podUID="5799e9c5-f75b-4d91-b7b2-41b837c8f47a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.31:8080: connect: connection refused" Apr 23 18:23:34.688603 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:34.688511 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5" podUID="5799e9c5-f75b-4d91-b7b2-41b837c8f47a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.31:8080: connect: connection refused" Apr 23 18:23:36.279983 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:36.279925 2565 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq"] Apr 23 18:23:36.280407 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:36.280224 2565 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq" podUID="f12765e8-eacd-45c5-99b4-6c3db88f3198" containerName="kserve-container" containerID="cri-o://c6ede4ad1e2d3973239f318a2fb4ecb281bde95d2e39b16b00a5427896269fbc" gracePeriod=30 Apr 23 18:23:36.280407 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:36.280248 2565 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq" podUID="f12765e8-eacd-45c5-99b4-6c3db88f3198" containerName="kube-rbac-proxy" containerID="cri-o://e625068db842474bbff0ca3f856ed5fa7a587a3b7cc2b250849ebf04856ef575" gracePeriod=30 Apr 23 18:23:36.297351 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:36.297326 2565 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx"] Apr 23 18:23:36.297664 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:36.297652 2565 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3f993db0-82b6-4d8a-ac9d-056859ab0898" containerName="kserve-container" Apr 23 18:23:36.297707 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:36.297666 2565 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f993db0-82b6-4d8a-ac9d-056859ab0898" containerName="kserve-container" Apr 23 18:23:36.297707 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:36.297681 2565 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3f993db0-82b6-4d8a-ac9d-056859ab0898" containerName="kube-rbac-proxy" Apr 23 18:23:36.297707 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:36.297686 2565 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f993db0-82b6-4d8a-ac9d-056859ab0898" containerName="kube-rbac-proxy" Apr 23 18:23:36.297811 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:36.297727 2565 memory_manager.go:356] "RemoveStaleState removing state" podUID="3f993db0-82b6-4d8a-ac9d-056859ab0898" containerName="kube-rbac-proxy" Apr 23 18:23:36.297811 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:36.297735 2565 memory_manager.go:356] "RemoveStaleState removing state" podUID="3f993db0-82b6-4d8a-ac9d-056859ab0898" containerName="kserve-container" Apr 23 18:23:36.301915 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:36.301899 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" Apr 23 18:23:36.304131 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:36.304114 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"success-200-isvc-7efef-predictor-serving-cert\"" Apr 23 18:23:36.304187 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:36.304122 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"success-200-isvc-7efef-kube-rbac-proxy-sar-config\"" Apr 23 18:23:36.309166 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:36.309140 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx"] Apr 23 18:23:36.390005 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:36.389977 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v72tb\" (UniqueName: \"kubernetes.io/projected/4bfc3557-c73f-4baf-b97e-8f69d925fafc-kube-api-access-v72tb\") pod \"success-200-isvc-7efef-predictor-7d574cb67d-sqqwx\" (UID: \"4bfc3557-c73f-4baf-b97e-8f69d925fafc\") " pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" Apr 23 18:23:36.390159 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:36.390110 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4bfc3557-c73f-4baf-b97e-8f69d925fafc-proxy-tls\") pod \"success-200-isvc-7efef-predictor-7d574cb67d-sqqwx\" (UID: \"4bfc3557-c73f-4baf-b97e-8f69d925fafc\") " pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" Apr 23 18:23:36.390159 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:36.390145 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"success-200-isvc-7efef-kube-rbac-proxy-sar-config\" (UniqueName: 
\"kubernetes.io/configmap/4bfc3557-c73f-4baf-b97e-8f69d925fafc-success-200-isvc-7efef-kube-rbac-proxy-sar-config\") pod \"success-200-isvc-7efef-predictor-7d574cb67d-sqqwx\" (UID: \"4bfc3557-c73f-4baf-b97e-8f69d925fafc\") " pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" Apr 23 18:23:36.491522 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:36.491491 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4bfc3557-c73f-4baf-b97e-8f69d925fafc-proxy-tls\") pod \"success-200-isvc-7efef-predictor-7d574cb67d-sqqwx\" (UID: \"4bfc3557-c73f-4baf-b97e-8f69d925fafc\") " pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" Apr 23 18:23:36.491522 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:36.491527 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"success-200-isvc-7efef-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/4bfc3557-c73f-4baf-b97e-8f69d925fafc-success-200-isvc-7efef-kube-rbac-proxy-sar-config\") pod \"success-200-isvc-7efef-predictor-7d574cb67d-sqqwx\" (UID: \"4bfc3557-c73f-4baf-b97e-8f69d925fafc\") " pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" Apr 23 18:23:36.491717 ip-10-0-131-177 kubenswrapper[2565]: E0423 18:23:36.491654 2565 secret.go:189] Couldn't get secret kserve-ci-e2e-test/success-200-isvc-7efef-predictor-serving-cert: secret "success-200-isvc-7efef-predictor-serving-cert" not found Apr 23 18:23:36.491759 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:36.491658 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v72tb\" (UniqueName: \"kubernetes.io/projected/4bfc3557-c73f-4baf-b97e-8f69d925fafc-kube-api-access-v72tb\") pod \"success-200-isvc-7efef-predictor-7d574cb67d-sqqwx\" (UID: \"4bfc3557-c73f-4baf-b97e-8f69d925fafc\") " pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" Apr 23 18:23:36.491759 ip-10-0-131-177 kubenswrapper[2565]: E0423 18:23:36.491727 2565 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bfc3557-c73f-4baf-b97e-8f69d925fafc-proxy-tls podName:4bfc3557-c73f-4baf-b97e-8f69d925fafc nodeName:}" failed. No retries permitted until 2026-04-23 18:23:36.99170561 +0000 UTC m=+1880.779444867 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/4bfc3557-c73f-4baf-b97e-8f69d925fafc-proxy-tls") pod "success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" (UID: "4bfc3557-c73f-4baf-b97e-8f69d925fafc") : secret "success-200-isvc-7efef-predictor-serving-cert" not found Apr 23 18:23:36.492300 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:36.492281 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"success-200-isvc-7efef-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/4bfc3557-c73f-4baf-b97e-8f69d925fafc-success-200-isvc-7efef-kube-rbac-proxy-sar-config\") pod \"success-200-isvc-7efef-predictor-7d574cb67d-sqqwx\" (UID: \"4bfc3557-c73f-4baf-b97e-8f69d925fafc\") " pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" Apr 23 18:23:36.500712 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:36.500689 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v72tb\" (UniqueName: \"kubernetes.io/projected/4bfc3557-c73f-4baf-b97e-8f69d925fafc-kube-api-access-v72tb\") pod \"success-200-isvc-7efef-predictor-7d574cb67d-sqqwx\" (UID: \"4bfc3557-c73f-4baf-b97e-8f69d925fafc\") " pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" Apr 23 18:23:36.740663 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:36.740631 2565 generic.go:358] "Generic (PLEG): container finished" podID="f12765e8-eacd-45c5-99b4-6c3db88f3198" containerID="e625068db842474bbff0ca3f856ed5fa7a587a3b7cc2b250849ebf04856ef575" exitCode=2 Apr 23 18:23:36.740820 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:36.740709 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq" event={"ID":"f12765e8-eacd-45c5-99b4-6c3db88f3198","Type":"ContainerDied","Data":"e625068db842474bbff0ca3f856ed5fa7a587a3b7cc2b250849ebf04856ef575"} Apr 23 18:23:36.995503 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:36.995421 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4bfc3557-c73f-4baf-b97e-8f69d925fafc-proxy-tls\") pod \"success-200-isvc-7efef-predictor-7d574cb67d-sqqwx\" (UID: \"4bfc3557-c73f-4baf-b97e-8f69d925fafc\") " pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" Apr 23 18:23:36.997730 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:36.997707 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4bfc3557-c73f-4baf-b97e-8f69d925fafc-proxy-tls\") pod \"success-200-isvc-7efef-predictor-7d574cb67d-sqqwx\" (UID: \"4bfc3557-c73f-4baf-b97e-8f69d925fafc\") " pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" Apr 23 18:23:37.213570 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:37.213537 2565 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" Apr 23 18:23:37.334659 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:37.334634 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx"] Apr 23 18:23:37.336728 ip-10-0-131-177 kubenswrapper[2565]: W0423 18:23:37.336702 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bfc3557_c73f_4baf_b97e_8f69d925fafc.slice/crio-0f782121b0f3f112dfd4627b273bfb42027fc330491ba8baed420c16d5ee89ec WatchSource:0}: Error finding container 0f782121b0f3f112dfd4627b273bfb42027fc330491ba8baed420c16d5ee89ec: Status 404 returned error can't find the container with id 0f782121b0f3f112dfd4627b273bfb42027fc330491ba8baed420c16d5ee89ec Apr 23 18:23:37.745334 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:37.745296 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" event={"ID":"4bfc3557-c73f-4baf-b97e-8f69d925fafc","Type":"ContainerStarted","Data":"448470b385657bdd5b2cb751018ca771972ac49ef122dfc730f6e3444f04d357"} Apr 23 18:23:37.745334 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:37.745333 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" event={"ID":"4bfc3557-c73f-4baf-b97e-8f69d925fafc","Type":"ContainerStarted","Data":"e169191bdd5fbb46575b6381f866246f3e9a3ac1c85ada17049a8a6570876971"} Apr 23 18:23:37.745530 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:37.745345 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" event={"ID":"4bfc3557-c73f-4baf-b97e-8f69d925fafc","Type":"ContainerStarted","Data":"0f782121b0f3f112dfd4627b273bfb42027fc330491ba8baed420c16d5ee89ec"} Apr 23 18:23:37.745530 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:37.745486 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" Apr 23 18:23:37.764452 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:37.764403 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" podStartSLOduration=1.764388595 podStartE2EDuration="1.764388595s" podCreationTimestamp="2026-04-23 18:23:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:23:37.762980094 +0000 UTC m=+1881.550719354" watchObservedRunningTime="2026-04-23 18:23:37.764388595 +0000 UTC m=+1881.552127856" Apr 23 18:23:38.748591 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:38.748560 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" Apr 23 18:23:38.749913 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:38.749885 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" podUID="4bfc3557-c73f-4baf-b97e-8f69d925fafc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.32:8080: connect: connection refused" Apr 23 18:23:39.530268 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:39.530239 2565 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq" Apr 23 18:23:39.617020 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:39.616901 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"success-200-isvc-a93f3-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/f12765e8-eacd-45c5-99b4-6c3db88f3198-success-200-isvc-a93f3-kube-rbac-proxy-sar-config\") pod \"f12765e8-eacd-45c5-99b4-6c3db88f3198\" (UID: \"f12765e8-eacd-45c5-99b4-6c3db88f3198\") " Apr 23 18:23:39.617020 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:39.616998 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqx5k\" (UniqueName: \"kubernetes.io/projected/f12765e8-eacd-45c5-99b4-6c3db88f3198-kube-api-access-sqx5k\") pod \"f12765e8-eacd-45c5-99b4-6c3db88f3198\" (UID: \"f12765e8-eacd-45c5-99b4-6c3db88f3198\") " Apr 23 18:23:39.617260 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:39.617032 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f12765e8-eacd-45c5-99b4-6c3db88f3198-proxy-tls\") pod \"f12765e8-eacd-45c5-99b4-6c3db88f3198\" (UID: \"f12765e8-eacd-45c5-99b4-6c3db88f3198\") " Apr 23 18:23:39.617322 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:39.617293 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f12765e8-eacd-45c5-99b4-6c3db88f3198-success-200-isvc-a93f3-kube-rbac-proxy-sar-config" (OuterVolumeSpecName: "success-200-isvc-a93f3-kube-rbac-proxy-sar-config") pod "f12765e8-eacd-45c5-99b4-6c3db88f3198" (UID: "f12765e8-eacd-45c5-99b4-6c3db88f3198"). InnerVolumeSpecName "success-200-isvc-a93f3-kube-rbac-proxy-sar-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:23:39.619268 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:39.619243 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f12765e8-eacd-45c5-99b4-6c3db88f3198-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "f12765e8-eacd-45c5-99b4-6c3db88f3198" (UID: "f12765e8-eacd-45c5-99b4-6c3db88f3198"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 18:23:39.619359 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:39.619266 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f12765e8-eacd-45c5-99b4-6c3db88f3198-kube-api-access-sqx5k" (OuterVolumeSpecName: "kube-api-access-sqx5k") pod "f12765e8-eacd-45c5-99b4-6c3db88f3198" (UID: "f12765e8-eacd-45c5-99b4-6c3db88f3198"). InnerVolumeSpecName "kube-api-access-sqx5k". 
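The reconciler_common.go entries throughout this section follow one pattern: diff the desired world (volumes the remaining pods need) against the actual world (volumes still mounted) and issue MountVolume/UnmountVolume operations for the difference. A toy version of that loop, with string sets standing in for the kubelet's desired/actual state caches (an assumption, not the real data structures):

```go
// Toy desired-vs-actual volume reconciler, illustrating the shape of the
// Mount/Unmount decisions logged above. Not the kubelet's implementation.
package main

import "fmt"

func reconcile(desired, actual map[string]bool) {
	// Unmount anything mounted that no pod wants anymore
	// (the "operationExecutor.UnmountVolume started" lines).
	for vol := range actual {
		if !desired[vol] {
			fmt.Println("UnmountVolume started for", vol)
			delete(actual, vol)
		}
	}
	// Mount anything desired that is not mounted yet
	// (the "operationExecutor.MountVolume started" lines).
	for vol := range desired {
		if !actual[vol] {
			fmt.Println("MountVolume started for", vol)
			actual[vol] = true
		}
	}
}

func main() {
	desired := map[string]bool{"proxy-tls": true, "kube-api-access-j68dm": true}
	actual := map[string]bool{"kube-api-access-sqx5k": true} // from a deleted pod
	reconcile(desired, actual)
}
```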
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 18:23:39.718527 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:39.718491 2565 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sqx5k\" (UniqueName: \"kubernetes.io/projected/f12765e8-eacd-45c5-99b4-6c3db88f3198-kube-api-access-sqx5k\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\"" Apr 23 18:23:39.718527 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:39.718525 2565 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f12765e8-eacd-45c5-99b4-6c3db88f3198-proxy-tls\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\"" Apr 23 18:23:39.718527 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:39.718538 2565 reconciler_common.go:299] "Volume detached for volume \"success-200-isvc-a93f3-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/f12765e8-eacd-45c5-99b4-6c3db88f3198-success-200-isvc-a93f3-kube-rbac-proxy-sar-config\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\"" Apr 23 18:23:39.752899 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:39.752859 2565 generic.go:358] "Generic (PLEG): container finished" podID="f12765e8-eacd-45c5-99b4-6c3db88f3198" containerID="c6ede4ad1e2d3973239f318a2fb4ecb281bde95d2e39b16b00a5427896269fbc" exitCode=0 Apr 23 18:23:39.753376 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:39.752942 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq" event={"ID":"f12765e8-eacd-45c5-99b4-6c3db88f3198","Type":"ContainerDied","Data":"c6ede4ad1e2d3973239f318a2fb4ecb281bde95d2e39b16b00a5427896269fbc"} Apr 23 18:23:39.753376 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:39.752951 2565 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq" Apr 23 18:23:39.753376 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:39.752992 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq" event={"ID":"f12765e8-eacd-45c5-99b4-6c3db88f3198","Type":"ContainerDied","Data":"35a4e72a62ea738db6c663fed1325bef266d5d982f0cc2a141225584ebfeb7fa"} Apr 23 18:23:39.753376 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:39.753012 2565 scope.go:117] "RemoveContainer" containerID="e625068db842474bbff0ca3f856ed5fa7a587a3b7cc2b250849ebf04856ef575" Apr 23 18:23:39.753723 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:39.753683 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" podUID="4bfc3557-c73f-4baf-b97e-8f69d925fafc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.32:8080: connect: connection refused" Apr 23 18:23:39.761458 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:39.761439 2565 scope.go:117] "RemoveContainer" containerID="c6ede4ad1e2d3973239f318a2fb4ecb281bde95d2e39b16b00a5427896269fbc" Apr 23 18:23:39.768465 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:39.768450 2565 scope.go:117] "RemoveContainer" containerID="e625068db842474bbff0ca3f856ed5fa7a587a3b7cc2b250849ebf04856ef575" Apr 23 18:23:39.768723 ip-10-0-131-177 kubenswrapper[2565]: E0423 18:23:39.768706 2565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e625068db842474bbff0ca3f856ed5fa7a587a3b7cc2b250849ebf04856ef575\": container with ID starting with e625068db842474bbff0ca3f856ed5fa7a587a3b7cc2b250849ebf04856ef575 not found: ID does not exist" containerID="e625068db842474bbff0ca3f856ed5fa7a587a3b7cc2b250849ebf04856ef575" Apr 23 18:23:39.768768 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:39.768733 2565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e625068db842474bbff0ca3f856ed5fa7a587a3b7cc2b250849ebf04856ef575"} err="failed to get container status \"e625068db842474bbff0ca3f856ed5fa7a587a3b7cc2b250849ebf04856ef575\": rpc error: code = NotFound desc = could not find container \"e625068db842474bbff0ca3f856ed5fa7a587a3b7cc2b250849ebf04856ef575\": container with ID starting with e625068db842474bbff0ca3f856ed5fa7a587a3b7cc2b250849ebf04856ef575 not found: ID does not exist" Apr 23 18:23:39.768768 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:39.768750 2565 scope.go:117] "RemoveContainer" containerID="c6ede4ad1e2d3973239f318a2fb4ecb281bde95d2e39b16b00a5427896269fbc" Apr 23 18:23:39.769001 ip-10-0-131-177 kubenswrapper[2565]: E0423 18:23:39.768980 2565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6ede4ad1e2d3973239f318a2fb4ecb281bde95d2e39b16b00a5427896269fbc\": container with ID starting with c6ede4ad1e2d3973239f318a2fb4ecb281bde95d2e39b16b00a5427896269fbc not found: ID does not exist" containerID="c6ede4ad1e2d3973239f318a2fb4ecb281bde95d2e39b16b00a5427896269fbc" Apr 23 18:23:39.769050 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:39.769007 2565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6ede4ad1e2d3973239f318a2fb4ecb281bde95d2e39b16b00a5427896269fbc"} err="failed to get container status 
\"c6ede4ad1e2d3973239f318a2fb4ecb281bde95d2e39b16b00a5427896269fbc\": rpc error: code = NotFound desc = could not find container \"c6ede4ad1e2d3973239f318a2fb4ecb281bde95d2e39b16b00a5427896269fbc\": container with ID starting with c6ede4ad1e2d3973239f318a2fb4ecb281bde95d2e39b16b00a5427896269fbc not found: ID does not exist" Apr 23 18:23:39.779844 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:39.779818 2565 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq"] Apr 23 18:23:39.782169 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:39.782147 2565 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq"] Apr 23 18:23:40.514723 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:40.514663 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-a93f3-predictor-5c444dc64-lkwsq" podUID="f12765e8-eacd-45c5-99b4-6c3db88f3198" containerName="kube-rbac-proxy" probeResult="failure" output="Get \"https://10.134.0.30:8643/healthz\": context deadline exceeded" Apr 23 18:23:40.911383 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:40.911297 2565 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f12765e8-eacd-45c5-99b4-6c3db88f3198" path="/var/lib/kubelet/pods/f12765e8-eacd-45c5-99b4-6c3db88f3198/volumes" Apr 23 18:23:44.688349 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:44.688308 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5" podUID="5799e9c5-f75b-4d91-b7b2-41b837c8f47a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.31:8080: connect: connection refused" Apr 23 18:23:44.758244 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:44.758214 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" Apr 23 18:23:44.758635 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:44.758607 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" podUID="4bfc3557-c73f-4baf-b97e-8f69d925fafc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.32:8080: connect: connection refused" Apr 23 18:23:54.688572 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:54.688521 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5" podUID="5799e9c5-f75b-4d91-b7b2-41b837c8f47a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.31:8080: connect: connection refused" Apr 23 18:23:54.758982 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:23:54.758927 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" podUID="4bfc3557-c73f-4baf-b97e-8f69d925fafc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.32:8080: connect: connection refused" Apr 23 18:24:04.689122 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:24:04.689085 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5" Apr 23 18:24:04.759264 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:24:04.759226 2565 prober.go:120] "Probe failed" probeType="Readiness" 
pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" podUID="4bfc3557-c73f-4baf-b97e-8f69d925fafc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.32:8080: connect: connection refused" Apr 23 18:24:14.759693 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:24:14.759651 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" podUID="4bfc3557-c73f-4baf-b97e-8f69d925fafc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.32:8080: connect: connection refused" Apr 23 18:24:24.759182 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:24:24.759146 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" Apr 23 18:27:16.918722 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:27:16.918688 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_7e3c596a27faede1f97b6bb0972592f6/kube-rbac-proxy-crio/4.log" Apr 23 18:27:16.920206 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:27:16.920189 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_7e3c596a27faede1f97b6bb0972592f6/kube-rbac-proxy-crio/4.log" Apr 23 18:32:16.938851 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:32:16.938815 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_7e3c596a27faede1f97b6bb0972592f6/kube-rbac-proxy-crio/4.log" Apr 23 18:32:16.941391 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:32:16.940977 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_7e3c596a27faede1f97b6bb0972592f6/kube-rbac-proxy-crio/4.log" Apr 23 18:32:51.232271 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:32:51.232180 2565 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx"] Apr 23 18:32:51.518194 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:32:51.518134 2565 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" podUID="4bfc3557-c73f-4baf-b97e-8f69d925fafc" containerName="kserve-container" containerID="cri-o://e169191bdd5fbb46575b6381f866246f3e9a3ac1c85ada17049a8a6570876971" gracePeriod=30 Apr 23 18:32:51.518194 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:32:51.518175 2565 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" podUID="4bfc3557-c73f-4baf-b97e-8f69d925fafc" containerName="kube-rbac-proxy" containerID="cri-o://448470b385657bdd5b2cb751018ca771972ac49ef122dfc730f6e3444f04d357" gracePeriod=30 Apr 23 18:32:52.522949 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:32:52.522912 2565 generic.go:358] "Generic (PLEG): container finished" podID="4bfc3557-c73f-4baf-b97e-8f69d925fafc" containerID="448470b385657bdd5b2cb751018ca771972ac49ef122dfc730f6e3444f04d357" exitCode=2 Apr 23 18:32:52.523335 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:32:52.522989 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" 
event={"ID":"4bfc3557-c73f-4baf-b97e-8f69d925fafc","Type":"ContainerDied","Data":"448470b385657bdd5b2cb751018ca771972ac49ef122dfc730f6e3444f04d357"} Apr 23 18:32:54.753483 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:32:54.753429 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" podUID="4bfc3557-c73f-4baf-b97e-8f69d925fafc" containerName="kube-rbac-proxy" probeResult="failure" output="Get \"https://10.134.0.32:8643/healthz\": dial tcp 10.134.0.32:8643: connect: connection refused" Apr 23 18:32:54.759547 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:32:54.759509 2565 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" podUID="4bfc3557-c73f-4baf-b97e-8f69d925fafc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.32:8080: connect: connection refused" Apr 23 18:32:54.857071 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:32:54.857044 2565 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" Apr 23 18:32:55.026796 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:32:55.026766 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4bfc3557-c73f-4baf-b97e-8f69d925fafc-proxy-tls\") pod \"4bfc3557-c73f-4baf-b97e-8f69d925fafc\" (UID: \"4bfc3557-c73f-4baf-b97e-8f69d925fafc\") " Apr 23 18:32:55.026943 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:32:55.026815 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"success-200-isvc-7efef-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/4bfc3557-c73f-4baf-b97e-8f69d925fafc-success-200-isvc-7efef-kube-rbac-proxy-sar-config\") pod \"4bfc3557-c73f-4baf-b97e-8f69d925fafc\" (UID: \"4bfc3557-c73f-4baf-b97e-8f69d925fafc\") " Apr 23 18:32:55.026943 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:32:55.026879 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v72tb\" (UniqueName: \"kubernetes.io/projected/4bfc3557-c73f-4baf-b97e-8f69d925fafc-kube-api-access-v72tb\") pod \"4bfc3557-c73f-4baf-b97e-8f69d925fafc\" (UID: \"4bfc3557-c73f-4baf-b97e-8f69d925fafc\") " Apr 23 18:32:55.027234 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:32:55.027201 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bfc3557-c73f-4baf-b97e-8f69d925fafc-success-200-isvc-7efef-kube-rbac-proxy-sar-config" (OuterVolumeSpecName: "success-200-isvc-7efef-kube-rbac-proxy-sar-config") pod "4bfc3557-c73f-4baf-b97e-8f69d925fafc" (UID: "4bfc3557-c73f-4baf-b97e-8f69d925fafc"). InnerVolumeSpecName "success-200-isvc-7efef-kube-rbac-proxy-sar-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:32:55.028895 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:32:55.028872 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bfc3557-c73f-4baf-b97e-8f69d925fafc-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "4bfc3557-c73f-4baf-b97e-8f69d925fafc" (UID: "4bfc3557-c73f-4baf-b97e-8f69d925fafc"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 18:32:55.029063 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:32:55.028950 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bfc3557-c73f-4baf-b97e-8f69d925fafc-kube-api-access-v72tb" (OuterVolumeSpecName: "kube-api-access-v72tb") pod "4bfc3557-c73f-4baf-b97e-8f69d925fafc" (UID: "4bfc3557-c73f-4baf-b97e-8f69d925fafc"). InnerVolumeSpecName "kube-api-access-v72tb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 18:32:55.133501 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:32:55.129126 2565 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v72tb\" (UniqueName: \"kubernetes.io/projected/4bfc3557-c73f-4baf-b97e-8f69d925fafc-kube-api-access-v72tb\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\"" Apr 23 18:32:55.133501 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:32:55.129215 2565 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4bfc3557-c73f-4baf-b97e-8f69d925fafc-proxy-tls\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\"" Apr 23 18:32:55.133501 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:32:55.129257 2565 reconciler_common.go:299] "Volume detached for volume \"success-200-isvc-7efef-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/4bfc3557-c73f-4baf-b97e-8f69d925fafc-success-200-isvc-7efef-kube-rbac-proxy-sar-config\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\"" Apr 23 18:32:55.534924 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:32:55.534882 2565 generic.go:358] "Generic (PLEG): container finished" podID="4bfc3557-c73f-4baf-b97e-8f69d925fafc" containerID="e169191bdd5fbb46575b6381f866246f3e9a3ac1c85ada17049a8a6570876971" exitCode=0 Apr 23 18:32:55.535146 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:32:55.534980 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" event={"ID":"4bfc3557-c73f-4baf-b97e-8f69d925fafc","Type":"ContainerDied","Data":"e169191bdd5fbb46575b6381f866246f3e9a3ac1c85ada17049a8a6570876971"} Apr 23 18:32:55.535146 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:32:55.535016 2565 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx"
Apr 23 18:32:55.535146 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:32:55.535031 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx" event={"ID":"4bfc3557-c73f-4baf-b97e-8f69d925fafc","Type":"ContainerDied","Data":"0f782121b0f3f112dfd4627b273bfb42027fc330491ba8baed420c16d5ee89ec"}
Apr 23 18:32:55.535146 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:32:55.535052 2565 scope.go:117] "RemoveContainer" containerID="448470b385657bdd5b2cb751018ca771972ac49ef122dfc730f6e3444f04d357"
Apr 23 18:32:55.543664 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:32:55.543647 2565 scope.go:117] "RemoveContainer" containerID="e169191bdd5fbb46575b6381f866246f3e9a3ac1c85ada17049a8a6570876971"
Apr 23 18:32:55.550847 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:32:55.550825 2565 scope.go:117] "RemoveContainer" containerID="448470b385657bdd5b2cb751018ca771972ac49ef122dfc730f6e3444f04d357"
Apr 23 18:32:55.551118 ip-10-0-131-177 kubenswrapper[2565]: E0423 18:32:55.551098 2565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"448470b385657bdd5b2cb751018ca771972ac49ef122dfc730f6e3444f04d357\": container with ID starting with 448470b385657bdd5b2cb751018ca771972ac49ef122dfc730f6e3444f04d357 not found: ID does not exist" containerID="448470b385657bdd5b2cb751018ca771972ac49ef122dfc730f6e3444f04d357"
Apr 23 18:32:55.551167 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:32:55.551127 2565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"448470b385657bdd5b2cb751018ca771972ac49ef122dfc730f6e3444f04d357"} err="failed to get container status \"448470b385657bdd5b2cb751018ca771972ac49ef122dfc730f6e3444f04d357\": rpc error: code = NotFound desc = could not find container \"448470b385657bdd5b2cb751018ca771972ac49ef122dfc730f6e3444f04d357\": container with ID starting with 448470b385657bdd5b2cb751018ca771972ac49ef122dfc730f6e3444f04d357 not found: ID does not exist"
Apr 23 18:32:55.551167 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:32:55.551148 2565 scope.go:117] "RemoveContainer" containerID="e169191bdd5fbb46575b6381f866246f3e9a3ac1c85ada17049a8a6570876971"
Apr 23 18:32:55.551396 ip-10-0-131-177 kubenswrapper[2565]: E0423 18:32:55.551376 2565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e169191bdd5fbb46575b6381f866246f3e9a3ac1c85ada17049a8a6570876971\": container with ID starting with e169191bdd5fbb46575b6381f866246f3e9a3ac1c85ada17049a8a6570876971 not found: ID does not exist" containerID="e169191bdd5fbb46575b6381f866246f3e9a3ac1c85ada17049a8a6570876971"
Apr 23 18:32:55.551456 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:32:55.551405 2565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e169191bdd5fbb46575b6381f866246f3e9a3ac1c85ada17049a8a6570876971"} err="failed to get container status \"e169191bdd5fbb46575b6381f866246f3e9a3ac1c85ada17049a8a6570876971\": rpc error: code = NotFound desc = could not find container \"e169191bdd5fbb46575b6381f866246f3e9a3ac1c85ada17049a8a6570876971\": container with ID starting with e169191bdd5fbb46575b6381f866246f3e9a3ac1c85ada17049a8a6570876971 not found: ID does not exist"
Apr 23 18:32:55.558791 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:32:55.558768 2565 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx"]
Apr 23 18:32:55.562547 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:32:55.562526 2565 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-7efef-predictor-7d574cb67d-sqqwx"]
Apr 23 18:32:56.910975 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:32:56.910936 2565 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bfc3557-c73f-4baf-b97e-8f69d925fafc" path="/var/lib/kubelet/pods/4bfc3557-c73f-4baf-b97e-8f69d925fafc/volumes"
Apr 23 18:37:16.958780 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:37:16.958661 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_7e3c596a27faede1f97b6bb0972592f6/kube-rbac-proxy-crio/4.log"
Apr 23 18:37:16.963096 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:37:16.962144 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-131-177.ec2.internal_7e3c596a27faede1f97b6bb0972592f6/kube-rbac-proxy-crio/4.log"
Apr 23 18:40:35.187411 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:40:35.187335 2565 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5"]
Apr 23 18:40:35.187968 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:40:35.187713 2565 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5" podUID="5799e9c5-f75b-4d91-b7b2-41b837c8f47a" containerName="kserve-container" containerID="cri-o://352ea2af801b3798578483d7b494770e255527bb03aff704c43ad1a3437ff04b" gracePeriod=30
Apr 23 18:40:35.187968 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:40:35.187740 2565 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5" podUID="5799e9c5-f75b-4d91-b7b2-41b837c8f47a" containerName="kube-rbac-proxy" containerID="cri-o://c40d042ea61f45ac0eb8c2691b530094baf2ca06617d23e027680430d7e61ca8" gracePeriod=30
Apr 23 18:40:35.857110 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:40:35.857077 2565 generic.go:358] "Generic (PLEG): container finished" podID="5799e9c5-f75b-4d91-b7b2-41b837c8f47a" containerID="c40d042ea61f45ac0eb8c2691b530094baf2ca06617d23e027680430d7e61ca8" exitCode=2
Apr 23 18:40:35.857284 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:40:35.857133 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5" event={"ID":"5799e9c5-f75b-4d91-b7b2-41b837c8f47a","Type":"ContainerDied","Data":"c40d042ea61f45ac0eb8c2691b530094baf2ca06617d23e027680430d7e61ca8"}
Apr 23 18:40:38.229115 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:40:38.229093 2565 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5"
Apr 23 18:40:38.343462 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:40:38.343376 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5799e9c5-f75b-4d91-b7b2-41b837c8f47a-proxy-tls\") pod \"5799e9c5-f75b-4d91-b7b2-41b837c8f47a\" (UID: \"5799e9c5-f75b-4d91-b7b2-41b837c8f47a\") "
Apr 23 18:40:38.343462 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:40:38.343429 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"success-200-isvc-87aee-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/5799e9c5-f75b-4d91-b7b2-41b837c8f47a-success-200-isvc-87aee-kube-rbac-proxy-sar-config\") pod \"5799e9c5-f75b-4d91-b7b2-41b837c8f47a\" (UID: \"5799e9c5-f75b-4d91-b7b2-41b837c8f47a\") "
Apr 23 18:40:38.343687 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:40:38.343497 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j68dm\" (UniqueName: \"kubernetes.io/projected/5799e9c5-f75b-4d91-b7b2-41b837c8f47a-kube-api-access-j68dm\") pod \"5799e9c5-f75b-4d91-b7b2-41b837c8f47a\" (UID: \"5799e9c5-f75b-4d91-b7b2-41b837c8f47a\") "
Apr 23 18:40:38.343830 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:40:38.343803 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5799e9c5-f75b-4d91-b7b2-41b837c8f47a-success-200-isvc-87aee-kube-rbac-proxy-sar-config" (OuterVolumeSpecName: "success-200-isvc-87aee-kube-rbac-proxy-sar-config") pod "5799e9c5-f75b-4d91-b7b2-41b837c8f47a" (UID: "5799e9c5-f75b-4d91-b7b2-41b837c8f47a"). InnerVolumeSpecName "success-200-isvc-87aee-kube-rbac-proxy-sar-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 18:40:38.345524 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:40:38.345500 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5799e9c5-f75b-4d91-b7b2-41b837c8f47a-kube-api-access-j68dm" (OuterVolumeSpecName: "kube-api-access-j68dm") pod "5799e9c5-f75b-4d91-b7b2-41b837c8f47a" (UID: "5799e9c5-f75b-4d91-b7b2-41b837c8f47a"). InnerVolumeSpecName "kube-api-access-j68dm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 23 18:40:38.345612 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:40:38.345524 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5799e9c5-f75b-4d91-b7b2-41b837c8f47a-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "5799e9c5-f75b-4d91-b7b2-41b837c8f47a" (UID: "5799e9c5-f75b-4d91-b7b2-41b837c8f47a"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 18:40:38.444718 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:40:38.444685 2565 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5799e9c5-f75b-4d91-b7b2-41b837c8f47a-proxy-tls\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 18:40:38.444718 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:40:38.444713 2565 reconciler_common.go:299] "Volume detached for volume \"success-200-isvc-87aee-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/5799e9c5-f75b-4d91-b7b2-41b837c8f47a-success-200-isvc-87aee-kube-rbac-proxy-sar-config\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 18:40:38.444888 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:40:38.444728 2565 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j68dm\" (UniqueName: \"kubernetes.io/projected/5799e9c5-f75b-4d91-b7b2-41b837c8f47a-kube-api-access-j68dm\") on node \"ip-10-0-131-177.ec2.internal\" DevicePath \"\""
Apr 23 18:40:38.868308 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:40:38.868279 2565 generic.go:358] "Generic (PLEG): container finished" podID="5799e9c5-f75b-4d91-b7b2-41b837c8f47a" containerID="352ea2af801b3798578483d7b494770e255527bb03aff704c43ad1a3437ff04b" exitCode=0
Apr 23 18:40:38.868449 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:40:38.868352 2565 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5"
Apr 23 18:40:38.868449 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:40:38.868360 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5" event={"ID":"5799e9c5-f75b-4d91-b7b2-41b837c8f47a","Type":"ContainerDied","Data":"352ea2af801b3798578483d7b494770e255527bb03aff704c43ad1a3437ff04b"}
Apr 23 18:40:38.868449 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:40:38.868397 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5" event={"ID":"5799e9c5-f75b-4d91-b7b2-41b837c8f47a","Type":"ContainerDied","Data":"9a72751194221bdea828cac8136097ed6cb31d3c73ce9577818c5a687b945ad1"}
Apr 23 18:40:38.868449 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:40:38.868412 2565 scope.go:117] "RemoveContainer" containerID="c40d042ea61f45ac0eb8c2691b530094baf2ca06617d23e027680430d7e61ca8"
Apr 23 18:40:38.877426 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:40:38.877208 2565 scope.go:117] "RemoveContainer" containerID="352ea2af801b3798578483d7b494770e255527bb03aff704c43ad1a3437ff04b"
Apr 23 18:40:38.884027 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:40:38.884009 2565 scope.go:117] "RemoveContainer" containerID="c40d042ea61f45ac0eb8c2691b530094baf2ca06617d23e027680430d7e61ca8"
Apr 23 18:40:38.884282 ip-10-0-131-177 kubenswrapper[2565]: E0423 18:40:38.884261 2565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c40d042ea61f45ac0eb8c2691b530094baf2ca06617d23e027680430d7e61ca8\": container with ID starting with c40d042ea61f45ac0eb8c2691b530094baf2ca06617d23e027680430d7e61ca8 not found: ID does not exist" containerID="c40d042ea61f45ac0eb8c2691b530094baf2ca06617d23e027680430d7e61ca8"
Apr 23 18:40:38.884361 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:40:38.884294 2565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c40d042ea61f45ac0eb8c2691b530094baf2ca06617d23e027680430d7e61ca8"} err="failed to get container status \"c40d042ea61f45ac0eb8c2691b530094baf2ca06617d23e027680430d7e61ca8\": rpc error: code = NotFound desc = could not find container \"c40d042ea61f45ac0eb8c2691b530094baf2ca06617d23e027680430d7e61ca8\": container with ID starting with c40d042ea61f45ac0eb8c2691b530094baf2ca06617d23e027680430d7e61ca8 not found: ID does not exist"
Apr 23 18:40:38.884361 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:40:38.884317 2565 scope.go:117] "RemoveContainer" containerID="352ea2af801b3798578483d7b494770e255527bb03aff704c43ad1a3437ff04b"
Apr 23 18:40:38.884584 ip-10-0-131-177 kubenswrapper[2565]: E0423 18:40:38.884558 2565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"352ea2af801b3798578483d7b494770e255527bb03aff704c43ad1a3437ff04b\": container with ID starting with 352ea2af801b3798578483d7b494770e255527bb03aff704c43ad1a3437ff04b not found: ID does not exist" containerID="352ea2af801b3798578483d7b494770e255527bb03aff704c43ad1a3437ff04b"
Apr 23 18:40:38.884667 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:40:38.884590 2565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"352ea2af801b3798578483d7b494770e255527bb03aff704c43ad1a3437ff04b"} err="failed to get container status \"352ea2af801b3798578483d7b494770e255527bb03aff704c43ad1a3437ff04b\": rpc error: code = NotFound desc = could not find container \"352ea2af801b3798578483d7b494770e255527bb03aff704c43ad1a3437ff04b\": container with ID starting with 352ea2af801b3798578483d7b494770e255527bb03aff704c43ad1a3437ff04b not found: ID does not exist"
Apr 23 18:40:38.892495 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:40:38.892474 2565 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5"]
Apr 23 18:40:38.901918 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:40:38.901900 2565 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-87aee-predictor-5f8bdb75dd-m72f5"]
Apr 23 18:40:38.910069 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:40:38.910049 2565 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5799e9c5-f75b-4d91-b7b2-41b837c8f47a" path="/var/lib/kubelet/pods/5799e9c5-f75b-4d91-b7b2-41b837c8f47a/volumes"
Apr 23 18:41:03.183864 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:03.183828 2565 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hqhtf/must-gather-2vzcm"]
Apr 23 18:41:03.184323 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:03.184146 2565 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5799e9c5-f75b-4d91-b7b2-41b837c8f47a" containerName="kserve-container"
Apr 23 18:41:03.184323 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:03.184158 2565 state_mem.go:107] "Deleted CPUSet assignment" podUID="5799e9c5-f75b-4d91-b7b2-41b837c8f47a" containerName="kserve-container"
Apr 23 18:41:03.184323 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:03.184169 2565 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5799e9c5-f75b-4d91-b7b2-41b837c8f47a" containerName="kube-rbac-proxy"
Apr 23 18:41:03.184323 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:03.184175 2565 state_mem.go:107] "Deleted CPUSet assignment" podUID="5799e9c5-f75b-4d91-b7b2-41b837c8f47a" containerName="kube-rbac-proxy"
Apr 23 18:41:03.184323 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:03.184183 2565 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f12765e8-eacd-45c5-99b4-6c3db88f3198" containerName="kube-rbac-proxy"
Apr 23 18:41:03.184323 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:03.184188 2565 state_mem.go:107] "Deleted CPUSet assignment" podUID="f12765e8-eacd-45c5-99b4-6c3db88f3198" containerName="kube-rbac-proxy"
Apr 23 18:41:03.184323 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:03.184198 2565 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4bfc3557-c73f-4baf-b97e-8f69d925fafc" containerName="kserve-container"
Apr 23 18:41:03.184323 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:03.184203 2565 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bfc3557-c73f-4baf-b97e-8f69d925fafc" containerName="kserve-container"
Apr 23 18:41:03.184323 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:03.184209 2565 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4bfc3557-c73f-4baf-b97e-8f69d925fafc" containerName="kube-rbac-proxy"
Apr 23 18:41:03.184323 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:03.184215 2565 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bfc3557-c73f-4baf-b97e-8f69d925fafc" containerName="kube-rbac-proxy"
Apr 23 18:41:03.184323 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:03.184224 2565 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f12765e8-eacd-45c5-99b4-6c3db88f3198" containerName="kserve-container"
Apr 23 18:41:03.184323 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:03.184229 2565 state_mem.go:107] "Deleted CPUSet assignment" podUID="f12765e8-eacd-45c5-99b4-6c3db88f3198" containerName="kserve-container"
Apr 23 18:41:03.184323 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:03.184272 2565 memory_manager.go:356] "RemoveStaleState removing state" podUID="5799e9c5-f75b-4d91-b7b2-41b837c8f47a" containerName="kserve-container"
Apr 23 18:41:03.184323 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:03.184282 2565 memory_manager.go:356] "RemoveStaleState removing state" podUID="5799e9c5-f75b-4d91-b7b2-41b837c8f47a" containerName="kube-rbac-proxy"
Apr 23 18:41:03.184323 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:03.184288 2565 memory_manager.go:356] "RemoveStaleState removing state" podUID="4bfc3557-c73f-4baf-b97e-8f69d925fafc" containerName="kube-rbac-proxy"
Apr 23 18:41:03.184323 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:03.184296 2565 memory_manager.go:356] "RemoveStaleState removing state" podUID="f12765e8-eacd-45c5-99b4-6c3db88f3198" containerName="kserve-container"
Apr 23 18:41:03.184323 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:03.184302 2565 memory_manager.go:356] "RemoveStaleState removing state" podUID="4bfc3557-c73f-4baf-b97e-8f69d925fafc" containerName="kserve-container"
Apr 23 18:41:03.184323 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:03.184309 2565 memory_manager.go:356] "RemoveStaleState removing state" podUID="f12765e8-eacd-45c5-99b4-6c3db88f3198" containerName="kube-rbac-proxy"
Apr 23 18:41:03.187215 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:03.187200 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hqhtf/must-gather-2vzcm"
Apr 23 18:41:03.189441 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:03.189414 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-hqhtf\"/\"openshift-service-ca.crt\""
Apr 23 18:41:03.189577 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:03.189448 2565 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-hqhtf\"/\"kube-root-ca.crt\""
Apr 23 18:41:03.189577 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:03.189481 2565 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-hqhtf\"/\"default-dockercfg-7jg68\""
Apr 23 18:41:03.195739 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:03.195719 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hqhtf/must-gather-2vzcm"]
Apr 23 18:41:03.239988 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:03.239949 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6tln\" (UniqueName: \"kubernetes.io/projected/589c90f4-ccc5-4ae9-b1ab-82ac7aa68051-kube-api-access-r6tln\") pod \"must-gather-2vzcm\" (UID: \"589c90f4-ccc5-4ae9-b1ab-82ac7aa68051\") " pod="openshift-must-gather-hqhtf/must-gather-2vzcm"
Apr 23 18:41:03.240094 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:03.240004 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/589c90f4-ccc5-4ae9-b1ab-82ac7aa68051-must-gather-output\") pod \"must-gather-2vzcm\" (UID: \"589c90f4-ccc5-4ae9-b1ab-82ac7aa68051\") " pod="openshift-must-gather-hqhtf/must-gather-2vzcm"
Apr 23 18:41:03.340811 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:03.340782 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r6tln\" (UniqueName: \"kubernetes.io/projected/589c90f4-ccc5-4ae9-b1ab-82ac7aa68051-kube-api-access-r6tln\") pod \"must-gather-2vzcm\" (UID: \"589c90f4-ccc5-4ae9-b1ab-82ac7aa68051\") " pod="openshift-must-gather-hqhtf/must-gather-2vzcm"
Apr 23 18:41:03.340927 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:03.340827 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/589c90f4-ccc5-4ae9-b1ab-82ac7aa68051-must-gather-output\") pod \"must-gather-2vzcm\" (UID: \"589c90f4-ccc5-4ae9-b1ab-82ac7aa68051\") " pod="openshift-must-gather-hqhtf/must-gather-2vzcm"
Apr 23 18:41:03.341114 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:03.341100 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/589c90f4-ccc5-4ae9-b1ab-82ac7aa68051-must-gather-output\") pod \"must-gather-2vzcm\" (UID: \"589c90f4-ccc5-4ae9-b1ab-82ac7aa68051\") " pod="openshift-must-gather-hqhtf/must-gather-2vzcm"
Apr 23 18:41:03.349830 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:03.349809 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6tln\" (UniqueName: \"kubernetes.io/projected/589c90f4-ccc5-4ae9-b1ab-82ac7aa68051-kube-api-access-r6tln\") pod \"must-gather-2vzcm\" (UID: \"589c90f4-ccc5-4ae9-b1ab-82ac7aa68051\") " pod="openshift-must-gather-hqhtf/must-gather-2vzcm"
Apr 23 18:41:03.505529 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:03.505499 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hqhtf/must-gather-2vzcm"
Apr 23 18:41:03.623661 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:03.623640 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hqhtf/must-gather-2vzcm"]
Apr 23 18:41:03.626189 ip-10-0-131-177 kubenswrapper[2565]: W0423 18:41:03.626146 2565 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod589c90f4_ccc5_4ae9_b1ab_82ac7aa68051.slice/crio-b42ff4fe3344ed46eae83f245d34e434785944562dce48aaa99040dbf9833e80 WatchSource:0}: Error finding container b42ff4fe3344ed46eae83f245d34e434785944562dce48aaa99040dbf9833e80: Status 404 returned error can't find the container with id b42ff4fe3344ed46eae83f245d34e434785944562dce48aaa99040dbf9833e80
Apr 23 18:41:03.627830 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:03.627815 2565 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 23 18:41:03.936825 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:03.936747 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hqhtf/must-gather-2vzcm" event={"ID":"589c90f4-ccc5-4ae9-b1ab-82ac7aa68051","Type":"ContainerStarted","Data":"b42ff4fe3344ed46eae83f245d34e434785944562dce48aaa99040dbf9833e80"}
Apr 23 18:41:04.943069 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:04.943032 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hqhtf/must-gather-2vzcm" event={"ID":"589c90f4-ccc5-4ae9-b1ab-82ac7aa68051","Type":"ContainerStarted","Data":"d0484029372b3ec2a1c5c4369006b277beb0fc24485e4359c5f7ca96269b25aa"}
Apr 23 18:41:04.943069 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:04.943074 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hqhtf/must-gather-2vzcm" event={"ID":"589c90f4-ccc5-4ae9-b1ab-82ac7aa68051","Type":"ContainerStarted","Data":"2e6bca63e9b3fdde438039ebb1541bc1b12e1f9e3daf42ae76f5ffe003409513"}
Apr 23 18:41:04.959386 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:04.959332 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-hqhtf/must-gather-2vzcm" podStartSLOduration=1.138781923 podStartE2EDuration="1.959311972s" podCreationTimestamp="2026-04-23 18:41:03 +0000 UTC" firstStartedPulling="2026-04-23 18:41:03.627970547 +0000 UTC m=+2927.415709786" lastFinishedPulling="2026-04-23 18:41:04.448500585 +0000 UTC m=+2928.236239835" observedRunningTime="2026-04-23 18:41:04.957873292 +0000 UTC m=+2928.745612553" watchObservedRunningTime="2026-04-23 18:41:04.959311972 +0000 UTC m=+2928.747051233"
Apr 23 18:41:05.955363 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:05.955325 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_global-pull-secret-syncer-9765b_9743455b-0c0a-49c2-9dd5-2e99d372e2c4/global-pull-secret-syncer/0.log"
Apr 23 18:41:06.082626 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:06.082585 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_konnectivity-agent-brjd8_2a5d03ae-4e40-4fa1-a3a7-8974e5fbabbb/konnectivity-agent/0.log"
Apr 23 18:41:06.168573 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:06.168546 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_kube-apiserver-proxy-ip-10-0-131-177.ec2.internal_50b2c37a28961f1c8aacb6ad5db58d22/haproxy/0.log"
Apr 23 18:41:09.930081 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:09.930029 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_metrics-server-68b7b96464-lkmkn_176de401-a14e-4368-9fe1-d71e0e83bd52/metrics-server/0.log"
Apr 23 18:41:09.959883 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:09.959843 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_monitoring-plugin-7dccd58f55-hwwbn_bd43d44b-8bd6-4804-97cd-46ab34ff36c0/monitoring-plugin/0.log"
Apr 23 18:41:10.105327 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:10.105252 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-kz8mr_38a98591-9ab8-4f58-81e9-3bc1f46d0756/node-exporter/0.log"
Apr 23 18:41:10.140235 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:10.140196 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-kz8mr_38a98591-9ab8-4f58-81e9-3bc1f46d0756/kube-rbac-proxy/0.log"
Apr 23 18:41:10.165025 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:10.164992 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-kz8mr_38a98591-9ab8-4f58-81e9-3bc1f46d0756/init-textfile/0.log"
Apr 23 18:41:10.384013 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:10.383914 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4/prometheus/0.log"
Apr 23 18:41:10.404181 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:10.404153 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4/config-reloader/0.log"
Apr 23 18:41:10.431476 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:10.431447 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4/thanos-sidecar/0.log"
Apr 23 18:41:10.460172 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:10.460149 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4/kube-rbac-proxy-web/0.log"
Apr 23 18:41:10.489354 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:10.489322 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4/kube-rbac-proxy/0.log"
Apr 23 18:41:10.529091 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:10.529063 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4/kube-rbac-proxy-thanos/0.log"
Apr 23 18:41:10.569416 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:10.569382 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_b5c4ddc7-dcce-47a4-9c79-b7f2dc5d86c4/init-config-reloader/0.log"
Apr 23 18:41:10.721781 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:10.721693 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-85f5c4c5db-p8qfk_967f5182-e55d-45dc-b8fe-849dfa1d3b02/telemeter-client/0.log"
Apr 23 18:41:10.746088 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:10.746054 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-85f5c4c5db-p8qfk_967f5182-e55d-45dc-b8fe-849dfa1d3b02/reload/0.log"
Apr 23 18:41:10.769781 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:10.769743 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-85f5c4c5db-p8qfk_967f5182-e55d-45dc-b8fe-849dfa1d3b02/kube-rbac-proxy/0.log"
Apr 23 18:41:12.954805 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:12.954771 2565 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hqhtf/perf-node-gather-daemonset-4cqwv"]
Apr 23 18:41:12.960583 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:12.960551 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-4cqwv"
Apr 23 18:41:12.970134 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:12.970104 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hqhtf/perf-node-gather-daemonset-4cqwv"]
Apr 23 18:41:13.025473 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:13.025437 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/729da29c-eecb-4b5b-b91b-289d523f5cd6-podres\") pod \"perf-node-gather-daemonset-4cqwv\" (UID: \"729da29c-eecb-4b5b-b91b-289d523f5cd6\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-4cqwv"
Apr 23 18:41:13.025672 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:13.025499 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/729da29c-eecb-4b5b-b91b-289d523f5cd6-proc\") pod \"perf-node-gather-daemonset-4cqwv\" (UID: \"729da29c-eecb-4b5b-b91b-289d523f5cd6\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-4cqwv"
Apr 23 18:41:13.025672 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:13.025577 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/729da29c-eecb-4b5b-b91b-289d523f5cd6-lib-modules\") pod \"perf-node-gather-daemonset-4cqwv\" (UID: \"729da29c-eecb-4b5b-b91b-289d523f5cd6\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-4cqwv"
Apr 23 18:41:13.025672 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:13.025627 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl56x\" (UniqueName: \"kubernetes.io/projected/729da29c-eecb-4b5b-b91b-289d523f5cd6-kube-api-access-nl56x\") pod \"perf-node-gather-daemonset-4cqwv\" (UID: \"729da29c-eecb-4b5b-b91b-289d523f5cd6\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-4cqwv"
Apr 23 18:41:13.025805 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:13.025689 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/729da29c-eecb-4b5b-b91b-289d523f5cd6-sys\") pod \"perf-node-gather-daemonset-4cqwv\" (UID: \"729da29c-eecb-4b5b-b91b-289d523f5cd6\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-4cqwv"
Apr 23 18:41:13.126359 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:13.126324 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/729da29c-eecb-4b5b-b91b-289d523f5cd6-sys\") pod \"perf-node-gather-daemonset-4cqwv\" (UID: \"729da29c-eecb-4b5b-b91b-289d523f5cd6\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-4cqwv"
Apr 23 18:41:13.126533 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:13.126372 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/729da29c-eecb-4b5b-b91b-289d523f5cd6-podres\") pod \"perf-node-gather-daemonset-4cqwv\" (UID: \"729da29c-eecb-4b5b-b91b-289d523f5cd6\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-4cqwv"
Apr 23 18:41:13.126533 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:13.126407 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/729da29c-eecb-4b5b-b91b-289d523f5cd6-proc\") pod \"perf-node-gather-daemonset-4cqwv\" (UID: \"729da29c-eecb-4b5b-b91b-289d523f5cd6\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-4cqwv"
Apr 23 18:41:13.126533 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:13.126431 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/729da29c-eecb-4b5b-b91b-289d523f5cd6-lib-modules\") pod \"perf-node-gather-daemonset-4cqwv\" (UID: \"729da29c-eecb-4b5b-b91b-289d523f5cd6\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-4cqwv"
Apr 23 18:41:13.126533 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:13.126446 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/729da29c-eecb-4b5b-b91b-289d523f5cd6-sys\") pod \"perf-node-gather-daemonset-4cqwv\" (UID: \"729da29c-eecb-4b5b-b91b-289d523f5cd6\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-4cqwv"
Apr 23 18:41:13.126533 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:13.126455 2565 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nl56x\" (UniqueName: \"kubernetes.io/projected/729da29c-eecb-4b5b-b91b-289d523f5cd6-kube-api-access-nl56x\") pod \"perf-node-gather-daemonset-4cqwv\" (UID: \"729da29c-eecb-4b5b-b91b-289d523f5cd6\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-4cqwv"
Apr 23 18:41:13.126803 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:13.126616 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/729da29c-eecb-4b5b-b91b-289d523f5cd6-podres\") pod \"perf-node-gather-daemonset-4cqwv\" (UID: \"729da29c-eecb-4b5b-b91b-289d523f5cd6\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-4cqwv"
Apr 23 18:41:13.126803 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:13.126628 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/729da29c-eecb-4b5b-b91b-289d523f5cd6-proc\") pod \"perf-node-gather-daemonset-4cqwv\" (UID: \"729da29c-eecb-4b5b-b91b-289d523f5cd6\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-4cqwv"
Apr 23 18:41:13.126803 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:13.126619 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/729da29c-eecb-4b5b-b91b-289d523f5cd6-lib-modules\") pod \"perf-node-gather-daemonset-4cqwv\" (UID: \"729da29c-eecb-4b5b-b91b-289d523f5cd6\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-4cqwv"
Apr 23 18:41:13.135129 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:13.135112 2565 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl56x\" (UniqueName: \"kubernetes.io/projected/729da29c-eecb-4b5b-b91b-289d523f5cd6-kube-api-access-nl56x\") pod \"perf-node-gather-daemonset-4cqwv\" (UID: \"729da29c-eecb-4b5b-b91b-289d523f5cd6\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-4cqwv"
Apr 23 18:41:13.277174 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:13.277134 2565 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-4cqwv"
Apr 23 18:41:13.411764 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:13.411731 2565 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hqhtf/perf-node-gather-daemonset-4cqwv"]
Apr 23 18:41:13.983195 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:13.983109 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-4cqwv" event={"ID":"729da29c-eecb-4b5b-b91b-289d523f5cd6","Type":"ContainerStarted","Data":"e9416c1a32647e125cf2deef1a6d1f5740de6b0d3017d2ccfd01d22fd55fa33d"}
Apr 23 18:41:13.983195 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:13.983149 2565 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-4cqwv" event={"ID":"729da29c-eecb-4b5b-b91b-289d523f5cd6","Type":"ContainerStarted","Data":"f9211bff7432667f4f69e5c4e0963f78c332f0dd19d3027ed2e0def3b575d98f"}
Apr 23 18:41:13.983195 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:13.983180 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-4cqwv"
Apr 23 18:41:13.990019 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:13.989995 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-9vf42_be6942ff-e805-4834-ad15-79b60bde1296/dns/0.log"
Apr 23 18:41:14.000559 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:14.000514 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-4cqwv" podStartSLOduration=2.000499771 podStartE2EDuration="2.000499771s" podCreationTimestamp="2026-04-23 18:41:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:41:13.998990825 +0000 UTC m=+2937.786730085" watchObservedRunningTime="2026-04-23 18:41:14.000499771 +0000 UTC m=+2937.788239027"
Apr 23 18:41:14.012884 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:14.012864 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-9vf42_be6942ff-e805-4834-ad15-79b60bde1296/kube-rbac-proxy/0.log"
Apr 23 18:41:14.158425 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:14.158394 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-gj4fr_4fe60531-7273-4ea9-b33c-0e4c909f6075/dns-node-resolver/0.log"
Apr 23 18:41:14.614723 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:14.614693 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_image-registry-54dc5c8cc9-q4z6v_502c5299-00b3-4f96-81c6-b7d544cfc79a/registry/0.log"
Apr 23 18:41:14.684350 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:14.684320 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-q7b2g_ff40def5-69f9-4dad-aa0e-540f9bc631f0/node-ca/0.log"
Apr 23 18:41:15.833674 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:15.833642 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-wvtqh_8d651935-ade8-4ad7-91b8-d50bd718e6d8/serve-healthcheck-canary/0.log"
Apr 23 18:41:16.277726 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:16.277695 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-jf562_160b686c-d212-4373-af80-30f03f159c0b/kube-rbac-proxy/0.log"
Apr 23 18:41:16.299275 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:16.299249 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-jf562_160b686c-d212-4373-af80-30f03f159c0b/exporter/0.log"
Apr 23 18:41:16.325075 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:16.325039 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-jf562_160b686c-d212-4373-af80-30f03f159c0b/extractor/0.log"
Apr 23 18:41:18.466749 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:18.466715 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_kserve-controller-manager-6fc5d867c5-6spwc_341bfc4a-0fe2-45c9-b978-b320668afd82/manager/0.log"
Apr 23 18:41:18.526595 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:18.526557 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_model-serving-api-86f7b4b499-mm6kd_87bc6443-61bb-45e7-8e05-7e0d04f3ba76/server/0.log"
Apr 23 18:41:19.996619 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:19.996587 2565 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-4cqwv"
Apr 23 18:41:24.705888 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:24.705857 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-gw5sh_e07a9b3c-f646-4dfc-bbb1-523478399c03/kube-multus-additional-cni-plugins/0.log"
Apr 23 18:41:24.728907 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:24.728878 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-gw5sh_e07a9b3c-f646-4dfc-bbb1-523478399c03/egress-router-binary-copy/0.log"
Apr 23 18:41:24.754758 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:24.754727 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-gw5sh_e07a9b3c-f646-4dfc-bbb1-523478399c03/cni-plugins/0.log"
Apr 23 18:41:24.778294 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:24.778267 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-gw5sh_e07a9b3c-f646-4dfc-bbb1-523478399c03/bond-cni-plugin/0.log"
Apr 23 18:41:24.801161 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:24.801137 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-gw5sh_e07a9b3c-f646-4dfc-bbb1-523478399c03/routeoverride-cni/0.log"
Apr 23 18:41:24.824716 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:24.824685 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-gw5sh_e07a9b3c-f646-4dfc-bbb1-523478399c03/whereabouts-cni-bincopy/0.log"
Apr 23 18:41:24.848772 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:24.848705 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-gw5sh_e07a9b3c-f646-4dfc-bbb1-523478399c03/whereabouts-cni/0.log"
Apr 23 18:41:24.880977 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:24.880926 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rtfpq_31e0ba61-35be-422e-9b2b-b9c49a736615/kube-multus/0.log"
Apr 23 18:41:24.902220 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:24.902190 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-4fq2j_ab8d91e1-def0-4ec9-93d5-476175cef3cd/network-metrics-daemon/0.log"
Apr 23 18:41:24.925517 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:24.925491 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-4fq2j_ab8d91e1-def0-4ec9-93d5-476175cef3cd/kube-rbac-proxy/0.log"
Apr 23 18:41:26.065407 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:26.065373 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dghxj_f6bc6d36-30b4-4f0e-8a4e-46b934b798a5/ovn-controller/0.log"
Apr 23 18:41:26.104401 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:26.104374 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dghxj_f6bc6d36-30b4-4f0e-8a4e-46b934b798a5/ovn-acl-logging/0.log"
Apr 23 18:41:26.123334 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:26.123275 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dghxj_f6bc6d36-30b4-4f0e-8a4e-46b934b798a5/kube-rbac-proxy-node/0.log"
Apr 23 18:41:26.147222 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:26.147188 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dghxj_f6bc6d36-30b4-4f0e-8a4e-46b934b798a5/kube-rbac-proxy-ovn-metrics/0.log"
Apr 23 18:41:26.169173 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:26.169144 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dghxj_f6bc6d36-30b4-4f0e-8a4e-46b934b798a5/northd/0.log"
Apr 23 18:41:26.193450 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:26.193417 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dghxj_f6bc6d36-30b4-4f0e-8a4e-46b934b798a5/nbdb/0.log"
Apr 23 18:41:26.216354 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:26.216326 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dghxj_f6bc6d36-30b4-4f0e-8a4e-46b934b798a5/sbdb/0.log"
Apr 23 18:41:26.352176 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:26.352098 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dghxj_f6bc6d36-30b4-4f0e-8a4e-46b934b798a5/ovnkube-controller/0.log"
Apr 23 18:41:27.666276 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:27.666199 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-target-7n2tw_133d891d-d4a9-44a1-ac6f-7a963f5616fe/network-check-target-container/0.log"
Apr 23 18:41:28.670408 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:28.670378 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_iptables-alerter-nhffk_0174e24b-fc3a-4f0a-8388-9b31e5a92647/iptables-alerter/0.log"
Apr 23 18:41:29.406214 ip-10-0-131-177 kubenswrapper[2565]: I0423 18:41:29.406189 2565 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-j46jf_31ce5a0e-448c-4e25-9118-102049e60bf2/tuned/0.log"