Apr 16 16:21:38.193007 ip-10-0-128-173 systemd[1]: kubelet.service: Failed to load environment files: No such file or directory
Apr 16 16:21:38.193021 ip-10-0-128-173 systemd[1]: kubelet.service: Failed to run 'start-pre' task: No such file or directory
Apr 16 16:21:38.193031 ip-10-0-128-173 systemd[1]: kubelet.service: Failed with result 'resources'.
Apr 16 16:21:38.193228 ip-10-0-128-173 systemd[1]: Failed to start Kubernetes Kubelet.
Apr 16 16:21:48.257015 ip-10-0-128-173 systemd[1]: kubelet.service: Failed to schedule restart job: Unit crio.service not found.
Apr 16 16:21:48.257033 ip-10-0-128-173 systemd[1]: kubelet.service: Failed with result 'resources'.
-- Boot 47d7c520b59646d9b09e6462a16bebe7 --
Apr 16 16:24:03.062569 ip-10-0-128-173 systemd[1]: Starting Kubernetes Kubelet...
Apr 16 16:24:03.656371 ip-10-0-128-173 kubenswrapper[2572]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 16:24:03.656371 ip-10-0-128-173 kubenswrapper[2572]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Apr 16 16:24:03.656371 ip-10-0-128-173 kubenswrapper[2572]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 16:24:03.656371 ip-10-0-128-173 kubenswrapper[2572]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 16 16:24:03.656371 ip-10-0-128-173 kubenswrapper[2572]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
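The five deprecation warnings above all point at the same mechanism: these options are meant to live in the KubeletConfiguration file passed via --config (on this node /etc/kubernetes/kubelet.conf, per the FLAG dump further down). As a minimal sketch, assuming the upstream kubelet.config.k8s.io/v1beta1 schema, two of the flagged settings would be carried in that file roughly as follows; note that on OpenShift this file is generated by the machine config operator, so this is illustrative rather than something to hand-edit on the node:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # config-file equivalent of --container-runtime-endpoint=/var/run/crio/crio.sock
    # (the config field takes a scheme-qualified endpoint)
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
    # config-file equivalent of --system-reserved=cpu=500m,ephemeral-storage=1Gi,memory=1Gi
    systemReserved:
      cpu: 500m
      ephemeral-storage: 1Gi
      memory: 1Gi

The --minimum-container-ttl-duration warning instead steers toward the eviction thresholds (evictionHard / evictionSoft in the same file), as its message says.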
Apr 16 16:24:03.657417 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.657292 2572 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 16 16:24:03.662097 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662081 2572 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 16 16:24:03.662097 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662097 2572 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 16 16:24:03.662158 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662101 2572 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 16 16:24:03.662158 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662105 2572 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 16 16:24:03.662158 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662107 2572 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 16 16:24:03.662158 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662110 2572 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 16 16:24:03.662158 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662113 2572 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 16 16:24:03.662158 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662116 2572 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 16 16:24:03.662158 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662119 2572 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 16 16:24:03.662158 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662121 2572 feature_gate.go:328] unrecognized feature gate: Example2
Apr 16 16:24:03.662158 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662124 2572 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 16 16:24:03.662158 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662128 2572 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 16 16:24:03.662158 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662131 2572 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 16 16:24:03.662158 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662133 2572 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 16 16:24:03.662158 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662138 2572 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 16 16:24:03.662158 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662140 2572 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 16 16:24:03.662158 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662143 2572 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 16 16:24:03.662158 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662146 2572 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 16 16:24:03.662158 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662149 2572 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 16 16:24:03.662158 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662151 2572 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 16 16:24:03.662158 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662154 2572 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 16 16:24:03.662158 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662157 2572 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 16 16:24:03.662685 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662160 2572 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 16 16:24:03.662685 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662163 2572 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 16 16:24:03.662685 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662166 2572 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 16 16:24:03.662685 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662169 2572 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 16 16:24:03.662685 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662172 2572 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 16 16:24:03.662685 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662177 2572 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 16 16:24:03.662685 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662181 2572 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 16 16:24:03.662685 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662184 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 16 16:24:03.662685 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662187 2572 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 16 16:24:03.662685 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662191 2572 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 16 16:24:03.662685 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662194 2572 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 16 16:24:03.662685 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662197 2572 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 16 16:24:03.662685 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662200 2572 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 16 16:24:03.662685 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662203 2572 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 16 16:24:03.662685 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662205 2572 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 16 16:24:03.662685 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662208 2572 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 16 16:24:03.662685 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662211 2572 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 16 16:24:03.662685 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662214 2572 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 16 16:24:03.662685 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662216 2572 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 16 16:24:03.663147 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662219 2572 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 16 16:24:03.663147 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662221 2572 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 16 16:24:03.663147 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662224 2572 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 16 16:24:03.663147 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662226 2572 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 16 16:24:03.663147 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662229 2572 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 16 16:24:03.663147 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662231 2572 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 16 16:24:03.663147 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662234 2572 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 16 16:24:03.663147 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662236 2572 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 16 16:24:03.663147 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662239 2572 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 16 16:24:03.663147 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662253 2572 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 16 16:24:03.663147 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662256 2572 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 16 16:24:03.663147 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662258 2572 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 16 16:24:03.663147 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662262 2572 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 16 16:24:03.663147 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662265 2572 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 16 16:24:03.663147 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662269 2572 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 16 16:24:03.663147 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662272 2572 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 16 16:24:03.663147 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662276 2572 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 16 16:24:03.663147 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662278 2572 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 16 16:24:03.663147 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662281 2572 feature_gate.go:328] unrecognized feature gate: Example
Apr 16 16:24:03.663147 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662284 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 16 16:24:03.663675 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662286 2572 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 16 16:24:03.663675 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662289 2572 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 16 16:24:03.663675 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662292 2572 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 16 16:24:03.663675 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662295 2572 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 16 16:24:03.663675 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662297 2572 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 16 16:24:03.663675 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662307 2572 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 16 16:24:03.663675 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662310 2572 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 16 16:24:03.663675 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662313 2572 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 16 16:24:03.663675 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662315 2572 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 16 16:24:03.663675 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662318 2572 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 16 16:24:03.663675 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662321 2572 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 16 16:24:03.663675 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662323 2572 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 16 16:24:03.663675 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662326 2572 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 16 16:24:03.663675 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662328 2572 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 16 16:24:03.663675 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662331 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 16 16:24:03.663675 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662333 2572 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 16 16:24:03.663675 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662336 2572 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 16 16:24:03.663675 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662338 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 16 16:24:03.663675 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662341 2572 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 16 16:24:03.664137 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662344 2572 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 16 16:24:03.664137 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662346 2572 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 16 16:24:03.664137 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662349 2572 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 16 16:24:03.664137 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662351 2572 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 16 16:24:03.664137 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662355 2572 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 16 16:24:03.664137 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662359 2572 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 16 16:24:03.664137 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662808 2572 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 16 16:24:03.664137 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662814 2572 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 16 16:24:03.664137 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662817 2572 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 16 16:24:03.664137 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662819 2572 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 16 16:24:03.664137 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662822 2572 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 16 16:24:03.664137 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662825 2572 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 16 16:24:03.664137 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662828 2572 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 16 16:24:03.664137 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662830 2572 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 16 16:24:03.664137 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662833 2572 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 16 16:24:03.664137 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662836 2572 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 16 16:24:03.664137 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662839 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 16 16:24:03.664137 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662841 2572 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 16 16:24:03.664137 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662851 2572 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 16 16:24:03.664621 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662853 2572 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 16 16:24:03.664621 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662856 2572 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 16 16:24:03.664621 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662859 2572 feature_gate.go:328] unrecognized feature gate: Example2
Apr 16 16:24:03.664621 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662861 2572 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 16 16:24:03.664621 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662864 2572 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 16 16:24:03.664621 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662866 2572 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 16 16:24:03.664621 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662869 2572 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 16 16:24:03.664621 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662872 2572 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 16 16:24:03.664621 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662874 2572 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 16 16:24:03.664621 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662877 2572 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 16 16:24:03.664621 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662879 2572 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 16 16:24:03.664621 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662882 2572 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 16 16:24:03.664621 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662885 2572 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 16 16:24:03.664621 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662889 2572 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 16 16:24:03.664621 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662892 2572 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 16 16:24:03.664621 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662895 2572 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 16 16:24:03.664621 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662897 2572 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 16 16:24:03.664621 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662899 2572 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 16 16:24:03.664621 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662902 2572 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 16 16:24:03.664621 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662905 2572 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 16 16:24:03.665152 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662908 2572 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 16 16:24:03.665152 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662913 2572 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 16 16:24:03.665152 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662917 2572 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 16 16:24:03.665152 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662920 2572 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 16 16:24:03.665152 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662923 2572 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 16 16:24:03.665152 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662926 2572 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 16 16:24:03.665152 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662929 2572 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 16 16:24:03.665152 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662931 2572 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 16 16:24:03.665152 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662934 2572 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 16 16:24:03.665152 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662936 2572 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 16 16:24:03.665152 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662938 2572 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 16 16:24:03.665152 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662941 2572 feature_gate.go:328] unrecognized feature gate: Example
Apr 16 16:24:03.665152 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662949 2572 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 16 16:24:03.665152 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662952 2572 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 16 16:24:03.665152 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662954 2572 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 16 16:24:03.665152 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662957 2572 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 16 16:24:03.665152 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662959 2572 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 16 16:24:03.665152 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662962 2572 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 16 16:24:03.665152 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662964 2572 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 16 16:24:03.665152 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662967 2572 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 16 16:24:03.665706 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662971 2572 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 16 16:24:03.665706 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662973 2572 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 16 16:24:03.665706 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662975 2572 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 16 16:24:03.665706 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662978 2572 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 16 16:24:03.665706 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662980 2572 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 16 16:24:03.665706 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662983 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 16 16:24:03.665706 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662985 2572 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 16 16:24:03.665706 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662988 2572 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 16 16:24:03.665706 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662990 2572 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 16 16:24:03.665706 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662993 2572 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 16 16:24:03.665706 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662996 2572 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 16 16:24:03.665706 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.662998 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 16 16:24:03.665706 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.663002 2572 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 16 16:24:03.665706 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.663005 2572 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 16 16:24:03.665706 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.663007 2572 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 16 16:24:03.665706 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.663010 2572 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 16 16:24:03.665706 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.663013 2572 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 16 16:24:03.665706 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.663015 2572 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 16 16:24:03.665706 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.663017 2572 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 16 16:24:03.665706 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.663020 2572 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 16 16:24:03.666206 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.663022 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 16 16:24:03.666206 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.663025 2572 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 16 16:24:03.666206 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.663027 2572 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 16 16:24:03.666206 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.663030 2572 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 16 16:24:03.666206 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.663032 2572 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 16 16:24:03.666206 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.663040 2572 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 16 16:24:03.666206 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.663043 2572 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 16 16:24:03.666206 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.663045 2572 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 16 16:24:03.666206 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.663047 2572 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 16 16:24:03.666206 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.663050 2572 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 16 16:24:03.666206 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.663052 2572 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 16 16:24:03.666206 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.663055 2572 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 16 16:24:03.666206 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.663057 2572 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 16 16:24:03.666206 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664265 2572 flags.go:64] FLAG: --address="0.0.0.0"
Apr 16 16:24:03.666206 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664275 2572 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Apr 16 16:24:03.666206 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664286 2572 flags.go:64] FLAG: --anonymous-auth="true"
Apr 16 16:24:03.666206 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664291 2572 flags.go:64] FLAG: --application-metrics-count-limit="100"
Apr 16 16:24:03.666206 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664295 2572 flags.go:64] FLAG: --authentication-token-webhook="false"
Apr 16 16:24:03.666206 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664298 2572 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Apr 16 16:24:03.666206 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664303 2572 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Apr 16 16:24:03.666206 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664308 2572 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Apr 16 16:24:03.666748 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664312 2572 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Apr 16 16:24:03.666748 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664314 2572 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Apr 16 16:24:03.666748 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664319 2572 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Apr 16 16:24:03.666748 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664323 2572 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Apr 16 16:24:03.666748 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664326 2572 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Apr 16 16:24:03.666748 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664329 2572 flags.go:64] FLAG: --cgroup-root=""
Apr 16 16:24:03.666748 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664332 2572 flags.go:64] FLAG: --cgroups-per-qos="true"
Apr 16 16:24:03.666748 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664335 2572 flags.go:64] FLAG: --client-ca-file=""
Apr 16 16:24:03.666748 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664338 2572 flags.go:64] FLAG: --cloud-config=""
Apr 16 16:24:03.666748 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664341 2572 flags.go:64] FLAG: --cloud-provider="external"
Apr 16 16:24:03.666748 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664344 2572 flags.go:64] FLAG: --cluster-dns="[]"
Apr 16 16:24:03.666748 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664351 2572 flags.go:64] FLAG: --cluster-domain=""
Apr 16 16:24:03.666748 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664354 2572 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Apr 16 16:24:03.666748 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664357 2572 flags.go:64] FLAG: --config-dir=""
Apr 16 16:24:03.666748 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664360 2572 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Apr 16 16:24:03.666748 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664364 2572 flags.go:64] FLAG: --container-log-max-files="5"
Apr 16 16:24:03.666748 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664367 2572 flags.go:64] FLAG: --container-log-max-size="10Mi"
Apr 16 16:24:03.666748 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664376 2572 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Apr 16 16:24:03.666748 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664380 2572 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Apr 16 16:24:03.666748 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664383 2572 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Apr 16 16:24:03.666748 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664386 2572 flags.go:64] FLAG: --contention-profiling="false"
Apr 16 16:24:03.666748 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664389 2572 flags.go:64] FLAG: --cpu-cfs-quota="true"
Apr 16 16:24:03.666748 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664392 2572 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Apr 16 16:24:03.666748 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664395 2572 flags.go:64] FLAG: --cpu-manager-policy="none"
Apr 16 16:24:03.666748 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664398 2572 flags.go:64] FLAG: --cpu-manager-policy-options=""
Apr 16 16:24:03.667426 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664402 2572 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Apr 16 16:24:03.667426 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664405 2572 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Apr 16 16:24:03.667426 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664408 2572 flags.go:64] FLAG: --enable-debugging-handlers="true"
Apr 16 16:24:03.667426 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664411 2572 flags.go:64] FLAG: --enable-load-reader="false"
Apr 16 16:24:03.667426 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664414 2572 flags.go:64] FLAG: --enable-server="true"
Apr 16 16:24:03.667426 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664417 2572 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Apr 16 16:24:03.667426 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664424 2572 flags.go:64] FLAG: --event-burst="100"
Apr 16 16:24:03.667426 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664428 2572 flags.go:64] FLAG: --event-qps="50"
Apr 16 16:24:03.667426 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664431 2572 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Apr 16 16:24:03.667426 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664434 2572 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Apr 16 16:24:03.667426 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664437 2572 flags.go:64] FLAG: --eviction-hard=""
Apr 16 16:24:03.667426 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664442 2572 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Apr 16 16:24:03.667426 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664445 2572 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Apr 16 16:24:03.667426 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664448 2572 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Apr 16 16:24:03.667426 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664451 2572 flags.go:64] FLAG: --eviction-soft=""
Apr 16 16:24:03.667426 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664454 2572 flags.go:64] FLAG: --eviction-soft-grace-period=""
Apr 16 16:24:03.667426 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664457 2572 flags.go:64] FLAG: --exit-on-lock-contention="false"
Apr 16 16:24:03.667426 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664460 2572 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Apr 16 16:24:03.667426 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664463 2572 flags.go:64] FLAG: --experimental-mounter-path=""
Apr 16 16:24:03.667426 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664465 2572 flags.go:64] FLAG: --fail-cgroupv1="false"
Apr 16 16:24:03.667426 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664468 2572 flags.go:64] FLAG: --fail-swap-on="true"
Apr 16 16:24:03.667426 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664471 2572 flags.go:64] FLAG: --feature-gates=""
Apr 16 16:24:03.667426 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664475 2572 flags.go:64] FLAG: --file-check-frequency="20s"
Apr 16 16:24:03.667426 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664478 2572 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Apr 16 16:24:03.667426 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664481 2572 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Apr 16 16:24:03.668127 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664490 2572 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Apr 16 16:24:03.668127 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664493 2572 flags.go:64] FLAG: --healthz-port="10248"
Apr 16 16:24:03.668127 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664497 2572 flags.go:64] FLAG: --help="false"
Apr 16 16:24:03.668127 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664500 2572 flags.go:64] FLAG: --hostname-override="ip-10-0-128-173.ec2.internal"
Apr 16 16:24:03.668127 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664503 2572 flags.go:64] FLAG: --housekeeping-interval="10s"
Apr 16 16:24:03.668127 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664506 2572 flags.go:64] FLAG: --http-check-frequency="20s"
Apr 16 16:24:03.668127 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664508 2572 flags.go:64] FLAG: --image-credential-provider-bin-dir="/usr/libexec/kubelet-image-credential-provider-plugins"
Apr 16 16:24:03.668127 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664512 2572 flags.go:64] FLAG: --image-credential-provider-config="/etc/kubernetes/credential-providers/ecr-credential-provider.yaml"
Apr 16 16:24:03.668127 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664515 2572 flags.go:64] FLAG: --image-gc-high-threshold="85"
Apr 16 16:24:03.668127 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664518 2572 flags.go:64] FLAG: --image-gc-low-threshold="80"
Apr 16 16:24:03.668127 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664521 2572 flags.go:64] FLAG: --image-service-endpoint=""
Apr 16 16:24:03.668127 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664524 2572 flags.go:64] FLAG: --kernel-memcg-notification="false"
Apr 16 16:24:03.668127 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664527 2572 flags.go:64] FLAG: --kube-api-burst="100"
Apr 16 16:24:03.668127 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664530 2572 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Apr 16 16:24:03.668127 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664533 2572 flags.go:64] FLAG: --kube-api-qps="50"
Apr 16 16:24:03.668127 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664536 2572 flags.go:64] FLAG: --kube-reserved=""
Apr 16 16:24:03.668127 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664539 2572 flags.go:64] FLAG: --kube-reserved-cgroup=""
Apr 16 16:24:03.668127 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664542 2572 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Apr 16 16:24:03.668127 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664545 2572 flags.go:64] FLAG: --kubelet-cgroups=""
Apr 16 16:24:03.668127 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664548 2572 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Apr 16 16:24:03.668127 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664551 2572 flags.go:64] FLAG: --lock-file=""
Apr 16 16:24:03.668127 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664554 2572 flags.go:64] FLAG: --log-cadvisor-usage="false"
Apr 16 16:24:03.668127 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664557 2572 flags.go:64] FLAG: --log-flush-frequency="5s"
Apr 16 16:24:03.668127 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664560 2572 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Apr 16 16:24:03.668804 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664565 2572 flags.go:64] FLAG: --log-json-split-stream="false"
Apr 16 16:24:03.668804 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664569 2572 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Apr 16 16:24:03.668804 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664572 2572 flags.go:64] FLAG: --log-text-split-stream="false"
Apr 16 16:24:03.668804 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664575 2572 flags.go:64] FLAG: --logging-format="text"
Apr 16 16:24:03.668804 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664578 2572 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Apr 16 16:24:03.668804 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664581 2572 flags.go:64] FLAG: --make-iptables-util-chains="true"
Apr 16 16:24:03.668804 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664584 2572 flags.go:64] FLAG: --manifest-url=""
Apr 16 16:24:03.668804 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664587 2572 flags.go:64] FLAG: --manifest-url-header=""
Apr 16 16:24:03.668804 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664591 2572 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Apr 16 16:24:03.668804 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664601 2572 flags.go:64] FLAG: --max-open-files="1000000"
Apr 16 16:24:03.668804 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664606 2572 flags.go:64] FLAG: --max-pods="110"
Apr 16 16:24:03.668804 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664609 2572 flags.go:64] FLAG: --maximum-dead-containers="-1"
Apr 16 16:24:03.668804 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664612 2572 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Apr 16 16:24:03.668804 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664615 2572 flags.go:64] FLAG: --memory-manager-policy="None"
Apr 16 16:24:03.668804 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664618 2572 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Apr 16 16:24:03.668804 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664621 2572 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Apr 16 16:24:03.668804 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664624 2572 flags.go:64] FLAG: --node-ip="0.0.0.0"
Apr 16 16:24:03.668804 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664627 2572 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhel"
Apr 16 16:24:03.668804 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664641 2572 flags.go:64] FLAG: --node-status-max-images="50"
Apr 16 16:24:03.668804 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664644 2572 flags.go:64] FLAG: --node-status-update-frequency="10s"
Apr 16 16:24:03.668804 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664647 2572 flags.go:64] FLAG: --oom-score-adj="-999"
Apr 16 16:24:03.668804 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664650 2572 flags.go:64] FLAG: --pod-cidr=""
Apr 16 16:24:03.668804 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664653 2572 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc76bab72f320de3d4105c90d73c4fb139c09e20ce0fa8dcbc0cb59920d27dec"
Apr 16 16:24:03.669464 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664659 2572 flags.go:64] FLAG: --pod-manifest-path=""
Apr 16 16:24:03.669464 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664662 2572 flags.go:64] FLAG: --pod-max-pids="-1"
Apr 16 16:24:03.669464 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664665 2572 flags.go:64] FLAG: --pods-per-core="0"
Apr 16 16:24:03.669464 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664668 2572 flags.go:64] FLAG: --port="10250"
Apr 16 16:24:03.669464 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664671 2572 flags.go:64] FLAG: --protect-kernel-defaults="false"
Apr 16 16:24:03.669464 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664674 2572 flags.go:64] FLAG: --provider-id="aws:///us-east-1a/i-0a1dde0b34573e979"
Apr 16 16:24:03.669464 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664677 2572 flags.go:64] FLAG: --qos-reserved=""
Apr 16 16:24:03.669464 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664680 2572 flags.go:64] FLAG: --read-only-port="10255"
Apr 16 16:24:03.669464 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664683 2572 flags.go:64] FLAG: --register-node="true"
Apr 16 16:24:03.669464 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664686 2572 flags.go:64] FLAG: --register-schedulable="true"
Apr 16 16:24:03.669464 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664689 2572 flags.go:64] FLAG: --register-with-taints=""
Apr 16 16:24:03.669464 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664693 2572 flags.go:64] FLAG: --registry-burst="10"
Apr 16 16:24:03.669464 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664696 2572 flags.go:64] FLAG: --registry-qps="5"
Apr 16 16:24:03.669464 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664699 2572 flags.go:64] FLAG: --reserved-cpus=""
Apr 16 16:24:03.669464 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664702 2572 flags.go:64] FLAG: --reserved-memory=""
Apr 16 16:24:03.669464 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664706 2572 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Apr 16 16:24:03.669464 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664709 2572 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Apr 16 16:24:03.669464 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664712 2572 flags.go:64] FLAG: --rotate-certificates="false"
Apr 16 16:24:03.669464 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664714 2572 flags.go:64] FLAG: --rotate-server-certificates="false"
Apr 16 16:24:03.669464 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664725 2572 flags.go:64] FLAG: --runonce="false"
Apr 16 16:24:03.669464 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664728 2572 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Apr 16 16:24:03.669464 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664731 2572 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Apr 16 16:24:03.669464 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664734 2572 flags.go:64] FLAG: --seccomp-default="false"
Apr 16 16:24:03.669464 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664737 2572 flags.go:64] FLAG: --serialize-image-pulls="true"
Apr 16 16:24:03.669464 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664740 2572 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Apr 16 16:24:03.669464 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664743 2572 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Apr 16 16:24:03.670184 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664746 2572 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Apr 16 16:24:03.670184 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664749 2572 flags.go:64] FLAG: --storage-driver-password="root"
Apr 16 16:24:03.670184 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664752 2572 flags.go:64] FLAG: --storage-driver-secure="false"
Apr 16 16:24:03.670184 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664755 2572 flags.go:64] FLAG: --storage-driver-table="stats"
Apr 16 16:24:03.670184 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664762 2572 flags.go:64] FLAG: --storage-driver-user="root"
Apr 16 16:24:03.670184 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664765 2572 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Apr 16 16:24:03.670184 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664774 2572 flags.go:64] FLAG: --sync-frequency="1m0s"
Apr 16 16:24:03.670184 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664777 2572 flags.go:64] FLAG: --system-cgroups=""
Apr 16 16:24:03.670184 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664780 2572 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Apr 16 16:24:03.670184 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664787 2572 flags.go:64] FLAG: --system-reserved-cgroup=""
Apr 16 16:24:03.670184 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664790 2572 flags.go:64] FLAG: --tls-cert-file=""
Apr 16 16:24:03.670184 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664793 2572 flags.go:64] FLAG: --tls-cipher-suites="[]"
Apr 16 16:24:03.670184 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664800 2572 flags.go:64] FLAG: --tls-min-version=""
Apr 16 16:24:03.670184 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664803 2572 flags.go:64] FLAG: --tls-private-key-file=""
Apr 16 16:24:03.670184 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664806 2572 flags.go:64] FLAG: --topology-manager-policy="none"
Apr 16 16:24:03.670184 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664809 2572 flags.go:64] FLAG: --topology-manager-policy-options=""
Apr 16 16:24:03.670184 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664813 2572 flags.go:64] FLAG: --topology-manager-scope="container"
Apr 16 16:24:03.670184 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664816 2572 flags.go:64] FLAG: --v="2"
Apr 16 16:24:03.670184 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664824 2572 flags.go:64] FLAG: --version="false"
Apr 16 16:24:03.670184 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664829 2572 flags.go:64] FLAG: --vmodule=""
Apr 16 16:24:03.670184 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664833 2572 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Apr 16 16:24:03.670184 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.664836 2572 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Apr 16 16:24:03.670184 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.664978 2572 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 16 16:24:03.670184 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.664982 2572 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 16 16:24:03.670879 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.664986 2572 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 16 16:24:03.670879 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.664989 2572 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 16 16:24:03.670879 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.664995 2572 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 16 16:24:03.670879 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.664998 2572 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 16 16:24:03.670879 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665001 2572 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 16 16:24:03.670879 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665004 2572 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 16 16:24:03.670879 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665007 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 16 16:24:03.670879 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665010 2572 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 16 16:24:03.670879 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665012 2572 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 16 16:24:03.670879 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665015 2572 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 16 16:24:03.670879 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665017 2572 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 16 16:24:03.670879 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665020 2572 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 16 16:24:03.670879 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665024 2572 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 16 16:24:03.670879 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665027 2572 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 16 16:24:03.670879 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665029 2572 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 16 16:24:03.670879 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665032 2572 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 16 16:24:03.670879 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665034 2572 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 16 16:24:03.670879 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665037 2572 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 16 16:24:03.670879 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665040 2572 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 16 16:24:03.671437 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665042 2572 feature_gate.go:328] unrecognized feature gate: Example2
Apr 16 16:24:03.671437 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665045 2572 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 16 16:24:03.671437 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665047 2572 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 16 16:24:03.671437 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665050 2572 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 16 16:24:03.671437 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665053 2572 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 16 16:24:03.671437 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665055 2572 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 16 16:24:03.671437 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665058 2572 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 16 16:24:03.671437 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665060 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 16 16:24:03.671437 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665063 2572 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 16 16:24:03.671437 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665065 2572 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 16 16:24:03.671437 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665068 2572 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 16 16:24:03.671437 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665070 2572 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 16 16:24:03.671437 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665073 2572 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 16 16:24:03.671437 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665076 2572 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 16 16:24:03.671437 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665078 2572 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 16 16:24:03.671437 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665081 2572 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 16 16:24:03.671437 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665084 2572 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 16 16:24:03.671437 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665087 2572 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 16 16:24:03.671437 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665089 2572 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 16 16:24:03.671437 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665092 2572 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 16 16:24:03.671931 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665096 2572 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 16 16:24:03.671931 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665099 2572 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 16 16:24:03.671931 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665102 2572 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 16 16:24:03.671931 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665105 2572 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 16 16:24:03.671931 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665108 2572 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 16 16:24:03.671931 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665112 2572 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 16 16:24:03.671931 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665115 2572 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 16 16:24:03.671931 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665118 2572 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 16 16:24:03.671931 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665121 2572 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 16 16:24:03.671931 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665123 2572 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 16 16:24:03.671931 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665126 2572 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 16 16:24:03.671931 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665129 2572 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 16 16:24:03.671931 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665131 2572 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 16 16:24:03.671931 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665134 2572 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 16 16:24:03.671931 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665136 2572 feature_gate.go:328] unrecognized feature gate: Example
Apr 16 16:24:03.671931 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665139 2572 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 16 16:24:03.671931 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665142 2572 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 16 16:24:03.671931 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665146 2572 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 16 16:24:03.671931 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665151 2572 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 16 16:24:03.672427 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665154 2572 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 16 16:24:03.672427 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665157 2572 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 16 16:24:03.672427 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665160 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 16 16:24:03.672427 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665162 2572 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 16 16:24:03.672427 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665165 2572 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 16 16:24:03.672427 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665168 2572 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 16 16:24:03.672427 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665170 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 16 16:24:03.672427 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665173 2572 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 16 16:24:03.672427 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665176 2572 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 16 16:24:03.672427 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665178 2572 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 16 16:24:03.672427 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665181 2572 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 16 16:24:03.672427 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665184 2572 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 16 16:24:03.672427 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665187 2572 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 16 16:24:03.672427 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665189 2572 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 16 16:24:03.672427 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665192 2572 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 16 16:24:03.672427 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665194 2572 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 16 16:24:03.672427 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665197 2572 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 16 16:24:03.672427 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665199 2572 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 16 16:24:03.672427 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665203 2572 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 16 16:24:03.672427 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665206 2572 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 16 16:24:03.672928 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665208 2572 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 16 16:24:03.672928 ip-10-0-128-173 kubenswrapper[2572]: W0416 
16:24:03.665211 2572 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 16 16:24:03.672928 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665213 2572 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 16 16:24:03.672928 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665216 2572 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 16 16:24:03.672928 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665219 2572 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 16 16:24:03.672928 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.665221 2572 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 16 16:24:03.672928 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.665954 2572 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 16 16:24:03.672928 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.672499 2572 server.go:530] "Kubelet version" kubeletVersion="v1.33.9" Apr 16 16:24:03.672928 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.672517 2572 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 16:24:03.672928 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672565 2572 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 16 16:24:03.672928 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672570 2572 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 16 16:24:03.672928 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672574 2572 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 16 16:24:03.672928 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672578 2572 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 16 16:24:03.672928 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672581 2572 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 16 16:24:03.672928 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672584 2572 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 16 16:24:03.673374 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672586 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 16 16:24:03.673374 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672589 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 16 16:24:03.673374 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672592 2572 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 16 16:24:03.673374 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672595 2572 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 16 16:24:03.673374 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672597 2572 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 16 16:24:03.673374 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672600 2572 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 16 
16:24:03.673374 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672602 2572 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 16 16:24:03.673374 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672605 2572 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 16 16:24:03.673374 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672608 2572 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 16 16:24:03.673374 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672610 2572 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 16 16:24:03.673374 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672613 2572 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 16 16:24:03.673374 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672616 2572 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 16 16:24:03.673374 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672618 2572 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 16 16:24:03.673374 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672621 2572 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 16 16:24:03.673374 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672623 2572 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 16 16:24:03.673374 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672626 2572 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 16 16:24:03.673374 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672629 2572 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 16 16:24:03.673374 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672632 2572 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 16 16:24:03.673374 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672636 2572 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Apr 16 16:24:03.673856 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672641 2572 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 16 16:24:03.673856 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672644 2572 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 16 16:24:03.673856 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672647 2572 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 16 16:24:03.673856 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672650 2572 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 16 16:24:03.673856 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672653 2572 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 16 16:24:03.673856 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672656 2572 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 16 16:24:03.673856 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672668 2572 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 16 16:24:03.673856 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672671 2572 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 16 16:24:03.673856 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672674 2572 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 16 16:24:03.673856 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672676 2572 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 16 16:24:03.673856 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672679 2572 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 16 16:24:03.673856 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672682 2572 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 16 16:24:03.673856 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672684 2572 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 16 16:24:03.673856 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672687 2572 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 16 16:24:03.673856 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672690 2572 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 16 16:24:03.673856 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672692 2572 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 16 16:24:03.673856 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672695 2572 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 16 16:24:03.673856 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672697 2572 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 16 16:24:03.673856 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672700 2572 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 16 16:24:03.673856 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672703 2572 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 16 16:24:03.674445 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672705 2572 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 16 16:24:03.674445 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672708 2572 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 16 16:24:03.674445 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672711 2572 
feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 16 16:24:03.674445 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672713 2572 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 16 16:24:03.674445 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672716 2572 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 16 16:24:03.674445 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672719 2572 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 16 16:24:03.674445 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672721 2572 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 16 16:24:03.674445 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672724 2572 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 16 16:24:03.674445 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672726 2572 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 16 16:24:03.674445 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672729 2572 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 16 16:24:03.674445 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672732 2572 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 16 16:24:03.674445 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672734 2572 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 16 16:24:03.674445 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672737 2572 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 16 16:24:03.674445 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672739 2572 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 16 16:24:03.674445 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672742 2572 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 16 16:24:03.674445 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672745 2572 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 16 16:24:03.674445 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672748 2572 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 16 16:24:03.674445 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672750 2572 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 16 16:24:03.674445 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672753 2572 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 16 16:24:03.674445 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672758 2572 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Apr 16 16:24:03.674933 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672761 2572 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 16 16:24:03.674933 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672765 2572 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 16 16:24:03.674933 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672767 2572 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 16 16:24:03.674933 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672770 2572 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 16 16:24:03.674933 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672772 2572 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 16 16:24:03.674933 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672775 2572 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 16 16:24:03.674933 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672778 2572 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 16 16:24:03.674933 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672781 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 16 16:24:03.674933 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672783 2572 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 16 16:24:03.674933 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672786 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 16 16:24:03.674933 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672788 2572 feature_gate.go:328] unrecognized feature gate: Example Apr 16 16:24:03.674933 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672791 2572 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 16 16:24:03.674933 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672794 2572 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 16 16:24:03.674933 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672796 2572 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 16 16:24:03.674933 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672819 2572 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 16 16:24:03.674933 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672823 2572 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 16 16:24:03.674933 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672827 2572 feature_gate.go:328] unrecognized feature gate: Example2 Apr 16 16:24:03.674933 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672831 2572 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 16 16:24:03.674933 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672834 2572 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 16 16:24:03.674933 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672837 2572 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 16 16:24:03.675463 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672840 2572 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 16 16:24:03.675463 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.672845 2572 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true 
MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 16 16:24:03.675463 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672961 2572 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 16 16:24:03.675463 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672966 2572 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 16 16:24:03.675463 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672969 2572 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 16 16:24:03.675463 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672972 2572 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 16 16:24:03.675463 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672975 2572 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 16 16:24:03.675463 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672979 2572 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 16 16:24:03.675463 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672981 2572 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 16 16:24:03.675463 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672984 2572 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 16 16:24:03.675463 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672988 2572 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Apr 16 16:24:03.675463 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672992 2572 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 16 16:24:03.675463 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.672997 2572 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
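The warning floods above and below this point are the kubelet resolving the same feature-gate map several times during startup. Names the embedded Kubernetes feature-gate registry does not know (the OpenShift-specific gates such as RouteAdvertisements or ClusterMonitoringConfig) surface as "unrecognized feature gate" warnings, gates that are already GA or deprecated but still set explicitly produce the "will be removed in a future release" lines, and whatever survives is printed as the resolved map at feature_gate.go:384. A minimal sketch of that mechanism using k8s.io/component-base/featuregate, with illustrative gate names and defaults rather than the set compiled into this kubelet:

```go
package main

import (
	"fmt"

	"k8s.io/component-base/featuregate"
)

func main() {
	// Mutable gate registry, analogous to what the kubelet builds at startup.
	gates := featuregate.NewFeatureGate()

	// Register the gates this binary knows about (illustrative set only).
	if err := gates.Add(map[featuregate.Feature]featuregate.FeatureSpec{
		"NodeSwap": {Default: false, PreRelease: featuregate.Beta},
		"KMSv1":    {Default: false, PreRelease: featuregate.Deprecated},
	}); err != nil {
		panic(err)
	}

	// Overriding a deprecated gate succeeds, but is what yields the
	// "Setting deprecated feature gate KMSv1=true" warning seen above.
	if err := gates.Set("NodeSwap=false,KMSv1=true"); err != nil {
		panic(err)
	}

	// Overriding a name that was never registered fails; upstream returns
	// an "unrecognized feature gate: X" error, which this kubelet build
	// appears to downgrade to the W-level lines in the log.
	if err := gates.Set("RouteAdvertisements=true"); err != nil {
		fmt.Println(err)
	}

	fmt.Println("KMSv1 enabled:", gates.Enabled("KMSv1"))
}
```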
Apr 16 16:24:03.675463 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673000 2572 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 16 16:24:03.675823 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673003 2572 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 16 16:24:03.675823 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673006 2572 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 16 16:24:03.675823 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673009 2572 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 16 16:24:03.675823 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673012 2572 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 16 16:24:03.675823 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673014 2572 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 16 16:24:03.675823 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673017 2572 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 16 16:24:03.675823 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673019 2572 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 16 16:24:03.675823 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673021 2572 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 16 16:24:03.675823 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673024 2572 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 16 16:24:03.675823 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673026 2572 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 16 16:24:03.675823 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673029 2572 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 16 16:24:03.675823 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673032 2572 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 16 16:24:03.675823 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673034 2572 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 16 16:24:03.675823 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673037 2572 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 16 16:24:03.675823 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673039 2572 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 16 16:24:03.675823 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673042 2572 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 16 16:24:03.675823 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673044 2572 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 16 16:24:03.675823 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673047 2572 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 16 16:24:03.675823 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673050 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 16 16:24:03.675823 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673052 2572 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 16 16:24:03.676346 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673054 2572 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 16 16:24:03.676346 ip-10-0-128-173 kubenswrapper[2572]: W0416 
16:24:03.673057 2572 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 16 16:24:03.676346 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673059 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 16 16:24:03.676346 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673062 2572 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 16 16:24:03.676346 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673065 2572 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 16 16:24:03.676346 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673067 2572 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 16 16:24:03.676346 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673070 2572 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 16 16:24:03.676346 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673072 2572 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 16 16:24:03.676346 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673075 2572 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 16 16:24:03.676346 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673078 2572 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 16 16:24:03.676346 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673081 2572 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 16 16:24:03.676346 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673084 2572 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 16 16:24:03.676346 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673086 2572 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 16 16:24:03.676346 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673089 2572 feature_gate.go:328] unrecognized feature gate: Example2 Apr 16 16:24:03.676346 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673091 2572 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 16 16:24:03.676346 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673094 2572 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 16 16:24:03.676346 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673096 2572 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 16 16:24:03.676346 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673099 2572 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 16 16:24:03.676346 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673102 2572 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 16 16:24:03.676346 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673104 2572 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 16 16:24:03.676871 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673107 2572 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 16 16:24:03.676871 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673110 2572 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 16 16:24:03.676871 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673112 2572 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 16 16:24:03.676871 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673115 2572 feature_gate.go:328] unrecognized feature gate: 
KMSEncryptionProvider Apr 16 16:24:03.676871 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673118 2572 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 16 16:24:03.676871 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673120 2572 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 16 16:24:03.676871 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673123 2572 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 16 16:24:03.676871 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673126 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 16 16:24:03.676871 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673128 2572 feature_gate.go:328] unrecognized feature gate: Example Apr 16 16:24:03.676871 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673131 2572 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 16 16:24:03.676871 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673133 2572 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 16 16:24:03.676871 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673136 2572 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 16 16:24:03.676871 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673138 2572 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 16 16:24:03.676871 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673141 2572 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 16 16:24:03.676871 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673143 2572 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 16 16:24:03.676871 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673146 2572 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 16 16:24:03.676871 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673148 2572 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 16 16:24:03.676871 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673151 2572 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 16 16:24:03.676871 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673154 2572 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 16 16:24:03.676871 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673157 2572 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 16 16:24:03.677399 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673159 2572 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 16 16:24:03.677399 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673162 2572 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 16 16:24:03.677399 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673165 2572 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 16 16:24:03.677399 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673167 2572 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 16 16:24:03.677399 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673170 2572 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 16 16:24:03.677399 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673172 2572 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 16 16:24:03.677399 ip-10-0-128-173 kubenswrapper[2572]: 
W0416 16:24:03.673175 2572 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 16 16:24:03.677399 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673177 2572 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 16 16:24:03.677399 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673180 2572 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 16 16:24:03.677399 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673182 2572 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 16 16:24:03.677399 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673185 2572 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 16 16:24:03.677399 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673187 2572 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 16 16:24:03.677399 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673190 2572 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 16 16:24:03.677399 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:03.673192 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 16 16:24:03.677399 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.673196 2572 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 16 16:24:03.677399 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.673322 2572 server.go:962] "Client rotation is on, will bootstrap in background" Apr 16 16:24:03.677791 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.675710 2572 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Apr 16 16:24:03.677791 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.676898 2572 server.go:1019] "Starting client certificate rotation" Apr 16 16:24:03.677791 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.677004 2572 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Apr 16 16:24:03.677791 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.677043 2572 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Apr 16 16:24:03.708445 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.708419 2572 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Apr 16 16:24:03.715433 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.715394 2572 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Apr 16 16:24:03.738066 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.738041 2572 log.go:25] "Validated CRI v1 runtime API" Apr 16 16:24:03.746095 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.746068 2572 log.go:25] "Validated CRI v1 image API" Apr 16 16:24:03.747839 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.747815 2572 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" 
type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Apr 16 16:24:03.748917 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.748895 2572 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 16 16:24:03.758437 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.758415 2572 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/nvme0n1p2 7a9006e0-b483-4a7f-a3e4-6ac687ee0ade:/dev/nvme0n1p3 96a3b4a0-04b1-4d9f-8d17-99c1eef002f5:/dev/nvme0n1p4] Apr 16 16:24:03.758512 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.758454 2572 fs.go:136] Filesystem partitions: map[/dev/nvme0n1p3:{mountpoint:/boot major:259 minor:3 fsType:ext4 blockSize:0} /dev/nvme0n1p4:{mountpoint:/var major:259 minor:4 fsType:xfs blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Apr 16 16:24:03.764771 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.764650 2572 manager.go:217] Machine: {Timestamp:2026-04-16 16:24:03.762376773 +0000 UTC m=+0.541223656 CPUVendorID:GenuineIntel NumCores:8 NumPhysicalCores:4 NumSockets:1 CpuFrequency:3116068 MemoryCapacity:33164492800 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ec2d98b7f8a934def836b2fa8b9f87cb SystemUUID:ec2d98b7-f8a9-34de-f836-b2fa8b9f87cb BootID:47d7c520-b596-46d9-b09e-6462a16bebe7 Filesystems:[{Device:/dev/nvme0n1p4 DeviceMajor:259 DeviceMinor:4 Capacity:128243970048 Type:vfs Inodes:62651840 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6098944 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16582246400 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/nvme0n1p3 DeviceMajor:259 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16582246400 Type:vfs Inodes:4048400 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6632898560 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[259:0:{Name:nvme0n1 Major:259 Minor:0 Size:128849018880 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:02:78:68:20:45:bf Speed:0 Mtu:9001} {Name:ens5 MacAddress:02:78:68:20:45:bf Speed:0 Mtu:9001} {Name:ovs-system MacAddress:2e:74:2b:48:20:63 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33164492800 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 4] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:1 Threads:[1 5] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:2 Threads:[2 6] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:3 Threads:[3 7] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:}] 
Caches:[{Id:0 Size:37486592 Type:Unified Level:3}] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Apr 16 16:24:03.764771 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.764764 2572 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Apr 16 16:24:03.764916 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.764903 2572 manager.go:233] Version: {KernelVersion:5.14.0-570.104.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20260401-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Apr 16 16:24:03.765341 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.765313 2572 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 16:24:03.765496 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.765343 2572 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-10-0-128-173.ec2.internal","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 16 16:24:03.765546 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.765505 2572 topology_manager.go:138] "Creating topology manager with none policy" Apr 16 16:24:03.765546 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.765514 2572 container_manager_linux.go:306] "Creating device plugin manager" Apr 16 16:24:03.765546 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.765527 2572 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Apr 16 16:24:03.766521 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.766510 2572 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Apr 16 16:24:03.767841 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.767830 2572 state_mem.go:36] "Initialized new in-memory state store" Apr 16 16:24:03.767958 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.767949 2572 
server.go:1267] "Using root directory" path="/var/lib/kubelet" Apr 16 16:24:03.771342 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.771324 2572 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-shlsw" Apr 16 16:24:03.771384 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.771357 2572 kubelet.go:491] "Attempting to sync node with API server" Apr 16 16:24:03.771384 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.771374 2572 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 16:24:03.771444 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.771386 2572 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Apr 16 16:24:03.771444 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.771402 2572 kubelet.go:397] "Adding apiserver pod source" Apr 16 16:24:03.771444 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.771423 2572 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 16:24:03.772665 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.772651 2572 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Apr 16 16:24:03.772718 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.772678 2572 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Apr 16 16:24:03.776502 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.776486 2572 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.10-2.rhaos4.20.gita4d0894.el9" apiVersion="v1" Apr 16 16:24:03.780003 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.779916 2572 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 16:24:03.781234 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.781204 2572 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-shlsw" Apr 16 16:24:03.782041 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.782025 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Apr 16 16:24:03.782106 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.782051 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Apr 16 16:24:03.782106 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.782062 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Apr 16 16:24:03.782106 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.782071 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Apr 16 16:24:03.782106 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.782079 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Apr 16 16:24:03.782106 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.782087 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Apr 16 16:24:03.782106 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.782096 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Apr 16 16:24:03.782106 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.782103 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Apr 16 16:24:03.782320 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.782115 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc" Apr 16 
16:24:03.782320 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.782124 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Apr 16 16:24:03.782320 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.782136 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Apr 16 16:24:03.782320 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.782149 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Apr 16 16:24:03.783301 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.783283 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Apr 16 16:24:03.783301 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.783299 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Apr 16 16:24:03.786910 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.786893 2572 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 16 16:24:03.787000 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.786934 2572 server.go:1295] "Started kubelet" Apr 16 16:24:03.787041 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.786991 2572 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 16:24:03.787082 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.787020 2572 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 16:24:03.787114 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.787087 2572 server_v1.go:47] "podresources" method="list" useActivePods=true Apr 16 16:24:03.787114 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.787094 2572 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Apr 16 16:24:03.787736 ip-10-0-128-173 systemd[1]: Started Kubernetes Kubelet. 
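The "Creating Container Manager object based on Node Config" dump a few records above is the in-memory mirror of settings that belong in the kubelet's KubeletConfiguration file, the same file the deprecated-flag warnings at startup point to. Below is a sketch of a kubelet.config.k8s.io/v1beta1 object carrying the reserved-resources, eviction, PID-limit, and cgroup values visible in this log; it is a reconstruction for illustration, not the node's actual configuration file:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	pidsLimit := int64(4096)

	// Values transcribed from the nodeConfig dump in this log; fields
	// not present there are left at their defaults.
	cfg := kubeletv1beta1.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "kubelet.config.k8s.io/v1beta1",
			Kind:       "KubeletConfiguration",
		},
		CgroupDriver:          "systemd", // received from the CRI runtime
		CPUManagerPolicy:      "none",
		TopologyManagerPolicy: "none",
		TopologyManagerScope:  "container",
		PodPidsLimit:          &pidsLimit,
		SystemReserved: map[string]string{
			"cpu":               "500m",
			"memory":            "1Gi",
			"ephemeral-storage": "1Gi",
		},
		// HardEvictionThresholds from the dump, as eviction-hard strings.
		EvictionHard: map[string]string{
			"memory.available":   "100Mi",
			"nodefs.available":   "10%",
			"nodefs.inodesFree":  "5%",
			"imagefs.available":  "15%",
			"imagefs.inodesFree": "5%",
		},
	}

	out, err := yaml.Marshal(&cfg)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```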
Apr 16 16:24:03.789271 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.789232 2572 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 16 16:24:03.790151 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.790136 2572 server.go:317] "Adding debug handlers to kubelet server"
Apr 16 16:24:03.790992 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.790976 2572 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 16 16:24:03.792105 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.792090 2572 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-128-173.ec2.internal" not found
Apr 16 16:24:03.796704 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.796682 2572 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving"
Apr 16 16:24:03.797317 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.797302 2572 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 16 16:24:03.797935 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.797918 2572 volume_manager.go:295] "The desired_state_of_world populator starts"
Apr 16 16:24:03.798024 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.797938 2572 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 16 16:24:03.798088 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.798031 2572 factory.go:55] Registering systemd factory
Apr 16 16:24:03.798088 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.798047 2572 factory.go:223] Registration of the systemd container factory successfully
Apr 16 16:24:03.798088 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.798050 2572 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 16 16:24:03.798215 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.798101 2572 reconstruct.go:97] "Volume reconstruction finished"
Apr 16 16:24:03.798215 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.798109 2572 reconciler.go:26] "Reconciler: start to sync state"
Apr 16 16:24:03.798337 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.798316 2572 factory.go:153] Registering CRI-O factory
Apr 16 16:24:03.798337 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.798330 2572 factory.go:223] Registration of the crio container factory successfully
Apr 16 16:24:03.798447 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.798397 2572 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Apr 16 16:24:03.798447 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.798423 2572 factory.go:103] Registering Raw factory
Apr 16 16:24:03.798447 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.798439 2572 manager.go:1196] Started watching for new ooms in manager
Apr 16 16:24:03.798877 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.798851 2572 manager.go:319] Starting recovery of all containers
Apr 16 16:24:03.798877 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:03.798857 2572 kubelet.go:1618] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Apr 16 16:24:03.799108 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:03.799059 2572 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-128-173.ec2.internal\" not found"
Apr 16 16:24:03.799758 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.799740 2572 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 16 16:24:03.802909 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:03.802888 2572 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-10-0-128-173.ec2.internal\" not found" node="ip-10-0-128-173.ec2.internal"
Apr 16 16:24:03.811676 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.811654 2572 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-128-173.ec2.internal" not found
Apr 16 16:24:03.812299 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.812280 2572 manager.go:324] Recovery completed
Apr 16 16:24:03.816618 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.816605 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 16 16:24:03.820029 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.820014 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-128-173.ec2.internal" event="NodeHasSufficientMemory"
Apr 16 16:24:03.820103 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.820048 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-128-173.ec2.internal" event="NodeHasNoDiskPressure"
Apr 16 16:24:03.820103 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.820063 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-128-173.ec2.internal" event="NodeHasSufficientPID"
Apr 16 16:24:03.820630 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.820615 2572 cpu_manager.go:222] "Starting CPU manager" policy="none"
Apr 16 16:24:03.820630 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.820628 2572 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s"
Apr 16 16:24:03.820709 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.820645 2572 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 16:24:03.822514 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.822499 2572 policy_none.go:49] "None policy: Start"
Apr 16 16:24:03.822582 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.822520 2572 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 16 16:24:03.822582 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.822533 2572 state_mem.go:35] "Initializing new in-memory state store"
Apr 16 16:24:03.863959 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.863941 2572 manager.go:341] "Starting Device Plugin manager"
Apr 16 16:24:03.881749 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:03.863981 2572 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 16 16:24:03.881749 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.863994 2572 server.go:85] "Starting device plugin registration server"
Apr 16 16:24:03.881749 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.864308 2572 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 16 16:24:03.881749 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.864322 2572 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 16 16:24:03.881749 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.864423 2572 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Apr 16 16:24:03.881749 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.864519 2572 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Apr 16 16:24:03.881749 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.864529 2572 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 16 16:24:03.881749 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:03.865051 2572 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\""
Apr 16 16:24:03.881749 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:03.865094 2572 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-128-173.ec2.internal\" not found"
Apr 16 16:24:03.881749 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.868328 2572 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-128-173.ec2.internal" not found
Apr 16 16:24:03.903107 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.903062 2572 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 16 16:24:03.904341 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.904324 2572 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 16 16:24:03.904393 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.904358 2572 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 16 16:24:03.904393 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.904383 2572 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 16 16:24:03.904393 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.904392 2572 kubelet.go:2451] "Starting kubelet main sync loop"
Apr 16 16:24:03.904513 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:03.904433 2572 kubelet.go:2475] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Apr 16 16:24:03.908709 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.908655 2572 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 16 16:24:03.965559 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.965390 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 16 16:24:03.968208 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.968190 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-128-173.ec2.internal" event="NodeHasSufficientMemory"
Apr 16 16:24:03.968317 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.968221 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-128-173.ec2.internal" event="NodeHasNoDiskPressure"
Apr 16 16:24:03.968317 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.968231 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-128-173.ec2.internal" event="NodeHasSufficientPID"
Apr 16 16:24:03.968317 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.968278 2572 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-128-173.ec2.internal"
Apr 16 16:24:03.977633 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:03.977615 2572 kubelet_node_status.go:81] "Successfully registered node" node="ip-10-0-128-173.ec2.internal"
Apr 16 16:24:04.005552 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.005526 2572 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-128-173.ec2.internal","kube-system/kube-apiserver-proxy-ip-10-0-128-173.ec2.internal"]
Apr 16 16:24:04.010687 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.010671 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-128-173.ec2.internal"
Apr 16 16:24:04.010760 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.010678 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-128-173.ec2.internal"
Apr 16 16:24:04.036931 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.036905 2572 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-128-173.ec2.internal"
Apr 16 16:24:04.041573 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.041556 2572 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-128-173.ec2.internal"
Apr 16 16:24:04.053775 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.053751 2572 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 16 16:24:04.053884 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.053756 2572 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 16 16:24:04.099855 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.099823 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b93a143e50d6e55a1c399c4b395a32e8-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-128-173.ec2.internal\" (UID: \"b93a143e50d6e55a1c399c4b395a32e8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-128-173.ec2.internal"
Apr 16 16:24:04.100004 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.099860 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b93a143e50d6e55a1c399c4b395a32e8-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-128-173.ec2.internal\" (UID: \"b93a143e50d6e55a1c399c4b395a32e8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-128-173.ec2.internal"
Apr 16 16:24:04.100004 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.099884 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/a454133692b7d59775381b8452362b38-config\") pod \"kube-apiserver-proxy-ip-10-0-128-173.ec2.internal\" (UID: \"a454133692b7d59775381b8452362b38\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-128-173.ec2.internal"
Apr 16 16:24:04.200468 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.200395 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/a454133692b7d59775381b8452362b38-config\") pod \"kube-apiserver-proxy-ip-10-0-128-173.ec2.internal\" (UID: \"a454133692b7d59775381b8452362b38\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-128-173.ec2.internal"
Apr 16 16:24:04.200468 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.200427 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b93a143e50d6e55a1c399c4b395a32e8-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-128-173.ec2.internal\" (UID: \"b93a143e50d6e55a1c399c4b395a32e8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-128-173.ec2.internal"
Apr 16 16:24:04.200468 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.200450 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b93a143e50d6e55a1c399c4b395a32e8-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-128-173.ec2.internal\" (UID: \"b93a143e50d6e55a1c399c4b395a32e8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-128-173.ec2.internal"
\"kubernetes.io/host-path/b93a143e50d6e55a1c399c4b395a32e8-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-128-173.ec2.internal\" (UID: \"b93a143e50d6e55a1c399c4b395a32e8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-128-173.ec2.internal" Apr 16 16:24:04.200667 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.200495 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/a454133692b7d59775381b8452362b38-config\") pod \"kube-apiserver-proxy-ip-10-0-128-173.ec2.internal\" (UID: \"a454133692b7d59775381b8452362b38\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-128-173.ec2.internal" Apr 16 16:24:04.200667 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.200506 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b93a143e50d6e55a1c399c4b395a32e8-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-128-173.ec2.internal\" (UID: \"b93a143e50d6e55a1c399c4b395a32e8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-128-173.ec2.internal" Apr 16 16:24:04.200667 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.200510 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b93a143e50d6e55a1c399c4b395a32e8-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-128-173.ec2.internal\" (UID: \"b93a143e50d6e55a1c399c4b395a32e8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-128-173.ec2.internal" Apr 16 16:24:04.358163 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.358128 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-128-173.ec2.internal" Apr 16 16:24:04.358349 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.358125 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-128-173.ec2.internal" Apr 16 16:24:04.677107 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.677015 2572 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Apr 16 16:24:04.677624 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.677219 2572 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Apr 16 16:24:04.677624 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.677224 2572 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Apr 16 16:24:04.677624 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.677268 2572 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Apr 16 16:24:04.772033 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.772004 2572 apiserver.go:52] "Watching apiserver" Apr 16 16:24:04.783323 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.783294 2572 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2028-04-15 16:19:03 +0000 UTC" deadline="2028-01-18 15:40:20.338999736 +0000 UTC" Apr 16 16:24:04.783323 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.783321 2572 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="15407h16m15.555681901s" Apr 16 16:24:04.786523 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.786505 2572 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Apr 16 16:24:04.788919 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.788900 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-42w7g","openshift-multus/multus-additional-cni-plugins-gbkft","openshift-multus/multus-jzd86","openshift-network-operator/iptables-alerter-7962v","kube-system/konnectivity-agent-94pcx","kube-system/kube-apiserver-proxy-ip-10-0-128-173.ec2.internal","openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-f5rkn","openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-128-173.ec2.internal","openshift-multus/network-metrics-daemon-29gd5","openshift-network-diagnostics/network-check-target-225wk","openshift-ovn-kubernetes/ovnkube-node-csjjh","openshift-cluster-node-tuning-operator/tuned-rvs52","openshift-dns/node-resolver-xl6x7"] Apr 16 16:24:04.793270 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.793221 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.795390 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.795367 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-xl6x7" Apr 16 16:24:04.796807 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.796785 2572 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Apr 16 16:24:04.796970 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.796952 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Apr 16 16:24:04.797020 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.796971 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Apr 16 16:24:04.797229 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.797217 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-svp7v\"" Apr 16 16:24:04.797527 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.797512 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-f5rkn" Apr 16 16:24:04.797895 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.797875 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Apr 16 16:24:04.797991 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.797973 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Apr 16 16:24:04.798049 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.797976 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Apr 16 16:24:04.798674 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.798526 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-84d79\"" Apr 16 16:24:04.798674 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.798554 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Apr 16 16:24:04.800239 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.800215 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"openshift-service-ca.crt\"" Apr 16 16:24:04.800344 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.800273 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-sa-dockercfg-d5kdb\"" Apr 16 16:24:04.800662 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.800631 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-metrics-serving-cert\"" Apr 16 16:24:04.800743 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.800606 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"kube-root-ca.crt\"" Apr 16 16:24:04.801614 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.801590 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-7962v" Apr 16 16:24:04.801707 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.801662 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/konnectivity-agent-94pcx" Apr 16 16:24:04.802784 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.802760 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/e894fdf2-0ea2-43a3-a40d-03eca2359199-konnectivity-ca\") pod \"konnectivity-agent-94pcx\" (UID: \"e894fdf2-0ea2-43a3-a40d-03eca2359199\") " pod="kube-system/konnectivity-agent-94pcx" Apr 16 16:24:04.802863 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.802790 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-host-run-multus-certs\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.802863 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.802811 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7ds2\" (UniqueName: \"kubernetes.io/projected/0c4bc245-ba8c-4779-b28e-2628fba0297f-kube-api-access-k7ds2\") pod \"aws-ebs-csi-driver-node-f5rkn\" (UID: \"0c4bc245-ba8c-4779-b28e-2628fba0297f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-f5rkn" Apr 16 16:24:04.802863 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.802834 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-etc-kubernetes\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.803028 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.802868 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-host-var-lib-kubelet\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.803028 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.802896 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c4bc245-ba8c-4779-b28e-2628fba0297f-kubelet-dir\") pod \"aws-ebs-csi-driver-node-f5rkn\" (UID: \"0c4bc245-ba8c-4779-b28e-2628fba0297f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-f5rkn" Apr 16 16:24:04.803028 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.802921 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/0c4bc245-ba8c-4779-b28e-2628fba0297f-device-dir\") pod \"aws-ebs-csi-driver-node-f5rkn\" (UID: \"0c4bc245-ba8c-4779-b28e-2628fba0297f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-f5rkn" Apr 16 16:24:04.803028 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.802942 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-multus-cni-dir\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.803028 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.802957 
2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-os-release\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.803028 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.802974 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a24802b0-e2f8-4e44-8234-6c63975e7440-cni-binary-copy\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.803028 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.802989 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-multus-socket-dir-parent\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.803028 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.803014 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5t58\" (UniqueName: \"kubernetes.io/projected/a24802b0-e2f8-4e44-8234-6c63975e7440-kube-api-access-h5t58\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.803028 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.803029 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/e894fdf2-0ea2-43a3-a40d-03eca2359199-agent-certs\") pod \"konnectivity-agent-94pcx\" (UID: \"e894fdf2-0ea2-43a3-a40d-03eca2359199\") " pod="kube-system/konnectivity-agent-94pcx" Apr 16 16:24:04.803425 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.803044 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/0c4bc245-ba8c-4779-b28e-2628fba0297f-etc-selinux\") pod \"aws-ebs-csi-driver-node-f5rkn\" (UID: \"0c4bc245-ba8c-4779-b28e-2628fba0297f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-f5rkn" Apr 16 16:24:04.803425 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.803063 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/9adc281d-523f-4fc2-8784-22b5802e5ef5-iptables-alerter-script\") pod \"iptables-alerter-7962v\" (UID: \"9adc281d-523f-4fc2-8784-22b5802e5ef5\") " pod="openshift-network-operator/iptables-alerter-7962v" Apr 16 16:24:04.803425 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.803084 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-host-run-k8s-cni-cncf-io\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.803425 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.803122 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-host-var-lib-cni-bin\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.803425 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.803163 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7b3cb7b5-891a-4226-b146-221600c3471c-tmp-dir\") pod \"node-resolver-xl6x7\" (UID: \"7b3cb7b5-891a-4226-b146-221600c3471c\") " pod="openshift-dns/node-resolver-xl6x7" Apr 16 16:24:04.803425 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.803190 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-system-cni-dir\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.803425 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.803206 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-host-run-netns\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.803425 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.803239 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-multus-conf-dir\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.803425 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.803291 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0c4bc245-ba8c-4779-b28e-2628fba0297f-socket-dir\") pod \"aws-ebs-csi-driver-node-f5rkn\" (UID: \"0c4bc245-ba8c-4779-b28e-2628fba0297f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-f5rkn" Apr 16 16:24:04.803425 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.803326 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9adc281d-523f-4fc2-8784-22b5802e5ef5-host-slash\") pod \"iptables-alerter-7962v\" (UID: \"9adc281d-523f-4fc2-8784-22b5802e5ef5\") " pod="openshift-network-operator/iptables-alerter-7962v" Apr 16 16:24:04.803425 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.803368 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzlhr\" (UniqueName: \"kubernetes.io/projected/7b3cb7b5-891a-4226-b146-221600c3471c-kube-api-access-hzlhr\") pod \"node-resolver-xl6x7\" (UID: \"7b3cb7b5-891a-4226-b146-221600c3471c\") " pod="openshift-dns/node-resolver-xl6x7" Apr 16 16:24:04.803425 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.803385 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-hostroot\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.803425 ip-10-0-128-173 kubenswrapper[2572]: I0416 
16:24:04.803408 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-cnibin\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.803425 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.803428 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a24802b0-e2f8-4e44-8234-6c63975e7440-multus-daemon-config\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.803854 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.803452 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-host-var-lib-cni-multus\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.803854 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.803467 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0c4bc245-ba8c-4779-b28e-2628fba0297f-registration-dir\") pod \"aws-ebs-csi-driver-node-f5rkn\" (UID: \"0c4bc245-ba8c-4779-b28e-2628fba0297f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-f5rkn" Apr 16 16:24:04.803854 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.803481 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxlqq\" (UniqueName: \"kubernetes.io/projected/9adc281d-523f-4fc2-8784-22b5802e5ef5-kube-api-access-sxlqq\") pod \"iptables-alerter-7962v\" (UID: \"9adc281d-523f-4fc2-8784-22b5802e5ef5\") " pod="openshift-network-operator/iptables-alerter-7962v" Apr 16 16:24:04.803854 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.803494 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7b3cb7b5-891a-4226-b146-221600c3471c-hosts-file\") pod \"node-resolver-xl6x7\" (UID: \"7b3cb7b5-891a-4226-b146-221600c3471c\") " pod="openshift-dns/node-resolver-xl6x7" Apr 16 16:24:04.803854 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.803508 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/0c4bc245-ba8c-4779-b28e-2628fba0297f-sys-fs\") pod \"aws-ebs-csi-driver-node-f5rkn\" (UID: \"0c4bc245-ba8c-4779-b28e-2628fba0297f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-f5rkn" Apr 16 16:24:04.804001 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.803896 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-42w7g" Apr 16 16:24:04.804316 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.804300 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-dockercfg-2q4vz\"" Apr 16 16:24:04.805954 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.805937 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-29gd5" Apr 16 16:24:04.806042 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:04.806023 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29gd5" podUID="5ec99558-e99b-4661-be0c-b68d311f226a" Apr 16 16:24:04.807503 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.807481 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Apr 16 16:24:04.807610 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.807491 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kube-system\"/\"konnectivity-ca-bundle\"" Apr 16 16:24:04.807670 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.807616 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Apr 16 16:24:04.807725 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.807716 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"default-dockercfg-qcdj8\"" Apr 16 16:24:04.807926 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.807908 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Apr 16 16:24:04.808000 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.807930 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Apr 16 16:24:04.808000 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.807934 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Apr 16 16:24:04.808000 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.807934 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"konnectivity-agent\"" Apr 16 16:24:04.808000 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.807939 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Apr 16 16:24:04.808190 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.808181 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-225wk" Apr 16 16:24:04.808332 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:04.808227 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-225wk" podUID="b3cf9670-2b3f-4b96-aff6-4c414454a507" Apr 16 16:24:04.809889 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.809868 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-f86px\"" Apr 16 16:24:04.810491 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.810473 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:04.812770 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.812756 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:04.814377 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.814362 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Apr 16 16:24:04.814443 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.814402 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Apr 16 16:24:04.814480 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.814448 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Apr 16 16:24:04.814910 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.814897 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-62zc7\"" Apr 16 16:24:04.814948 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.814931 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Apr 16 16:24:04.814948 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.814941 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Apr 16 16:24:04.815025 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.815002 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gbkft" Apr 16 16:24:04.816121 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.816106 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Apr 16 16:24:04.816488 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.816476 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"openshift-service-ca.crt\"" Apr 16 16:24:04.818023 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.818003 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"kube-root-ca.crt\"" Apr 16 16:24:04.818145 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.818128 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"tuned-dockercfg-jtp8d\"" Apr 16 16:24:04.818298 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.818284 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Apr 16 16:24:04.818507 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.818491 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Apr 16 16:24:04.819019 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.819005 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-rnf7g\"" Apr 16 16:24:04.837064 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.837044 2572 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" 
type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Apr 16 16:24:04.837138 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.837120 2572 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-wnnjb" Apr 16 16:24:04.842503 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.842489 2572 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-wnnjb" Apr 16 16:24:04.898857 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.898837 2572 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 16 16:24:04.904366 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.904348 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-etc-openvswitch\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:04.904460 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.904373 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4d0d3535-86d0-4270-8087-38d613e5a0a5-cni-binary-copy\") pod \"multus-additional-cni-plugins-gbkft\" (UID: \"4d0d3535-86d0-4270-8087-38d613e5a0a5\") " pod="openshift-multus/multus-additional-cni-plugins-gbkft" Apr 16 16:24:04.904460 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.904393 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grpxz\" (UniqueName: \"kubernetes.io/projected/5ec99558-e99b-4661-be0c-b68d311f226a-kube-api-access-grpxz\") pod \"network-metrics-daemon-29gd5\" (UID: \"5ec99558-e99b-4661-be0c-b68d311f226a\") " pod="openshift-multus/network-metrics-daemon-29gd5" Apr 16 16:24:04.904460 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.904409 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/387e0e6a-5ac4-4dc7-a1df-d6fae9770c8d-serviceca\") pod \"node-ca-42w7g\" (UID: \"387e0e6a-5ac4-4dc7-a1df-d6fae9770c8d\") " pod="openshift-image-registry/node-ca-42w7g" Apr 16 16:24:04.904460 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.904444 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fb61c011-2a8e-4af9-b55e-b16d5f329215-tmp\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:04.904604 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.904473 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-hostroot\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.904604 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.904492 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a24802b0-e2f8-4e44-8234-6c63975e7440-multus-daemon-config\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " 
pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.904604 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.904509 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4d0d3535-86d0-4270-8087-38d613e5a0a5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gbkft\" (UID: \"4d0d3535-86d0-4270-8087-38d613e5a0a5\") " pod="openshift-multus/multus-additional-cni-plugins-gbkft" Apr 16 16:24:04.904604 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.904522 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-hostroot\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.904604 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.904527 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-host-var-lib-cni-multus\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.904604 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.904542 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sxlqq\" (UniqueName: \"kubernetes.io/projected/9adc281d-523f-4fc2-8784-22b5802e5ef5-kube-api-access-sxlqq\") pod \"iptables-alerter-7962v\" (UID: \"9adc281d-523f-4fc2-8784-22b5802e5ef5\") " pod="openshift-network-operator/iptables-alerter-7962v" Apr 16 16:24:04.904604 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.904558 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7b3cb7b5-891a-4226-b146-221600c3471c-hosts-file\") pod \"node-resolver-xl6x7\" (UID: \"7b3cb7b5-891a-4226-b146-221600c3471c\") " pod="openshift-dns/node-resolver-xl6x7" Apr 16 16:24:04.904819 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.904610 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7b3cb7b5-891a-4226-b146-221600c3471c-hosts-file\") pod \"node-resolver-xl6x7\" (UID: \"7b3cb7b5-891a-4226-b146-221600c3471c\") " pod="openshift-dns/node-resolver-xl6x7" Apr 16 16:24:04.904819 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.904598 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-host-var-lib-cni-multus\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.904819 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.904630 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/fb61c011-2a8e-4af9-b55e-b16d5f329215-etc-tuned\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:04.904819 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.904665 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/0c4bc245-ba8c-4779-b28e-2628fba0297f-sys-fs\") pod \"aws-ebs-csi-driver-node-f5rkn\" 
(UID: \"0c4bc245-ba8c-4779-b28e-2628fba0297f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-f5rkn" Apr 16 16:24:04.904819 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.904689 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/e894fdf2-0ea2-43a3-a40d-03eca2359199-konnectivity-ca\") pod \"konnectivity-agent-94pcx\" (UID: \"e894fdf2-0ea2-43a3-a40d-03eca2359199\") " pod="kube-system/konnectivity-agent-94pcx" Apr 16 16:24:04.904819 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.904722 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4d0d3535-86d0-4270-8087-38d613e5a0a5-cnibin\") pod \"multus-additional-cni-plugins-gbkft\" (UID: \"4d0d3535-86d0-4270-8087-38d613e5a0a5\") " pod="openshift-multus/multus-additional-cni-plugins-gbkft" Apr 16 16:24:04.904819 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.904760 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/0c4bc245-ba8c-4779-b28e-2628fba0297f-sys-fs\") pod \"aws-ebs-csi-driver-node-f5rkn\" (UID: \"0c4bc245-ba8c-4779-b28e-2628fba0297f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-f5rkn" Apr 16 16:24:04.904819 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.904764 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-run-ovn\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:04.904819 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.904805 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6f3381a4-23e0-42e8-b782-d6d4e6915910-ovnkube-config\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:04.905144 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.904823 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fb61c011-2a8e-4af9-b55e-b16d5f329215-host\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:04.905144 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.904846 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-etc-kubernetes\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.905144 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.904872 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ec99558-e99b-4661-be0c-b68d311f226a-metrics-certs\") pod \"network-metrics-daemon-29gd5\" (UID: \"5ec99558-e99b-4661-be0c-b68d311f226a\") " pod="openshift-multus/network-metrics-daemon-29gd5" Apr 16 16:24:04.905144 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.904896 2572 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-run-systemd\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:04.905144 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.904899 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-etc-kubernetes\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.905144 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.904922 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-host-var-lib-kubelet\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.905144 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.904942 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c4bc245-ba8c-4779-b28e-2628fba0297f-kubelet-dir\") pod \"aws-ebs-csi-driver-node-f5rkn\" (UID: \"0c4bc245-ba8c-4779-b28e-2628fba0297f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-f5rkn" Apr 16 16:24:04.905144 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.904957 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-multus-cni-dir\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.905144 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.904971 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-os-release\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.905144 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.904993 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6f3381a4-23e0-42e8-b782-d6d4e6915910-ovn-node-metrics-cert\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:04.905144 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905011 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fb61c011-2a8e-4af9-b55e-b16d5f329215-sys\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:04.905144 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905024 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fb61c011-2a8e-4af9-b55e-b16d5f329215-var-lib-kubelet\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:04.905144 ip-10-0-128-173 
Apr 16 16:24:04.905144 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905035 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c4bc245-ba8c-4779-b28e-2628fba0297f-kubelet-dir\") pod \"aws-ebs-csi-driver-node-f5rkn\" (UID: \"0c4bc245-ba8c-4779-b28e-2628fba0297f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-f5rkn"
Apr 16 16:24:04.905144 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905042 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/9adc281d-523f-4fc2-8784-22b5802e5ef5-iptables-alerter-script\") pod \"iptables-alerter-7962v\" (UID: \"9adc281d-523f-4fc2-8784-22b5802e5ef5\") " pod="openshift-network-operator/iptables-alerter-7962v"
Apr 16 16:24:04.905144 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905103 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-host-var-lib-kubelet\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86"
Apr 16 16:24:04.905144 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905104 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-os-release\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86"
Apr 16 16:24:04.905144 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905127 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-multus-cni-dir\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86"
Apr 16 16:24:04.905918 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905135 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vjgs\" (UniqueName: \"kubernetes.io/projected/b3cf9670-2b3f-4b96-aff6-4c414454a507-kube-api-access-7vjgs\") pod \"network-check-target-225wk\" (UID: \"b3cf9670-2b3f-4b96-aff6-4c414454a507\") " pod="openshift-network-diagnostics/network-check-target-225wk"
Apr 16 16:24:04.905918 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905172 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/387e0e6a-5ac4-4dc7-a1df-d6fae9770c8d-host\") pod \"node-ca-42w7g\" (UID: \"387e0e6a-5ac4-4dc7-a1df-d6fae9770c8d\") " pod="openshift-image-registry/node-ca-42w7g"
Apr 16 16:24:04.905918 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905207 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fb61c011-2a8e-4af9-b55e-b16d5f329215-run\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52"
Apr 16 16:24:04.905918 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905233 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g67wj\" (UniqueName: \"kubernetes.io/projected/fb61c011-2a8e-4af9-b55e-b16d5f329215-kube-api-access-g67wj\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52"
Apr 16 16:24:04.905918 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905280 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-multus-conf-dir\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86"
Apr 16 16:24:04.905918 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905320 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/fb61c011-2a8e-4af9-b55e-b16d5f329215-etc-sysconfig\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52"
Apr 16 16:24:04.905918 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905346 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0c4bc245-ba8c-4779-b28e-2628fba0297f-socket-dir\") pod \"aws-ebs-csi-driver-node-f5rkn\" (UID: \"0c4bc245-ba8c-4779-b28e-2628fba0297f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-f5rkn"
Apr 16 16:24:04.905918 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905371 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9adc281d-523f-4fc2-8784-22b5802e5ef5-host-slash\") pod \"iptables-alerter-7962v\" (UID: \"9adc281d-523f-4fc2-8784-22b5802e5ef5\") " pod="openshift-network-operator/iptables-alerter-7962v"
Apr 16 16:24:04.905918 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905379 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-multus-conf-dir\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86"
Apr 16 16:24:04.905918 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905396 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hzlhr\" (UniqueName: \"kubernetes.io/projected/7b3cb7b5-891a-4226-b146-221600c3471c-kube-api-access-hzlhr\") pod \"node-resolver-xl6x7\" (UID: \"7b3cb7b5-891a-4226-b146-221600c3471c\") " pod="openshift-dns/node-resolver-xl6x7"
Apr 16 16:24:04.905918 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905425 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-systemd-units\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh"
Apr 16 16:24:04.905918 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905434 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9adc281d-523f-4fc2-8784-22b5802e5ef5-host-slash\") pod \"iptables-alerter-7962v\" (UID: \"9adc281d-523f-4fc2-8784-22b5802e5ef5\") " pod="openshift-network-operator/iptables-alerter-7962v"
Apr 16 16:24:04.905918 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905449 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-host-run-netns\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh"
Apr 16 16:24:04.905918 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905458 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/9adc281d-523f-4fc2-8784-22b5802e5ef5-iptables-alerter-script\") pod \"iptables-alerter-7962v\" (UID: \"9adc281d-523f-4fc2-8784-22b5802e5ef5\") " pod="openshift-network-operator/iptables-alerter-7962v"
Apr 16 16:24:04.905918 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905488 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-var-lib-openvswitch\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh"
Apr 16 16:24:04.905918 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905517 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-host-cni-netd\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh"
Apr 16 16:24:04.905918 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905517 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0c4bc245-ba8c-4779-b28e-2628fba0297f-socket-dir\") pod \"aws-ebs-csi-driver-node-f5rkn\" (UID: \"0c4bc245-ba8c-4779-b28e-2628fba0297f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-f5rkn"
Apr 16 16:24:04.906438 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905552 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6f3381a4-23e0-42e8-b782-d6d4e6915910-ovnkube-script-lib\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh"
Apr 16 16:24:04.906438 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905586 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-cnibin\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86"
Apr 16 16:24:04.906438 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905613 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4d0d3535-86d0-4270-8087-38d613e5a0a5-os-release\") pod \"multus-additional-cni-plugins-gbkft\" (UID: \"4d0d3535-86d0-4270-8087-38d613e5a0a5\") " pod="openshift-multus/multus-additional-cni-plugins-gbkft"
Apr 16 16:24:04.906438 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905637 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-host-run-ovn-kubernetes\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh"
Apr 16 16:24:04.906438 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905662 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb61c011-2a8e-4af9-b55e-b16d5f329215-etc-kubernetes\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52"
Apr 16 16:24:04.906438 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905688 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-cnibin\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86"
Apr 16 16:24:04.906438 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905701 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0c4bc245-ba8c-4779-b28e-2628fba0297f-registration-dir\") pod \"aws-ebs-csi-driver-node-f5rkn\" (UID: \"0c4bc245-ba8c-4779-b28e-2628fba0297f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-f5rkn"
Apr 16 16:24:04.906438 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905733 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcmbz\" (UniqueName: \"kubernetes.io/projected/4d0d3535-86d0-4270-8087-38d613e5a0a5-kube-api-access-tcmbz\") pod \"multus-additional-cni-plugins-gbkft\" (UID: \"4d0d3535-86d0-4270-8087-38d613e5a0a5\") " pod="openshift-multus/multus-additional-cni-plugins-gbkft"
Apr 16 16:24:04.906438 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905764 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0c4bc245-ba8c-4779-b28e-2628fba0297f-registration-dir\") pod \"aws-ebs-csi-driver-node-f5rkn\" (UID: \"0c4bc245-ba8c-4779-b28e-2628fba0297f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-f5rkn"
Apr 16 16:24:04.906438 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905768 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-host-kubelet\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh"
Apr 16 16:24:04.906438 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905795 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-host-run-multus-certs\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86"
Apr 16 16:24:04.906438 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905821 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br5l2\" (UniqueName: \"kubernetes.io/projected/387e0e6a-5ac4-4dc7-a1df-d6fae9770c8d-kube-api-access-br5l2\") pod \"node-ca-42w7g\" (UID: \"387e0e6a-5ac4-4dc7-a1df-d6fae9770c8d\") " pod="openshift-image-registry/node-ca-42w7g"
Apr 16 16:24:04.906438 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905890 2572 reconciler_common.go:251]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/fb61c011-2a8e-4af9-b55e-b16d5f329215-etc-sysctl-d\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:04.906438 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905903 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-host-run-multus-certs\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.906438 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905935 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k7ds2\" (UniqueName: \"kubernetes.io/projected/0c4bc245-ba8c-4779-b28e-2628fba0297f-kube-api-access-k7ds2\") pod \"aws-ebs-csi-driver-node-f5rkn\" (UID: \"0c4bc245-ba8c-4779-b28e-2628fba0297f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-f5rkn" Apr 16 16:24:04.906438 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905964 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/e894fdf2-0ea2-43a3-a40d-03eca2359199-konnectivity-ca\") pod \"konnectivity-agent-94pcx\" (UID: \"e894fdf2-0ea2-43a3-a40d-03eca2359199\") " pod="kube-system/konnectivity-agent-94pcx" Apr 16 16:24:04.906438 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905970 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4d0d3535-86d0-4270-8087-38d613e5a0a5-system-cni-dir\") pod \"multus-additional-cni-plugins-gbkft\" (UID: \"4d0d3535-86d0-4270-8087-38d613e5a0a5\") " pod="openshift-multus/multus-additional-cni-plugins-gbkft" Apr 16 16:24:04.907008 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.905935 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a24802b0-e2f8-4e44-8234-6c63975e7440-multus-daemon-config\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.907008 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906008 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4d0d3535-86d0-4270-8087-38d613e5a0a5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gbkft\" (UID: \"4d0d3535-86d0-4270-8087-38d613e5a0a5\") " pod="openshift-multus/multus-additional-cni-plugins-gbkft" Apr 16 16:24:04.907008 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906038 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/4d0d3535-86d0-4270-8087-38d613e5a0a5-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-gbkft\" (UID: \"4d0d3535-86d0-4270-8087-38d613e5a0a5\") " pod="openshift-multus/multus-additional-cni-plugins-gbkft" Apr 16 16:24:04.907008 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906065 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:04.907008 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906095 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxj2h\" (UniqueName: \"kubernetes.io/projected/6f3381a4-23e0-42e8-b782-d6d4e6915910-kube-api-access-vxj2h\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:04.907008 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906124 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/0c4bc245-ba8c-4779-b28e-2628fba0297f-device-dir\") pod \"aws-ebs-csi-driver-node-f5rkn\" (UID: \"0c4bc245-ba8c-4779-b28e-2628fba0297f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-f5rkn" Apr 16 16:24:04.907008 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906149 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a24802b0-e2f8-4e44-8234-6c63975e7440-cni-binary-copy\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.907008 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906173 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-multus-socket-dir-parent\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.907008 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906192 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h5t58\" (UniqueName: \"kubernetes.io/projected/a24802b0-e2f8-4e44-8234-6c63975e7440-kube-api-access-h5t58\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.907008 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906207 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-host-slash\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:04.907008 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906171 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/0c4bc245-ba8c-4779-b28e-2628fba0297f-device-dir\") pod \"aws-ebs-csi-driver-node-f5rkn\" (UID: \"0c4bc245-ba8c-4779-b28e-2628fba0297f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-f5rkn" Apr 16 16:24:04.907008 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906226 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-run-openvswitch\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:04.907008 
ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906241 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-multus-socket-dir-parent\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.907008 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906274 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-log-socket\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:04.907008 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906346 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb61c011-2a8e-4af9-b55e-b16d5f329215-lib-modules\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:04.907008 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906379 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/e894fdf2-0ea2-43a3-a40d-03eca2359199-agent-certs\") pod \"konnectivity-agent-94pcx\" (UID: \"e894fdf2-0ea2-43a3-a40d-03eca2359199\") " pod="kube-system/konnectivity-agent-94pcx" Apr 16 16:24:04.907008 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906406 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/0c4bc245-ba8c-4779-b28e-2628fba0297f-etc-selinux\") pod \"aws-ebs-csi-driver-node-f5rkn\" (UID: \"0c4bc245-ba8c-4779-b28e-2628fba0297f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-f5rkn" Apr 16 16:24:04.907501 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906422 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-host-run-k8s-cni-cncf-io\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.907501 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906437 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-host-var-lib-cni-bin\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.907501 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906464 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7b3cb7b5-891a-4226-b146-221600c3471c-tmp-dir\") pod \"node-resolver-xl6x7\" (UID: \"7b3cb7b5-891a-4226-b146-221600c3471c\") " pod="openshift-dns/node-resolver-xl6x7" Apr 16 16:24:04.907501 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906491 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6f3381a4-23e0-42e8-b782-d6d4e6915910-env-overrides\") pod \"ovnkube-node-csjjh\" (UID: 
\"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:04.907501 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906496 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-host-run-k8s-cni-cncf-io\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.907501 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906515 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/fb61c011-2a8e-4af9-b55e-b16d5f329215-etc-modprobe-d\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:04.907501 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906517 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/0c4bc245-ba8c-4779-b28e-2628fba0297f-etc-selinux\") pod \"aws-ebs-csi-driver-node-f5rkn\" (UID: \"0c4bc245-ba8c-4779-b28e-2628fba0297f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-f5rkn" Apr 16 16:24:04.907501 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906516 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-host-var-lib-cni-bin\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.907501 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906538 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/fb61c011-2a8e-4af9-b55e-b16d5f329215-etc-sysctl-conf\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:04.907501 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906558 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-system-cni-dir\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.907501 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906575 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-host-run-netns\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.907501 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906600 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-node-log\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:04.907501 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906612 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-system-cni-dir\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.907501 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906633 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a24802b0-e2f8-4e44-8234-6c63975e7440-host-run-netns\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.907501 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906635 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a24802b0-e2f8-4e44-8234-6c63975e7440-cni-binary-copy\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.907501 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906681 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-host-cni-bin\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:04.907501 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906703 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/fb61c011-2a8e-4af9-b55e-b16d5f329215-etc-systemd\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:04.907501 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906707 2572 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Apr 16 16:24:04.908079 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.906754 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7b3cb7b5-891a-4226-b146-221600c3471c-tmp-dir\") pod \"node-resolver-xl6x7\" (UID: \"7b3cb7b5-891a-4226-b146-221600c3471c\") " pod="openshift-dns/node-resolver-xl6x7" Apr 16 16:24:04.909894 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.909878 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/e894fdf2-0ea2-43a3-a40d-03eca2359199-agent-certs\") pod \"konnectivity-agent-94pcx\" (UID: \"e894fdf2-0ea2-43a3-a40d-03eca2359199\") " pod="kube-system/konnectivity-agent-94pcx" Apr 16 16:24:04.938700 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.938637 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzlhr\" (UniqueName: \"kubernetes.io/projected/7b3cb7b5-891a-4226-b146-221600c3471c-kube-api-access-hzlhr\") pod \"node-resolver-xl6x7\" (UID: \"7b3cb7b5-891a-4226-b146-221600c3471c\") " pod="openshift-dns/node-resolver-xl6x7" Apr 16 16:24:04.939638 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.939610 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5t58\" (UniqueName: \"kubernetes.io/projected/a24802b0-e2f8-4e44-8234-6c63975e7440-kube-api-access-h5t58\") pod \"multus-jzd86\" (UID: \"a24802b0-e2f8-4e44-8234-6c63975e7440\") " pod="openshift-multus/multus-jzd86" Apr 16 16:24:04.941364 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.941343 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxlqq\" (UniqueName: \"kubernetes.io/projected/9adc281d-523f-4fc2-8784-22b5802e5ef5-kube-api-access-sxlqq\") pod \"iptables-alerter-7962v\" (UID: \"9adc281d-523f-4fc2-8784-22b5802e5ef5\") " pod="openshift-network-operator/iptables-alerter-7962v" Apr 16 16:24:04.942894 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:04.942877 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7ds2\" (UniqueName: \"kubernetes.io/projected/0c4bc245-ba8c-4779-b28e-2628fba0297f-kube-api-access-k7ds2\") pod \"aws-ebs-csi-driver-node-f5rkn\" (UID: \"0c4bc245-ba8c-4779-b28e-2628fba0297f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-f5rkn" Apr 16 16:24:05.007454 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.007427 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tcmbz\" (UniqueName: \"kubernetes.io/projected/4d0d3535-86d0-4270-8087-38d613e5a0a5-kube-api-access-tcmbz\") pod \"multus-additional-cni-plugins-gbkft\" (UID: \"4d0d3535-86d0-4270-8087-38d613e5a0a5\") " pod="openshift-multus/multus-additional-cni-plugins-gbkft" Apr 16 16:24:05.007590 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.007458 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-host-kubelet\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.007590 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.007476 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-br5l2\" (UniqueName: \"kubernetes.io/projected/387e0e6a-5ac4-4dc7-a1df-d6fae9770c8d-kube-api-access-br5l2\") pod \"node-ca-42w7g\" (UID: \"387e0e6a-5ac4-4dc7-a1df-d6fae9770c8d\") " pod="openshift-image-registry/node-ca-42w7g" Apr 16 16:24:05.007590 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.007493 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/fb61c011-2a8e-4af9-b55e-b16d5f329215-etc-sysctl-d\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:05.007590 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.007542 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-host-kubelet\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.007767 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.007633 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4d0d3535-86d0-4270-8087-38d613e5a0a5-system-cni-dir\") pod \"multus-additional-cni-plugins-gbkft\" (UID: \"4d0d3535-86d0-4270-8087-38d613e5a0a5\") " pod="openshift-multus/multus-additional-cni-plugins-gbkft" Apr 16 16:24:05.007767 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.007650 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4d0d3535-86d0-4270-8087-38d613e5a0a5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gbkft\" (UID: \"4d0d3535-86d0-4270-8087-38d613e5a0a5\") " pod="openshift-multus/multus-additional-cni-plugins-gbkft" Apr 16 16:24:05.007767 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.007708 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/fb61c011-2a8e-4af9-b55e-b16d5f329215-etc-sysctl-d\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:05.007767 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.007705 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4d0d3535-86d0-4270-8087-38d613e5a0a5-system-cni-dir\") pod \"multus-additional-cni-plugins-gbkft\" (UID: \"4d0d3535-86d0-4270-8087-38d613e5a0a5\") " pod="openshift-multus/multus-additional-cni-plugins-gbkft" Apr 16 16:24:05.007767 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.007734 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/4d0d3535-86d0-4270-8087-38d613e5a0a5-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-gbkft\" (UID: \"4d0d3535-86d0-4270-8087-38d613e5a0a5\") " pod="openshift-multus/multus-additional-cni-plugins-gbkft" Apr 16 16:24:05.007994 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.007774 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-csjjh\" (UID: 
\"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.007994 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.007801 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vxj2h\" (UniqueName: \"kubernetes.io/projected/6f3381a4-23e0-42e8-b782-d6d4e6915910-kube-api-access-vxj2h\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.007994 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.007830 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-host-slash\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.007994 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.007855 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-run-openvswitch\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.007994 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.007894 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-host-slash\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.007994 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.007911 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-run-openvswitch\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.007994 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.007905 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.008324 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008021 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-log-socket\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.008324 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008040 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb61c011-2a8e-4af9-b55e-b16d5f329215-lib-modules\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:05.008324 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008059 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/6f3381a4-23e0-42e8-b782-d6d4e6915910-env-overrides\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.008324 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008062 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-log-socket\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.008324 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008076 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/fb61c011-2a8e-4af9-b55e-b16d5f329215-etc-modprobe-d\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:05.008324 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008143 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/fb61c011-2a8e-4af9-b55e-b16d5f329215-etc-sysctl-conf\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:05.008324 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008154 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb61c011-2a8e-4af9-b55e-b16d5f329215-lib-modules\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:05.008324 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008184 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-node-log\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.008324 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008205 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/fb61c011-2a8e-4af9-b55e-b16d5f329215-etc-modprobe-d\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:05.008324 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008209 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/4d0d3535-86d0-4270-8087-38d613e5a0a5-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-gbkft\" (UID: \"4d0d3535-86d0-4270-8087-38d613e5a0a5\") " pod="openshift-multus/multus-additional-cni-plugins-gbkft" Apr 16 16:24:05.008324 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008240 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-host-cni-bin\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.008324 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008275 2572 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-node-log\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.008324 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008286 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-host-cni-bin\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.008324 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008287 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/fb61c011-2a8e-4af9-b55e-b16d5f329215-etc-sysctl-conf\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:05.008324 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008288 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/fb61c011-2a8e-4af9-b55e-b16d5f329215-etc-systemd\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:05.008324 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008330 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/fb61c011-2a8e-4af9-b55e-b16d5f329215-etc-systemd\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:05.009016 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008342 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-etc-openvswitch\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.009016 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008295 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4d0d3535-86d0-4270-8087-38d613e5a0a5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gbkft\" (UID: \"4d0d3535-86d0-4270-8087-38d613e5a0a5\") " pod="openshift-multus/multus-additional-cni-plugins-gbkft" Apr 16 16:24:05.009016 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008371 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4d0d3535-86d0-4270-8087-38d613e5a0a5-cni-binary-copy\") pod \"multus-additional-cni-plugins-gbkft\" (UID: \"4d0d3535-86d0-4270-8087-38d613e5a0a5\") " pod="openshift-multus/multus-additional-cni-plugins-gbkft" Apr 16 16:24:05.009016 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008396 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-grpxz\" (UniqueName: \"kubernetes.io/projected/5ec99558-e99b-4661-be0c-b68d311f226a-kube-api-access-grpxz\") pod \"network-metrics-daemon-29gd5\" (UID: \"5ec99558-e99b-4661-be0c-b68d311f226a\") " pod="openshift-multus/network-metrics-daemon-29gd5" Apr 16 16:24:05.009016 
ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008400 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-etc-openvswitch\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.009016 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008422 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/387e0e6a-5ac4-4dc7-a1df-d6fae9770c8d-serviceca\") pod \"node-ca-42w7g\" (UID: \"387e0e6a-5ac4-4dc7-a1df-d6fae9770c8d\") " pod="openshift-image-registry/node-ca-42w7g" Apr 16 16:24:05.009016 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008457 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fb61c011-2a8e-4af9-b55e-b16d5f329215-tmp\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:05.009016 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008487 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4d0d3535-86d0-4270-8087-38d613e5a0a5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gbkft\" (UID: \"4d0d3535-86d0-4270-8087-38d613e5a0a5\") " pod="openshift-multus/multus-additional-cni-plugins-gbkft" Apr 16 16:24:05.009016 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008506 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/fb61c011-2a8e-4af9-b55e-b16d5f329215-etc-tuned\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:05.009016 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008524 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4d0d3535-86d0-4270-8087-38d613e5a0a5-cnibin\") pod \"multus-additional-cni-plugins-gbkft\" (UID: \"4d0d3535-86d0-4270-8087-38d613e5a0a5\") " pod="openshift-multus/multus-additional-cni-plugins-gbkft" Apr 16 16:24:05.009016 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008547 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-run-ovn\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.009016 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008569 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6f3381a4-23e0-42e8-b782-d6d4e6915910-ovnkube-config\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.009016 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008591 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4d0d3535-86d0-4270-8087-38d613e5a0a5-cnibin\") pod \"multus-additional-cni-plugins-gbkft\" (UID: \"4d0d3535-86d0-4270-8087-38d613e5a0a5\") " 
pod="openshift-multus/multus-additional-cni-plugins-gbkft" Apr 16 16:24:05.009016 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008607 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fb61c011-2a8e-4af9-b55e-b16d5f329215-host\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:05.009016 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008635 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ec99558-e99b-4661-be0c-b68d311f226a-metrics-certs\") pod \"network-metrics-daemon-29gd5\" (UID: \"5ec99558-e99b-4661-be0c-b68d311f226a\") " pod="openshift-multus/network-metrics-daemon-29gd5" Apr 16 16:24:05.009016 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008637 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-run-ovn\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.009016 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008662 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-run-systemd\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.009768 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008693 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6f3381a4-23e0-42e8-b782-d6d4e6915910-ovn-node-metrics-cert\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.009768 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008715 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fb61c011-2a8e-4af9-b55e-b16d5f329215-sys\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:05.009768 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008737 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fb61c011-2a8e-4af9-b55e-b16d5f329215-var-lib-kubelet\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:05.009768 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008732 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fb61c011-2a8e-4af9-b55e-b16d5f329215-host\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:05.009768 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008769 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7vjgs\" (UniqueName: \"kubernetes.io/projected/b3cf9670-2b3f-4b96-aff6-4c414454a507-kube-api-access-7vjgs\") pod \"network-check-target-225wk\" (UID: 
\"b3cf9670-2b3f-4b96-aff6-4c414454a507\") " pod="openshift-network-diagnostics/network-check-target-225wk" Apr 16 16:24:05.009768 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008796 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/387e0e6a-5ac4-4dc7-a1df-d6fae9770c8d-host\") pod \"node-ca-42w7g\" (UID: \"387e0e6a-5ac4-4dc7-a1df-d6fae9770c8d\") " pod="openshift-image-registry/node-ca-42w7g" Apr 16 16:24:05.009768 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008821 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fb61c011-2a8e-4af9-b55e-b16d5f329215-run\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:05.009768 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008844 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g67wj\" (UniqueName: \"kubernetes.io/projected/fb61c011-2a8e-4af9-b55e-b16d5f329215-kube-api-access-g67wj\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:05.009768 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008871 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/fb61c011-2a8e-4af9-b55e-b16d5f329215-etc-sysconfig\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:05.009768 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008899 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-systemd-units\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.009768 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008915 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4d0d3535-86d0-4270-8087-38d613e5a0a5-cni-binary-copy\") pod \"multus-additional-cni-plugins-gbkft\" (UID: \"4d0d3535-86d0-4270-8087-38d613e5a0a5\") " pod="openshift-multus/multus-additional-cni-plugins-gbkft" Apr 16 16:24:05.009768 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008961 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-run-systemd\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.009768 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008977 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-host-run-netns\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.009768 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008979 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/387e0e6a-5ac4-4dc7-a1df-d6fae9770c8d-host\") pod 
\"node-ca-42w7g\" (UID: \"387e0e6a-5ac4-4dc7-a1df-d6fae9770c8d\") " pod="openshift-image-registry/node-ca-42w7g" Apr 16 16:24:05.009768 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.009015 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4d0d3535-86d0-4270-8087-38d613e5a0a5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gbkft\" (UID: \"4d0d3535-86d0-4270-8087-38d613e5a0a5\") " pod="openshift-multus/multus-additional-cni-plugins-gbkft" Apr 16 16:24:05.009768 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.009024 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fb61c011-2a8e-4af9-b55e-b16d5f329215-run\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:05.009768 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.008920 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-host-run-netns\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.009768 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.009046 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-systemd-units\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.010327 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.009065 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-var-lib-openvswitch\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.010327 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.009091 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-host-cni-netd\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.010327 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.009118 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/fb61c011-2a8e-4af9-b55e-b16d5f329215-etc-sysconfig\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:05.010327 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.009127 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6f3381a4-23e0-42e8-b782-d6d4e6915910-ovnkube-script-lib\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.010327 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.009141 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/6f3381a4-23e0-42e8-b782-d6d4e6915910-env-overrides\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.010327 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.009156 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4d0d3535-86d0-4270-8087-38d613e5a0a5-os-release\") pod \"multus-additional-cni-plugins-gbkft\" (UID: \"4d0d3535-86d0-4270-8087-38d613e5a0a5\") " pod="openshift-multus/multus-additional-cni-plugins-gbkft" Apr 16 16:24:05.010327 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.009164 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-var-lib-openvswitch\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.010327 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.009171 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/387e0e6a-5ac4-4dc7-a1df-d6fae9770c8d-serviceca\") pod \"node-ca-42w7g\" (UID: \"387e0e6a-5ac4-4dc7-a1df-d6fae9770c8d\") " pod="openshift-image-registry/node-ca-42w7g" Apr 16 16:24:05.010327 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.009297 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-host-run-ovn-kubernetes\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.010327 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.009338 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb61c011-2a8e-4af9-b55e-b16d5f329215-etc-kubernetes\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:05.010327 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.009369 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-host-run-ovn-kubernetes\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.010327 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.009394 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4d0d3535-86d0-4270-8087-38d613e5a0a5-os-release\") pod \"multus-additional-cni-plugins-gbkft\" (UID: \"4d0d3535-86d0-4270-8087-38d613e5a0a5\") " pod="openshift-multus/multus-additional-cni-plugins-gbkft" Apr 16 16:24:05.010327 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.009429 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb61c011-2a8e-4af9-b55e-b16d5f329215-etc-kubernetes\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:05.010327 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.009458 2572 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6f3381a4-23e0-42e8-b782-d6d4e6915910-ovnkube-config\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.010327 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:05.009485 2572 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 16:24:05.010327 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.009561 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f3381a4-23e0-42e8-b782-d6d4e6915910-host-cni-netd\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.010327 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:05.009582 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ec99558-e99b-4661-be0c-b68d311f226a-metrics-certs podName:5ec99558-e99b-4661-be0c-b68d311f226a nodeName:}" failed. No retries permitted until 2026-04-16 16:24:05.509544428 +0000 UTC m=+2.288391299 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5ec99558-e99b-4661-be0c-b68d311f226a-metrics-certs") pod "network-metrics-daemon-29gd5" (UID: "5ec99558-e99b-4661-be0c-b68d311f226a") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 16:24:05.010918 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.009616 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fb61c011-2a8e-4af9-b55e-b16d5f329215-var-lib-kubelet\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:05.010918 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.009784 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fb61c011-2a8e-4af9-b55e-b16d5f329215-sys\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:05.010918 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.010155 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6f3381a4-23e0-42e8-b782-d6d4e6915910-ovnkube-script-lib\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.010918 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.010824 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fb61c011-2a8e-4af9-b55e-b16d5f329215-tmp\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:05.011261 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.011230 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6f3381a4-23e0-42e8-b782-d6d4e6915910-ovn-node-metrics-cert\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.011646 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.011620 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/fb61c011-2a8e-4af9-b55e-b16d5f329215-etc-tuned\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:05.031808 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.031780 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcmbz\" (UniqueName: \"kubernetes.io/projected/4d0d3535-86d0-4270-8087-38d613e5a0a5-kube-api-access-tcmbz\") pod \"multus-additional-cni-plugins-gbkft\" (UID: \"4d0d3535-86d0-4270-8087-38d613e5a0a5\") " pod="openshift-multus/multus-additional-cni-plugins-gbkft" Apr 16 16:24:05.033005 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:05.032975 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 16:24:05.033005 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:05.032997 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 16:24:05.033173 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:05.033010 2572 projected.go:194] Error preparing data for projected volume kube-api-access-7vjgs for pod openshift-network-diagnostics/network-check-target-225wk: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 16:24:05.033173 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:05.033076 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3cf9670-2b3f-4b96-aff6-4c414454a507-kube-api-access-7vjgs podName:b3cf9670-2b3f-4b96-aff6-4c414454a507 nodeName:}" failed. No retries permitted until 2026-04-16 16:24:05.533058498 +0000 UTC m=+2.311905373 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7vjgs" (UniqueName: "kubernetes.io/projected/b3cf9670-2b3f-4b96-aff6-4c414454a507-kube-api-access-7vjgs") pod "network-check-target-225wk" (UID: "b3cf9670-2b3f-4b96-aff6-4c414454a507") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 16:24:05.035092 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.035064 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g67wj\" (UniqueName: \"kubernetes.io/projected/fb61c011-2a8e-4af9-b55e-b16d5f329215-kube-api-access-g67wj\") pod \"tuned-rvs52\" (UID: \"fb61c011-2a8e-4af9-b55e-b16d5f329215\") " pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:05.035285 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.035263 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxj2h\" (UniqueName: \"kubernetes.io/projected/6f3381a4-23e0-42e8-b782-d6d4e6915910-kube-api-access-vxj2h\") pod \"ovnkube-node-csjjh\" (UID: \"6f3381a4-23e0-42e8-b782-d6d4e6915910\") " pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.035494 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.035474 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-grpxz\" (UniqueName: \"kubernetes.io/projected/5ec99558-e99b-4661-be0c-b68d311f226a-kube-api-access-grpxz\") pod \"network-metrics-daemon-29gd5\" (UID: \"5ec99558-e99b-4661-be0c-b68d311f226a\") " pod="openshift-multus/network-metrics-daemon-29gd5" Apr 16 16:24:05.037165 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.037146 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-br5l2\" (UniqueName: \"kubernetes.io/projected/387e0e6a-5ac4-4dc7-a1df-d6fae9770c8d-kube-api-access-br5l2\") pod \"node-ca-42w7g\" (UID: \"387e0e6a-5ac4-4dc7-a1df-d6fae9770c8d\") " pod="openshift-image-registry/node-ca-42w7g" Apr 16 16:24:05.111818 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.111789 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-jzd86" Apr 16 16:24:05.126400 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.126371 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-xl6x7" Apr 16 16:24:05.144360 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.144338 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kube-system/global-pull-secret-syncer-ngxj6"] Apr 16 16:24:05.144727 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.144713 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-f5rkn" Apr 16 16:24:05.147612 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.147596 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ngxj6" Apr 16 16:24:05.147687 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:05.147657 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-ngxj6" podUID="c1e6d4a3-5bed-4a4a-982e-2bc52481870a" Apr 16 16:24:05.150654 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.150629 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-7962v" Apr 16 16:24:05.166399 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.166379 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-94pcx" Apr 16 16:24:05.171406 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:05.171361 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb93a143e50d6e55a1c399c4b395a32e8.slice/crio-545b99c6a016d6c11b901b5184703d2d71402018e3331568da6e86aa95a766e8 WatchSource:0}: Error finding container 545b99c6a016d6c11b901b5184703d2d71402018e3331568da6e86aa95a766e8: Status 404 returned error can't find the container with id 545b99c6a016d6c11b901b5184703d2d71402018e3331568da6e86aa95a766e8 Apr 16 16:24:05.171947 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:05.171903 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9adc281d_523f_4fc2_8784_22b5802e5ef5.slice/crio-f62cf39b8189b9d163f4250907d27e9710752500d0658d48b7377c1e3da7533e WatchSource:0}: Error finding container f62cf39b8189b9d163f4250907d27e9710752500d0658d48b7377c1e3da7533e: Status 404 returned error can't find the container with id f62cf39b8189b9d163f4250907d27e9710752500d0658d48b7377c1e3da7533e Apr 16 16:24:05.172991 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:05.172968 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c4bc245_ba8c_4779_b28e_2628fba0297f.slice/crio-c4460478bcdf6e4f95c185d73ae6cfa352ce01bd59d83e21e73d8c5d3460de17 WatchSource:0}: Error finding container c4460478bcdf6e4f95c185d73ae6cfa352ce01bd59d83e21e73d8c5d3460de17: Status 404 returned error can't find the container with id c4460478bcdf6e4f95c185d73ae6cfa352ce01bd59d83e21e73d8c5d3460de17 Apr 16 16:24:05.174105 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:05.174084 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode894fdf2_0ea2_43a3_a40d_03eca2359199.slice/crio-5a87cbce183f2cd94efe50db0c57431f2437fb3bc182e0154a64e083e33fe511 WatchSource:0}: Error finding container 5a87cbce183f2cd94efe50db0c57431f2437fb3bc182e0154a64e083e33fe511: Status 404 returned error can't find the container with id 5a87cbce183f2cd94efe50db0c57431f2437fb3bc182e0154a64e083e33fe511 Apr 16 16:24:05.177189 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.177175 2572 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 16:24:05.179895 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.179880 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-42w7g" Apr 16 16:24:05.185294 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:05.185268 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod387e0e6a_5ac4_4dc7_a1df_d6fae9770c8d.slice/crio-7992af3114d6dcf292b467dcb3ba68bfe7ebb5509243e36344c4ca8ab81780a3 WatchSource:0}: Error finding container 7992af3114d6dcf292b467dcb3ba68bfe7ebb5509243e36344c4ca8ab81780a3: Status 404 returned error can't find the container with id 7992af3114d6dcf292b467dcb3ba68bfe7ebb5509243e36344c4ca8ab81780a3 Apr 16 16:24:05.209448 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.209382 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:24:05.210957 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.210930 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/c1e6d4a3-5bed-4a4a-982e-2bc52481870a-original-pull-secret\") pod \"global-pull-secret-syncer-ngxj6\" (UID: \"c1e6d4a3-5bed-4a4a-982e-2bc52481870a\") " pod="kube-system/global-pull-secret-syncer-ngxj6" Apr 16 16:24:05.211105 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.210964 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/c1e6d4a3-5bed-4a4a-982e-2bc52481870a-kubelet-config\") pod \"global-pull-secret-syncer-ngxj6\" (UID: \"c1e6d4a3-5bed-4a4a-982e-2bc52481870a\") " pod="kube-system/global-pull-secret-syncer-ngxj6" Apr 16 16:24:05.211105 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.210990 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/c1e6d4a3-5bed-4a4a-982e-2bc52481870a-dbus\") pod \"global-pull-secret-syncer-ngxj6\" (UID: \"c1e6d4a3-5bed-4a4a-982e-2bc52481870a\") " pod="kube-system/global-pull-secret-syncer-ngxj6" Apr 16 16:24:05.214699 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:05.214677 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f3381a4_23e0_42e8_b782_d6d4e6915910.slice/crio-120d2379cd3b8617683ee83cd917e06c7e3092441033a7661e04622b1ce53cc7 WatchSource:0}: Error finding container 120d2379cd3b8617683ee83cd917e06c7e3092441033a7661e04622b1ce53cc7: Status 404 returned error can't find the container with id 120d2379cd3b8617683ee83cd917e06c7e3092441033a7661e04622b1ce53cc7 Apr 16 16:24:05.217438 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.217423 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-rvs52" Apr 16 16:24:05.221902 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.221883 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gbkft" Apr 16 16:24:05.222942 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:05.222926 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb61c011_2a8e_4af9_b55e_b16d5f329215.slice/crio-2fdebf23df95bf7134631b328e35d83f14e71f5d87710618027bb98cb0f71a20 WatchSource:0}: Error finding container 2fdebf23df95bf7134631b328e35d83f14e71f5d87710618027bb98cb0f71a20: Status 404 returned error can't find the container with id 2fdebf23df95bf7134631b328e35d83f14e71f5d87710618027bb98cb0f71a20 Apr 16 16:24:05.227912 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:05.227892 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d0d3535_86d0_4270_8087_38d613e5a0a5.slice/crio-4749a7705cb00b56d63bc85c9d76e52b47d5e31ac5f9ebf8da78e6ee1095720d WatchSource:0}: Error finding container 4749a7705cb00b56d63bc85c9d76e52b47d5e31ac5f9ebf8da78e6ee1095720d: Status 404 returned error can't find the container with id 4749a7705cb00b56d63bc85c9d76e52b47d5e31ac5f9ebf8da78e6ee1095720d Apr 16 16:24:05.253317 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:05.253292 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda454133692b7d59775381b8452362b38.slice/crio-9908045ae80f8954648132733b5f0861549480acc1f2ce8d2fda9c4964c218fe WatchSource:0}: Error finding container 9908045ae80f8954648132733b5f0861549480acc1f2ce8d2fda9c4964c218fe: Status 404 returned error can't find the container with id 9908045ae80f8954648132733b5f0861549480acc1f2ce8d2fda9c4964c218fe Apr 16 16:24:05.312214 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.312183 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/c1e6d4a3-5bed-4a4a-982e-2bc52481870a-original-pull-secret\") pod \"global-pull-secret-syncer-ngxj6\" (UID: \"c1e6d4a3-5bed-4a4a-982e-2bc52481870a\") " pod="kube-system/global-pull-secret-syncer-ngxj6" Apr 16 16:24:05.312214 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.312219 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/c1e6d4a3-5bed-4a4a-982e-2bc52481870a-kubelet-config\") pod \"global-pull-secret-syncer-ngxj6\" (UID: \"c1e6d4a3-5bed-4a4a-982e-2bc52481870a\") " pod="kube-system/global-pull-secret-syncer-ngxj6" Apr 16 16:24:05.312438 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.312258 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/c1e6d4a3-5bed-4a4a-982e-2bc52481870a-dbus\") pod \"global-pull-secret-syncer-ngxj6\" (UID: \"c1e6d4a3-5bed-4a4a-982e-2bc52481870a\") " pod="kube-system/global-pull-secret-syncer-ngxj6" Apr 16 16:24:05.312438 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.312334 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/c1e6d4a3-5bed-4a4a-982e-2bc52481870a-kubelet-config\") pod \"global-pull-secret-syncer-ngxj6\" (UID: \"c1e6d4a3-5bed-4a4a-982e-2bc52481870a\") " pod="kube-system/global-pull-secret-syncer-ngxj6" Apr 16 16:24:05.312438 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:05.312366 2572 secret.go:189] Couldn't get secret kube-system/original-pull-secret: 
object "kube-system"/"original-pull-secret" not registered Apr 16 16:24:05.312438 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.312381 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/c1e6d4a3-5bed-4a4a-982e-2bc52481870a-dbus\") pod \"global-pull-secret-syncer-ngxj6\" (UID: \"c1e6d4a3-5bed-4a4a-982e-2bc52481870a\") " pod="kube-system/global-pull-secret-syncer-ngxj6" Apr 16 16:24:05.312438 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:05.312415 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1e6d4a3-5bed-4a4a-982e-2bc52481870a-original-pull-secret podName:c1e6d4a3-5bed-4a4a-982e-2bc52481870a nodeName:}" failed. No retries permitted until 2026-04-16 16:24:05.812400784 +0000 UTC m=+2.591247658 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/c1e6d4a3-5bed-4a4a-982e-2bc52481870a-original-pull-secret") pod "global-pull-secret-syncer-ngxj6" (UID: "c1e6d4a3-5bed-4a4a-982e-2bc52481870a") : object "kube-system"/"original-pull-secret" not registered Apr 16 16:24:05.427307 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:05.427280 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda24802b0_e2f8_4e44_8234_6c63975e7440.slice/crio-dd6ba3ef41012b8e6cdca794d0ea954efce9c87f0372d01f1d08d4d7e2f620a4 WatchSource:0}: Error finding container dd6ba3ef41012b8e6cdca794d0ea954efce9c87f0372d01f1d08d4d7e2f620a4: Status 404 returned error can't find the container with id dd6ba3ef41012b8e6cdca794d0ea954efce9c87f0372d01f1d08d4d7e2f620a4 Apr 16 16:24:05.493259 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.493165 2572 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Apr 16 16:24:05.513708 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.513684 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ec99558-e99b-4661-be0c-b68d311f226a-metrics-certs\") pod \"network-metrics-daemon-29gd5\" (UID: \"5ec99558-e99b-4661-be0c-b68d311f226a\") " pod="openshift-multus/network-metrics-daemon-29gd5" Apr 16 16:24:05.513841 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:05.513826 2572 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 16:24:05.513891 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:05.513881 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ec99558-e99b-4661-be0c-b68d311f226a-metrics-certs podName:5ec99558-e99b-4661-be0c-b68d311f226a nodeName:}" failed. No retries permitted until 2026-04-16 16:24:06.513866142 +0000 UTC m=+3.292713011 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5ec99558-e99b-4661-be0c-b68d311f226a-metrics-certs") pod "network-metrics-daemon-29gd5" (UID: "5ec99558-e99b-4661-be0c-b68d311f226a") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 16:24:05.615533 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.614437 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7vjgs\" (UniqueName: \"kubernetes.io/projected/b3cf9670-2b3f-4b96-aff6-4c414454a507-kube-api-access-7vjgs\") pod \"network-check-target-225wk\" (UID: \"b3cf9670-2b3f-4b96-aff6-4c414454a507\") " pod="openshift-network-diagnostics/network-check-target-225wk" Apr 16 16:24:05.615533 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:05.614711 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 16:24:05.615533 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:05.614733 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 16:24:05.615533 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:05.614745 2572 projected.go:194] Error preparing data for projected volume kube-api-access-7vjgs for pod openshift-network-diagnostics/network-check-target-225wk: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 16:24:05.615533 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:05.614807 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3cf9670-2b3f-4b96-aff6-4c414454a507-kube-api-access-7vjgs podName:b3cf9670-2b3f-4b96-aff6-4c414454a507 nodeName:}" failed. No retries permitted until 2026-04-16 16:24:06.614787594 +0000 UTC m=+3.393634483 (durationBeforeRetry 1s). 
Apr 16 16:24:05.642917 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.642742 2572 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 16 16:24:05.688481 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:05.688437 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b3cb7b5_891a_4226_b146_221600c3471c.slice/crio-70bae2cb1668ad8e492436d4883590f16355a9a60a715a0402b227c4b2ce6ce6 WatchSource:0}: Error finding container 70bae2cb1668ad8e492436d4883590f16355a9a60a715a0402b227c4b2ce6ce6: Status 404 returned error can't find the container with id 70bae2cb1668ad8e492436d4883590f16355a9a60a715a0402b227c4b2ce6ce6
Apr 16 16:24:05.816506 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.816415 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/c1e6d4a3-5bed-4a4a-982e-2bc52481870a-original-pull-secret\") pod \"global-pull-secret-syncer-ngxj6\" (UID: \"c1e6d4a3-5bed-4a4a-982e-2bc52481870a\") " pod="kube-system/global-pull-secret-syncer-ngxj6"
Apr 16 16:24:05.816673 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:05.816617 2572 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 16 16:24:05.816734 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:05.816684 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1e6d4a3-5bed-4a4a-982e-2bc52481870a-original-pull-secret podName:c1e6d4a3-5bed-4a4a-982e-2bc52481870a nodeName:}" failed. No retries permitted until 2026-04-16 16:24:06.816666034 +0000 UTC m=+3.595512921 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/c1e6d4a3-5bed-4a4a-982e-2bc52481870a-original-pull-secret") pod "global-pull-secret-syncer-ngxj6" (UID: "c1e6d4a3-5bed-4a4a-982e-2bc52481870a") : object "kube-system"/"original-pull-secret" not registered
Apr 16 16:24:05.844280 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.844222 2572 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-15 16:19:04 +0000 UTC" deadline="2027-11-18 11:30:19.467434815 +0000 UTC"
Apr 16 16:24:05.844280 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.844277 2572 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="13939h6m13.623161703s"
Apr 16 16:24:05.860868 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.860838 2572 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 16 16:24:05.905437 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.905407 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-225wk"
Apr 16 16:24:05.905632 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:05.905559 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-225wk" podUID="b3cf9670-2b3f-4b96-aff6-4c414454a507"
Apr 16 16:24:05.920653 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.920595 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-7962v" event={"ID":"9adc281d-523f-4fc2-8784-22b5802e5ef5","Type":"ContainerStarted","Data":"f62cf39b8189b9d163f4250907d27e9710752500d0658d48b7377c1e3da7533e"}
Apr 16 16:24:05.926523 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.926490 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-f5rkn" event={"ID":"0c4bc245-ba8c-4779-b28e-2628fba0297f","Type":"ContainerStarted","Data":"c4460478bcdf6e4f95c185d73ae6cfa352ce01bd59d83e21e73d8c5d3460de17"}
Apr 16 16:24:05.932624 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.932593 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xl6x7" event={"ID":"7b3cb7b5-891a-4226-b146-221600c3471c","Type":"ContainerStarted","Data":"70bae2cb1668ad8e492436d4883590f16355a9a60a715a0402b227c4b2ce6ce6"}
Apr 16 16:24:05.939182 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.939153 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jzd86" event={"ID":"a24802b0-e2f8-4e44-8234-6c63975e7440","Type":"ContainerStarted","Data":"dd6ba3ef41012b8e6cdca794d0ea954efce9c87f0372d01f1d08d4d7e2f620a4"}
Apr 16 16:24:05.947537 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.947507 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-128-173.ec2.internal" event={"ID":"a454133692b7d59775381b8452362b38","Type":"ContainerStarted","Data":"9908045ae80f8954648132733b5f0861549480acc1f2ce8d2fda9c4964c218fe"}
Apr 16 16:24:05.952547 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.952325 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbkft" event={"ID":"4d0d3535-86d0-4270-8087-38d613e5a0a5","Type":"ContainerStarted","Data":"4749a7705cb00b56d63bc85c9d76e52b47d5e31ac5f9ebf8da78e6ee1095720d"}
Apr 16 16:24:05.957286 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.957216 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-rvs52" event={"ID":"fb61c011-2a8e-4af9-b55e-b16d5f329215","Type":"ContainerStarted","Data":"2fdebf23df95bf7134631b328e35d83f14e71f5d87710618027bb98cb0f71a20"}
Apr 16 16:24:05.971778 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.971743 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-42w7g" event={"ID":"387e0e6a-5ac4-4dc7-a1df-d6fae9770c8d","Type":"ContainerStarted","Data":"7992af3114d6dcf292b467dcb3ba68bfe7ebb5509243e36344c4ca8ab81780a3"}
Apr 16 16:24:05.977620 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.977592 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-94pcx" event={"ID":"e894fdf2-0ea2-43a3-a40d-03eca2359199","Type":"ContainerStarted","Data":"5a87cbce183f2cd94efe50db0c57431f2437fb3bc182e0154a64e083e33fe511"}
Apr 16 16:24:05.985425 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.985397 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-128-173.ec2.internal" event={"ID":"b93a143e50d6e55a1c399c4b395a32e8","Type":"ContainerStarted","Data":"545b99c6a016d6c11b901b5184703d2d71402018e3331568da6e86aa95a766e8"}
Apr 16 16:24:05.994476 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:05.994439 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" event={"ID":"6f3381a4-23e0-42e8-b782-d6d4e6915910","Type":"ContainerStarted","Data":"120d2379cd3b8617683ee83cd917e06c7e3092441033a7661e04622b1ce53cc7"}
Apr 16 16:24:06.522108 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:06.522071 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ec99558-e99b-4661-be0c-b68d311f226a-metrics-certs\") pod \"network-metrics-daemon-29gd5\" (UID: \"5ec99558-e99b-4661-be0c-b68d311f226a\") " pod="openshift-multus/network-metrics-daemon-29gd5"
Apr 16 16:24:06.522336 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:06.522237 2572 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 16 16:24:06.522336 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:06.522326 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ec99558-e99b-4661-be0c-b68d311f226a-metrics-certs podName:5ec99558-e99b-4661-be0c-b68d311f226a nodeName:}" failed. No retries permitted until 2026-04-16 16:24:08.522306655 +0000 UTC m=+5.301153540 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5ec99558-e99b-4661-be0c-b68d311f226a-metrics-certs") pod "network-metrics-daemon-29gd5" (UID: "5ec99558-e99b-4661-be0c-b68d311f226a") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 16 16:24:06.622652 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:06.622614 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7vjgs\" (UniqueName: \"kubernetes.io/projected/b3cf9670-2b3f-4b96-aff6-4c414454a507-kube-api-access-7vjgs\") pod \"network-check-target-225wk\" (UID: \"b3cf9670-2b3f-4b96-aff6-4c414454a507\") " pod="openshift-network-diagnostics/network-check-target-225wk"
Apr 16 16:24:06.622863 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:06.622841 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 16 16:24:06.622924 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:06.622873 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 16 16:24:06.622924 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:06.622887 2572 projected.go:194] Error preparing data for projected volume kube-api-access-7vjgs for pod openshift-network-diagnostics/network-check-target-225wk: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 16 16:24:06.623043 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:06.622946 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3cf9670-2b3f-4b96-aff6-4c414454a507-kube-api-access-7vjgs podName:b3cf9670-2b3f-4b96-aff6-4c414454a507 nodeName:}" failed. No retries permitted until 2026-04-16 16:24:08.622927944 +0000 UTC m=+5.401774828 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-7vjgs" (UniqueName: "kubernetes.io/projected/b3cf9670-2b3f-4b96-aff6-4c414454a507-kube-api-access-7vjgs") pod "network-check-target-225wk" (UID: "b3cf9670-2b3f-4b96-aff6-4c414454a507") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 16 16:24:06.825343 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:06.825240 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/c1e6d4a3-5bed-4a4a-982e-2bc52481870a-original-pull-secret\") pod \"global-pull-secret-syncer-ngxj6\" (UID: \"c1e6d4a3-5bed-4a4a-982e-2bc52481870a\") " pod="kube-system/global-pull-secret-syncer-ngxj6"
Apr 16 16:24:06.825777 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:06.825439 2572 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 16 16:24:06.825777 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:06.825502 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1e6d4a3-5bed-4a4a-982e-2bc52481870a-original-pull-secret podName:c1e6d4a3-5bed-4a4a-982e-2bc52481870a nodeName:}" failed. No retries permitted until 2026-04-16 16:24:08.825482881 +0000 UTC m=+5.604329770 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/c1e6d4a3-5bed-4a4a-982e-2bc52481870a-original-pull-secret") pod "global-pull-secret-syncer-ngxj6" (UID: "c1e6d4a3-5bed-4a4a-982e-2bc52481870a") : object "kube-system"/"original-pull-secret" not registered
Apr 16 16:24:06.845118 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:06.845079 2572 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-15 16:19:04 +0000 UTC" deadline="2027-09-10 18:28:53.228568714 +0000 UTC"
Apr 16 16:24:06.845118 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:06.845115 2572 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="12290h4m46.383456873s"
Apr 16 16:24:06.905650 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:06.905617 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ngxj6"
Apr 16 16:24:06.905854 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:06.905756 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ngxj6" podUID="c1e6d4a3-5bed-4a4a-982e-2bc52481870a"
Apr 16 16:24:06.905933 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:06.905905 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29gd5"
Apr 16 16:24:06.906050 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:06.906026 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29gd5" podUID="5ec99558-e99b-4661-be0c-b68d311f226a"
Apr 16 16:24:07.906298 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:07.905556 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-225wk"
Apr 16 16:24:07.906298 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:07.905682 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-225wk" podUID="b3cf9670-2b3f-4b96-aff6-4c414454a507"
pod="openshift-network-diagnostics/network-check-target-225wk" podUID="b3cf9670-2b3f-4b96-aff6-4c414454a507" Apr 16 16:24:08.541345 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:08.541304 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ec99558-e99b-4661-be0c-b68d311f226a-metrics-certs\") pod \"network-metrics-daemon-29gd5\" (UID: \"5ec99558-e99b-4661-be0c-b68d311f226a\") " pod="openshift-multus/network-metrics-daemon-29gd5" Apr 16 16:24:08.541571 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:08.541546 2572 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 16:24:08.541661 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:08.541630 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ec99558-e99b-4661-be0c-b68d311f226a-metrics-certs podName:5ec99558-e99b-4661-be0c-b68d311f226a nodeName:}" failed. No retries permitted until 2026-04-16 16:24:12.541608844 +0000 UTC m=+9.320455730 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5ec99558-e99b-4661-be0c-b68d311f226a-metrics-certs") pod "network-metrics-daemon-29gd5" (UID: "5ec99558-e99b-4661-be0c-b68d311f226a") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 16:24:08.642645 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:08.642614 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7vjgs\" (UniqueName: \"kubernetes.io/projected/b3cf9670-2b3f-4b96-aff6-4c414454a507-kube-api-access-7vjgs\") pod \"network-check-target-225wk\" (UID: \"b3cf9670-2b3f-4b96-aff6-4c414454a507\") " pod="openshift-network-diagnostics/network-check-target-225wk" Apr 16 16:24:08.642825 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:08.642798 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 16:24:08.642825 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:08.642821 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 16:24:08.642939 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:08.642834 2572 projected.go:194] Error preparing data for projected volume kube-api-access-7vjgs for pod openshift-network-diagnostics/network-check-target-225wk: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 16:24:08.642939 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:08.642900 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3cf9670-2b3f-4b96-aff6-4c414454a507-kube-api-access-7vjgs podName:b3cf9670-2b3f-4b96-aff6-4c414454a507 nodeName:}" failed. No retries permitted until 2026-04-16 16:24:12.642879908 +0000 UTC m=+9.421726796 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7vjgs" (UniqueName: "kubernetes.io/projected/b3cf9670-2b3f-4b96-aff6-4c414454a507-kube-api-access-7vjgs") pod "network-check-target-225wk" (UID: "b3cf9670-2b3f-4b96-aff6-4c414454a507") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 16:24:08.844932 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:08.844848 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/c1e6d4a3-5bed-4a4a-982e-2bc52481870a-original-pull-secret\") pod \"global-pull-secret-syncer-ngxj6\" (UID: \"c1e6d4a3-5bed-4a4a-982e-2bc52481870a\") " pod="kube-system/global-pull-secret-syncer-ngxj6" Apr 16 16:24:08.845102 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:08.845000 2572 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 16 16:24:08.845102 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:08.845071 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1e6d4a3-5bed-4a4a-982e-2bc52481870a-original-pull-secret podName:c1e6d4a3-5bed-4a4a-982e-2bc52481870a nodeName:}" failed. No retries permitted until 2026-04-16 16:24:12.84505211 +0000 UTC m=+9.623898997 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/c1e6d4a3-5bed-4a4a-982e-2bc52481870a-original-pull-secret") pod "global-pull-secret-syncer-ngxj6" (UID: "c1e6d4a3-5bed-4a4a-982e-2bc52481870a") : object "kube-system"/"original-pull-secret" not registered Apr 16 16:24:08.905113 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:08.904613 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29gd5" Apr 16 16:24:08.905113 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:08.904644 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ngxj6" Apr 16 16:24:08.905113 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:08.904769 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29gd5" podUID="5ec99558-e99b-4661-be0c-b68d311f226a" Apr 16 16:24:08.905113 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:08.904909 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ngxj6" podUID="c1e6d4a3-5bed-4a4a-982e-2bc52481870a" Apr 16 16:24:09.905962 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:09.905327 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-225wk" Apr 16 16:24:09.905962 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:09.905459 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-225wk" podUID="b3cf9670-2b3f-4b96-aff6-4c414454a507" Apr 16 16:24:10.904989 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:10.904938 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ngxj6" Apr 16 16:24:10.905221 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:10.905080 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ngxj6" podUID="c1e6d4a3-5bed-4a4a-982e-2bc52481870a" Apr 16 16:24:10.905479 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:10.904938 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29gd5" Apr 16 16:24:10.905603 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:10.905490 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29gd5" podUID="5ec99558-e99b-4661-be0c-b68d311f226a" Apr 16 16:24:11.905518 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:11.905480 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-225wk" Apr 16 16:24:11.905968 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:11.905624 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-225wk" podUID="b3cf9670-2b3f-4b96-aff6-4c414454a507" Apr 16 16:24:12.577680 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:12.577599 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ec99558-e99b-4661-be0c-b68d311f226a-metrics-certs\") pod \"network-metrics-daemon-29gd5\" (UID: \"5ec99558-e99b-4661-be0c-b68d311f226a\") " pod="openshift-multus/network-metrics-daemon-29gd5" Apr 16 16:24:12.577836 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:12.577763 2572 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 16:24:12.577878 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:12.577842 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ec99558-e99b-4661-be0c-b68d311f226a-metrics-certs podName:5ec99558-e99b-4661-be0c-b68d311f226a nodeName:}" failed. No retries permitted until 2026-04-16 16:24:20.577820284 +0000 UTC m=+17.356667165 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5ec99558-e99b-4661-be0c-b68d311f226a-metrics-certs") pod "network-metrics-daemon-29gd5" (UID: "5ec99558-e99b-4661-be0c-b68d311f226a") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 16:24:12.679683 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:12.679014 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7vjgs\" (UniqueName: \"kubernetes.io/projected/b3cf9670-2b3f-4b96-aff6-4c414454a507-kube-api-access-7vjgs\") pod \"network-check-target-225wk\" (UID: \"b3cf9670-2b3f-4b96-aff6-4c414454a507\") " pod="openshift-network-diagnostics/network-check-target-225wk" Apr 16 16:24:12.679683 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:12.679180 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 16:24:12.679683 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:12.679200 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 16:24:12.679683 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:12.679213 2572 projected.go:194] Error preparing data for projected volume kube-api-access-7vjgs for pod openshift-network-diagnostics/network-check-target-225wk: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 16:24:12.679683 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:12.679299 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3cf9670-2b3f-4b96-aff6-4c414454a507-kube-api-access-7vjgs podName:b3cf9670-2b3f-4b96-aff6-4c414454a507 nodeName:}" failed. No retries permitted until 2026-04-16 16:24:20.679279952 +0000 UTC m=+17.458126836 (durationBeforeRetry 8s). 
Apr 16 16:24:12.881497 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:12.881333 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/c1e6d4a3-5bed-4a4a-982e-2bc52481870a-original-pull-secret\") pod \"global-pull-secret-syncer-ngxj6\" (UID: \"c1e6d4a3-5bed-4a4a-982e-2bc52481870a\") " pod="kube-system/global-pull-secret-syncer-ngxj6"
Apr 16 16:24:12.881703 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:12.881501 2572 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 16 16:24:12.881703 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:12.881630 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1e6d4a3-5bed-4a4a-982e-2bc52481870a-original-pull-secret podName:c1e6d4a3-5bed-4a4a-982e-2bc52481870a nodeName:}" failed. No retries permitted until 2026-04-16 16:24:20.881586346 +0000 UTC m=+17.660433226 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/c1e6d4a3-5bed-4a4a-982e-2bc52481870a-original-pull-secret") pod "global-pull-secret-syncer-ngxj6" (UID: "c1e6d4a3-5bed-4a4a-982e-2bc52481870a") : object "kube-system"/"original-pull-secret" not registered
Apr 16 16:24:12.905427 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:12.905387 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ngxj6"
Apr 16 16:24:12.905643 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:12.905505 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ngxj6" podUID="c1e6d4a3-5bed-4a4a-982e-2bc52481870a"
Apr 16 16:24:12.906071 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:12.905885 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29gd5"
Apr 16 16:24:12.906071 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:12.905995 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29gd5" podUID="5ec99558-e99b-4661-be0c-b68d311f226a"
Apr 16 16:24:13.906293 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:13.906241 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-225wk"
Apr 16 16:24:13.906710 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:13.906377 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-225wk" podUID="b3cf9670-2b3f-4b96-aff6-4c414454a507"
Apr 16 16:24:14.905187 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:14.905145 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ngxj6"
Apr 16 16:24:14.905401 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:14.905162 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29gd5"
Apr 16 16:24:14.905401 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:14.905285 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ngxj6" podUID="c1e6d4a3-5bed-4a4a-982e-2bc52481870a"
Apr 16 16:24:14.905401 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:14.905353 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29gd5" podUID="5ec99558-e99b-4661-be0c-b68d311f226a"
Apr 16 16:24:15.905332 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:15.905290 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-225wk"
Apr 16 16:24:15.905796 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:15.905417 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-225wk" podUID="b3cf9670-2b3f-4b96-aff6-4c414454a507"
Apr 16 16:24:16.904826 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:16.904790 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29gd5"
Apr 16 16:24:16.904998 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:16.904790 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ngxj6"
Apr 16 16:24:16.904998 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:16.904908 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29gd5" podUID="5ec99558-e99b-4661-be0c-b68d311f226a"
Apr 16 16:24:16.905098 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:16.905021 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ngxj6" podUID="c1e6d4a3-5bed-4a4a-982e-2bc52481870a"
Apr 16 16:24:17.905371 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:17.905337 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-225wk"
Apr 16 16:24:17.905844 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:17.905473 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-225wk" podUID="b3cf9670-2b3f-4b96-aff6-4c414454a507"
Apr 16 16:24:18.905571 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:18.905545 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29gd5"
Apr 16 16:24:18.905895 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:18.905548 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ngxj6"
Apr 16 16:24:18.905895 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:18.905652 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29gd5" podUID="5ec99558-e99b-4661-be0c-b68d311f226a"
Apr 16 16:24:18.905895 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:18.905726 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ngxj6" podUID="c1e6d4a3-5bed-4a4a-982e-2bc52481870a"
Apr 16 16:24:19.905262 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:19.905207 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-225wk"
Apr 16 16:24:19.905444 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:19.905351 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-225wk" podUID="b3cf9670-2b3f-4b96-aff6-4c414454a507"
Apr 16 16:24:20.635412 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:20.635358 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ec99558-e99b-4661-be0c-b68d311f226a-metrics-certs\") pod \"network-metrics-daemon-29gd5\" (UID: \"5ec99558-e99b-4661-be0c-b68d311f226a\") " pod="openshift-multus/network-metrics-daemon-29gd5"
Apr 16 16:24:20.635884 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:20.635526 2572 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 16 16:24:20.635884 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:20.635610 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ec99558-e99b-4661-be0c-b68d311f226a-metrics-certs podName:5ec99558-e99b-4661-be0c-b68d311f226a nodeName:}" failed. No retries permitted until 2026-04-16 16:24:36.635588996 +0000 UTC m=+33.414435876 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5ec99558-e99b-4661-be0c-b68d311f226a-metrics-certs") pod "network-metrics-daemon-29gd5" (UID: "5ec99558-e99b-4661-be0c-b68d311f226a") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 16 16:24:20.736376 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:20.736331 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7vjgs\" (UniqueName: \"kubernetes.io/projected/b3cf9670-2b3f-4b96-aff6-4c414454a507-kube-api-access-7vjgs\") pod \"network-check-target-225wk\" (UID: \"b3cf9670-2b3f-4b96-aff6-4c414454a507\") " pod="openshift-network-diagnostics/network-check-target-225wk"
Apr 16 16:24:20.736574 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:20.736502 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 16 16:24:20.736574 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:20.736528 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 16 16:24:20.736574 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:20.736541 2572 projected.go:194] Error preparing data for projected volume kube-api-access-7vjgs for pod openshift-network-diagnostics/network-check-target-225wk: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 16 16:24:20.736712 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:20.736609 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3cf9670-2b3f-4b96-aff6-4c414454a507-kube-api-access-7vjgs podName:b3cf9670-2b3f-4b96-aff6-4c414454a507 nodeName:}" failed. No retries permitted until 2026-04-16 16:24:36.736588989 +0000 UTC m=+33.515435871 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "kube-api-access-7vjgs" (UniqueName: "kubernetes.io/projected/b3cf9670-2b3f-4b96-aff6-4c414454a507-kube-api-access-7vjgs") pod "network-check-target-225wk" (UID: "b3cf9670-2b3f-4b96-aff6-4c414454a507") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 16 16:24:20.905256 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:20.905156 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29gd5"
Apr 16 16:24:20.905474 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:20.905156 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ngxj6"
Apr 16 16:24:20.905474 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:20.905301 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29gd5" podUID="5ec99558-e99b-4661-be0c-b68d311f226a"
Apr 16 16:24:20.905474 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:20.905347 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ngxj6" podUID="c1e6d4a3-5bed-4a4a-982e-2bc52481870a"
Apr 16 16:24:20.938679 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:20.938637 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/c1e6d4a3-5bed-4a4a-982e-2bc52481870a-original-pull-secret\") pod \"global-pull-secret-syncer-ngxj6\" (UID: \"c1e6d4a3-5bed-4a4a-982e-2bc52481870a\") " pod="kube-system/global-pull-secret-syncer-ngxj6"
Apr 16 16:24:20.938866 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:20.938752 2572 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 16 16:24:20.938866 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:20.938831 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1e6d4a3-5bed-4a4a-982e-2bc52481870a-original-pull-secret podName:c1e6d4a3-5bed-4a4a-982e-2bc52481870a nodeName:}" failed. No retries permitted until 2026-04-16 16:24:36.938809903 +0000 UTC m=+33.717656789 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/c1e6d4a3-5bed-4a4a-982e-2bc52481870a-original-pull-secret") pod "global-pull-secret-syncer-ngxj6" (UID: "c1e6d4a3-5bed-4a4a-982e-2bc52481870a") : object "kube-system"/"original-pull-secret" not registered
Apr 16 16:24:21.905680 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:21.905636 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-225wk"
Apr 16 16:24:21.906138 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:21.905777 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-225wk" podUID="b3cf9670-2b3f-4b96-aff6-4c414454a507"
Apr 16 16:24:22.905057 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:22.905018 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ngxj6"
Apr 16 16:24:22.905350 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:22.905155 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ngxj6" podUID="c1e6d4a3-5bed-4a4a-982e-2bc52481870a"
Apr 16 16:24:22.905350 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:22.905206 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29gd5"
Apr 16 16:24:22.905350 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:22.905323 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29gd5" podUID="5ec99558-e99b-4661-be0c-b68d311f226a"
Apr 16 16:24:23.905719 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:23.905533 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-225wk"
Apr 16 16:24:23.906192 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:23.905779 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-225wk" podUID="b3cf9670-2b3f-4b96-aff6-4c414454a507"
Apr 16 16:24:24.053929 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:24.053869 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jzd86" event={"ID":"a24802b0-e2f8-4e44-8234-6c63975e7440","Type":"ContainerStarted","Data":"4edff550d3c02a5f5d6a3cb5475a3c1a1abc4b29dfa4b8e4e0053bf40557b731"}
Apr 16 16:24:24.055686 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:24.055576 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-128-173.ec2.internal" event={"ID":"a454133692b7d59775381b8452362b38","Type":"ContainerStarted","Data":"c248c9b4ceb7d5dcea7fa225869ea94d1a18ff04cc491c54c9856d21bdf9bb20"}
Apr 16 16:24:24.056947 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:24.056922 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-rvs52" event={"ID":"fb61c011-2a8e-4af9-b55e-b16d5f329215","Type":"ContainerStarted","Data":"8fe94c6a8dd2120372a6d450b2b6e60af4ec46c8e41bdd30daad454a071b0861"}
Apr 16 16:24:24.074862 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:24.074809 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-jzd86" podStartSLOduration=2.967054254 podStartE2EDuration="21.074789501s" podCreationTimestamp="2026-04-16 16:24:03 +0000 UTC" firstStartedPulling="2026-04-16 16:24:05.428916128 +0000 UTC m=+2.207762996" lastFinishedPulling="2026-04-16 16:24:23.53665136 +0000 UTC m=+20.315498243" observedRunningTime="2026-04-16 16:24:24.074370697 +0000 UTC m=+20.853217590" watchObservedRunningTime="2026-04-16 16:24:24.074789501 +0000 UTC m=+20.853636394"
Apr 16 16:24:24.099794 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:24.099734 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-rvs52" podStartSLOduration=1.863686742 podStartE2EDuration="20.099714806s" podCreationTimestamp="2026-04-16 16:24:04 +0000 UTC" firstStartedPulling="2026-04-16 16:24:05.225086002 +0000 UTC m=+2.003932871" lastFinishedPulling="2026-04-16 16:24:23.461114062 +0000 UTC m=+20.239960935" observedRunningTime="2026-04-16 16:24:24.098622718 +0000 UTC m=+20.877469611" watchObservedRunningTime="2026-04-16 16:24:24.099714806 +0000 UTC m=+20.878561698"
Apr 16 16:24:24.905631 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:24.905400 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29gd5"
Apr 16 16:24:24.905779 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:24.905399 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ngxj6"
Apr 16 16:24:24.905779 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:24.905660 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29gd5" podUID="5ec99558-e99b-4661-be0c-b68d311f226a"
Apr 16 16:24:24.905779 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:24.905709 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ngxj6" podUID="c1e6d4a3-5bed-4a4a-982e-2bc52481870a"
Apr 16 16:24:25.059991 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:25.059939 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-7962v" event={"ID":"9adc281d-523f-4fc2-8784-22b5802e5ef5","Type":"ContainerStarted","Data":"8d5110686fef32d1cd4b50309a3639b55ae589567f2996605fe323f4c6a8a93d"}
Apr 16 16:24:25.061637 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:25.061598 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-f5rkn" event={"ID":"0c4bc245-ba8c-4779-b28e-2628fba0297f","Type":"ContainerStarted","Data":"c972a4c0ab16da84c08ae22f29fbb70577dd13ab7f2820bfc013f77ce01c5640"}
Apr 16 16:24:25.063140 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:25.063112 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xl6x7" event={"ID":"7b3cb7b5-891a-4226-b146-221600c3471c","Type":"ContainerStarted","Data":"15853552843f15fb4648898a28b984d0f16f4c88896489df3e65b17667f8f5f2"}
Apr 16 16:24:25.066171 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:25.066144 2572 generic.go:358] "Generic (PLEG): container finished" podID="4d0d3535-86d0-4270-8087-38d613e5a0a5" containerID="9bf360f28b38e28383098cf05986cb984f829acf17414cd2d4b0f43b39f1ac26" exitCode=0
Apr 16 16:24:25.066310 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:25.066230 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbkft" event={"ID":"4d0d3535-86d0-4270-8087-38d613e5a0a5","Type":"ContainerDied","Data":"9bf360f28b38e28383098cf05986cb984f829acf17414cd2d4b0f43b39f1ac26"}
Apr 16 16:24:25.068758 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:25.068724 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-42w7g" event={"ID":"387e0e6a-5ac4-4dc7-a1df-d6fae9770c8d","Type":"ContainerStarted","Data":"ea93064a7c4bf0b78f8306bef5a42b9d3f9ee32b8f0e3c9095156db87c73afc3"}
Apr 16 16:24:25.070095 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:25.070068 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-94pcx" event={"ID":"e894fdf2-0ea2-43a3-a40d-03eca2359199","Type":"ContainerStarted","Data":"1245a92b01623542bfe5abb985bc234ecc469efa0c7172141ed8e998836587b7"}
Apr 16 16:24:25.071551 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:25.071528 2572 generic.go:358] "Generic (PLEG): container finished" podID="b93a143e50d6e55a1c399c4b395a32e8" containerID="130d6933fedb908820eba084c20e97c0d1503bc17c9476e69002ba6de35107b4" exitCode=0
Apr 16 16:24:25.071632 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:25.071601 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-128-173.ec2.internal" event={"ID":"b93a143e50d6e55a1c399c4b395a32e8","Type":"ContainerDied","Data":"130d6933fedb908820eba084c20e97c0d1503bc17c9476e69002ba6de35107b4"}
Apr 16 16:24:25.074710 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:25.074688 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" event={"ID":"6f3381a4-23e0-42e8-b782-d6d4e6915910","Type":"ContainerStarted","Data":"d846be436df191d211033d28cfd7224c01caa33613cf393dbb55007a2df0f99b"}
Apr 16 16:24:25.074795 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:25.074719 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" event={"ID":"6f3381a4-23e0-42e8-b782-d6d4e6915910","Type":"ContainerStarted","Data":"b2f5286cb1730cbc917293b1122e832ae100eb31628ae40447c815b5b57a8447"}
Apr 16 16:24:25.074795 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:25.074734 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" event={"ID":"6f3381a4-23e0-42e8-b782-d6d4e6915910","Type":"ContainerStarted","Data":"4f3755a06eea837ce012f3ce21c95929d183d820af66ef3619a15024063d788a"}
Apr 16 16:24:25.074795 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:25.074747 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" event={"ID":"6f3381a4-23e0-42e8-b782-d6d4e6915910","Type":"ContainerStarted","Data":"02fb401b2750eedfc484417a6e43d39c40064218ac91c5761c5d2863428a7df8"}
Apr 16 16:24:25.074795 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:25.074760 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" event={"ID":"6f3381a4-23e0-42e8-b782-d6d4e6915910","Type":"ContainerStarted","Data":"a95999fdfb375768139549559be4d61d3efc0705ecbbd95e0291400804f84dd0"}
Apr 16 16:24:25.074795 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:25.074774 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" event={"ID":"6f3381a4-23e0-42e8-b782-d6d4e6915910","Type":"ContainerStarted","Data":"6378f2620a79b88a322815ac1d161cc4980c98dda9df5f75ea82e6c09b72b3c8"}
Apr 16 16:24:25.082535 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:25.082498 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-proxy-ip-10-0-128-173.ec2.internal" podStartSLOduration=21.082484459 podStartE2EDuration="21.082484459s" podCreationTimestamp="2026-04-16 16:24:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 16:24:24.126851553 +0000 UTC m=+20.905698441" watchObservedRunningTime="2026-04-16 16:24:25.082484459 +0000 UTC m=+21.861331352"
Apr 16 16:24:25.083285 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:25.083236 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/iptables-alerter-7962v" podStartSLOduration=2.795666673 podStartE2EDuration="21.083225788s" podCreationTimestamp="2026-04-16 16:24:04 +0000 UTC" firstStartedPulling="2026-04-16 16:24:05.177537587 +0000 UTC m=+1.956384456" lastFinishedPulling="2026-04-16 16:24:23.465096696 +0000 UTC m=+20.243943571" observedRunningTime="2026-04-16 16:24:25.082344065 +0000 UTC m=+21.861190958" watchObservedRunningTime="2026-04-16 16:24:25.083225788 +0000 UTC m=+21.862072679"
Apr 16 16:24:25.106111 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:25.106071 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-42w7g" podStartSLOduration=2.833575594 podStartE2EDuration="21.106059019s"
podCreationTimestamp="2026-04-16 16:24:04 +0000 UTC" firstStartedPulling="2026-04-16 16:24:05.186661685 +0000 UTC m=+1.965508554" lastFinishedPulling="2026-04-16 16:24:23.459145105 +0000 UTC m=+20.237991979" observedRunningTime="2026-04-16 16:24:25.105858766 +0000 UTC m=+21.884705660" watchObservedRunningTime="2026-04-16 16:24:25.106059019 +0000 UTC m=+21.884905910"
Apr 16 16:24:25.124465 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:25.124421 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-xl6x7" podStartSLOduration=4.406763397 podStartE2EDuration="22.124407017s" podCreationTimestamp="2026-04-16 16:24:03 +0000 UTC" firstStartedPulling="2026-04-16 16:24:05.692233339 +0000 UTC m=+2.471080215" lastFinishedPulling="2026-04-16 16:24:23.409876948 +0000 UTC m=+20.188723835" observedRunningTime="2026-04-16 16:24:25.124098058 +0000 UTC m=+21.902944949" watchObservedRunningTime="2026-04-16 16:24:25.124407017 +0000 UTC m=+21.903253907"
Apr 16 16:24:25.174212 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:25.173927 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/konnectivity-agent-94pcx" podStartSLOduration=2.892061866 podStartE2EDuration="21.173908635s" podCreationTimestamp="2026-04-16 16:24:04 +0000 UTC" firstStartedPulling="2026-04-16 16:24:05.177588348 +0000 UTC m=+1.956435220" lastFinishedPulling="2026-04-16 16:24:23.459435117 +0000 UTC m=+20.238281989" observedRunningTime="2026-04-16 16:24:25.173142566 +0000 UTC m=+21.951989457" watchObservedRunningTime="2026-04-16 16:24:25.173908635 +0000 UTC m=+21.952755530"
Apr 16 16:24:25.556027 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:25.555997 2572 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock"
Apr 16 16:24:25.875992 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:25.875883 2572 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock","Timestamp":"2026-04-16T16:24:25.556018869Z","UUID":"5276b15c-bd58-48d2-b684-de44c3ad4a75","Handler":null,"Name":"","Endpoint":""}
Apr 16 16:24:25.879690 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:25.879664 2572 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: ebs.csi.aws.com endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock versions: 1.0.0
Apr 16 16:24:25.879690 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:25.879699 2572 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: ebs.csi.aws.com at endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock
Apr 16 16:24:25.905055 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:25.905026 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-225wk"
Apr 16 16:24:25.905199 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:25.905134 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-225wk" podUID="b3cf9670-2b3f-4b96-aff6-4c414454a507"
Apr 16 16:24:26.079165 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:26.079125 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-f5rkn" event={"ID":"0c4bc245-ba8c-4779-b28e-2628fba0297f","Type":"ContainerStarted","Data":"176e0cf91b3b11e940e3bfd10496ee2759460e22ede1dcd650e77150991db70d"}
Apr 16 16:24:26.905586 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:26.905497 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29gd5"
Apr 16 16:24:26.905760 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:26.905499 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ngxj6"
Apr 16 16:24:26.905760 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:26.905642 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29gd5" podUID="5ec99558-e99b-4661-be0c-b68d311f226a"
Apr 16 16:24:26.905760 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:26.905697 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ngxj6" podUID="c1e6d4a3-5bed-4a4a-982e-2bc52481870a"
Apr 16 16:24:27.022518 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:27.022480 2572 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/konnectivity-agent-94pcx"
Apr 16 16:24:27.023279 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:27.023240 2572 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/konnectivity-agent-94pcx"
Apr 16 16:24:27.082618 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:27.082580 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-128-173.ec2.internal" event={"ID":"b93a143e50d6e55a1c399c4b395a32e8","Type":"ContainerStarted","Data":"e5e1abc7373c938adc6e0f467620cf410d3233ab74cb75d129146afd5b756ede"}
Apr 16 16:24:27.086002 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:27.085973 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" event={"ID":"6f3381a4-23e0-42e8-b782-d6d4e6915910","Type":"ContainerStarted","Data":"3bde589745857f06e150ed2e1d58b3c2b8f071de0d049b1d9d42273ee8aac0e2"}
Apr 16 16:24:27.088218 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:27.088194 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-f5rkn" event={"ID":"0c4bc245-ba8c-4779-b28e-2628fba0297f","Type":"ContainerStarted","Data":"d2a1b270fbc73a04d1c2f5ba9df3b46a0adb2023481ee82a30fa8fdb809e5fed"}
Apr 16 16:24:27.102153 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:27.102097 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-128-173.ec2.internal" podStartSLOduration=23.102079982 podStartE2EDuration="23.102079982s" podCreationTimestamp="2026-04-16 16:24:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 16:24:27.101562487 +0000 UTC m=+23.880409381" watchObservedRunningTime="2026-04-16 16:24:27.102079982 +0000 UTC m=+23.880926873"
Apr 16 16:24:27.120852 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:27.120797 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-f5rkn" podStartSLOduration=1.838674355 podStartE2EDuration="23.12078024s" podCreationTimestamp="2026-04-16 16:24:04 +0000 UTC" firstStartedPulling="2026-04-16 16:24:05.177449101 +0000 UTC m=+1.956295970" lastFinishedPulling="2026-04-16 16:24:26.45955497 +0000 UTC m=+23.238401855" observedRunningTime="2026-04-16 16:24:27.120516975 +0000 UTC m=+23.899363867" watchObservedRunningTime="2026-04-16 16:24:27.12078024 +0000 UTC m=+23.899627133"
Apr 16 16:24:27.904727 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:27.904646 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-225wk"
Apr 16 16:24:27.904880 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:27.904802 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-225wk" podUID="b3cf9670-2b3f-4b96-aff6-4c414454a507"
Apr 16 16:24:28.089814 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:28.089787 2572 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 16 16:24:28.904802 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:28.904729 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ngxj6"
Apr 16 16:24:28.904952 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:28.904732 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29gd5"
Apr 16 16:24:28.904952 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:28.904841 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ngxj6" podUID="c1e6d4a3-5bed-4a4a-982e-2bc52481870a"
Apr 16 16:24:28.905047 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:28.904948 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29gd5" podUID="5ec99558-e99b-4661-be0c-b68d311f226a"
Apr 16 16:24:29.904808 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:29.904582 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-225wk"
Apr 16 16:24:29.905340 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:29.904836 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-225wk" podUID="b3cf9670-2b3f-4b96-aff6-4c414454a507"
Apr 16 16:24:30.094558 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:30.094527 2572 generic.go:358] "Generic (PLEG): container finished" podID="4d0d3535-86d0-4270-8087-38d613e5a0a5" containerID="e2a103e5e80c0905ae0fd1585b750a76fc3448b6cca8b3f824a03ec78af8e080" exitCode=0
Apr 16 16:24:30.094716 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:30.094614 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbkft" event={"ID":"4d0d3535-86d0-4270-8087-38d613e5a0a5","Type":"ContainerDied","Data":"e2a103e5e80c0905ae0fd1585b750a76fc3448b6cca8b3f824a03ec78af8e080"}
Apr 16 16:24:30.097755 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:30.097721 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" event={"ID":"6f3381a4-23e0-42e8-b782-d6d4e6915910","Type":"ContainerStarted","Data":"59749fc31f15a7a61fa279f3018d5bd9ab80079434a0ba1157cf001ca1b84cb8"}
Apr 16 16:24:30.097999 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:30.097981 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-csjjh"
Apr 16 16:24:30.098105 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:30.098064 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-csjjh"
Apr 16 16:24:30.112181 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:30.112161 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-csjjh"
Apr 16 16:24:30.112323 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:30.112220 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-csjjh"
Apr 16 16:24:30.178130 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:30.178037 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" podStartSLOduration=7.379678868 podStartE2EDuration="26.178022488s" podCreationTimestamp="2026-04-16 16:24:04 +0000 UTC" firstStartedPulling="2026-04-16 16:24:05.216088188 +0000 UTC m=+1.994935057" lastFinishedPulling="2026-04-16 16:24:24.014431794 +0000 UTC m=+20.793278677" observedRunningTime="2026-04-16 16:24:30.177871966 +0000 UTC m=+26.956718869" watchObservedRunningTime="2026-04-16 16:24:30.178022488 +0000 UTC m=+26.956869379"
Apr 16 16:24:30.905585 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:30.905558 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ngxj6"
Apr 16 16:24:30.905585 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:30.905579 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29gd5"
Apr 16 16:24:30.906367 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:30.905695 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ngxj6" podUID="c1e6d4a3-5bed-4a4a-982e-2bc52481870a"
Apr 16 16:24:30.906367 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:30.905816 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29gd5" podUID="5ec99558-e99b-4661-be0c-b68d311f226a"
Apr 16 16:24:31.101042 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:31.100733 2572 generic.go:358] "Generic (PLEG): container finished" podID="4d0d3535-86d0-4270-8087-38d613e5a0a5" containerID="3c753054aee532726ddbe92ca03c19c66be02fe367e9de749b0096f62f60322c" exitCode=0
Apr 16 16:24:31.101174 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:31.100815 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbkft" event={"ID":"4d0d3535-86d0-4270-8087-38d613e5a0a5","Type":"ContainerDied","Data":"3c753054aee532726ddbe92ca03c19c66be02fe367e9de749b0096f62f60322c"}
Apr 16 16:24:31.101221 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:31.101184 2572 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 16 16:24:31.185182 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:31.185152 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-225wk"]
Apr 16 16:24:31.185332 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:31.185285 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-225wk"
Apr 16 16:24:31.185376 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:31.185360 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-225wk" podUID="b3cf9670-2b3f-4b96-aff6-4c414454a507"
Apr 16 16:24:31.187444 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:31.187417 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-ngxj6"]
Apr 16 16:24:31.187569 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:31.187509 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ngxj6"
Apr 16 16:24:31.187645 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:31.187613 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ngxj6" podUID="c1e6d4a3-5bed-4a4a-982e-2bc52481870a"
Apr 16 16:24:31.188043 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:31.188020 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-29gd5"]
Apr 16 16:24:31.188160 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:31.188104 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29gd5"
Apr 16 16:24:31.188214 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:31.188188 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29gd5" podUID="5ec99558-e99b-4661-be0c-b68d311f226a"
Apr 16 16:24:32.104884 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:32.104850 2572 generic.go:358] "Generic (PLEG): container finished" podID="4d0d3535-86d0-4270-8087-38d613e5a0a5" containerID="2e9927a68f56c213f91f052c46e0419e562a4f34d560dd619d6105f2472a829f" exitCode=0
Apr 16 16:24:32.105272 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:32.104931 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbkft" event={"ID":"4d0d3535-86d0-4270-8087-38d613e5a0a5","Type":"ContainerDied","Data":"2e9927a68f56c213f91f052c46e0419e562a4f34d560dd619d6105f2472a829f"}
Apr 16 16:24:32.105272 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:32.105146 2572 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 16 16:24:32.355258 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:32.355133 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-csjjh"
Apr 16 16:24:32.905586 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:32.905550 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-225wk"
Apr 16 16:24:32.905586 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:32.905578 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29gd5"
Apr 16 16:24:32.905836 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:32.905501 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ngxj6"
Apr 16 16:24:32.905836 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:32.905711 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-225wk" podUID="b3cf9670-2b3f-4b96-aff6-4c414454a507"
Apr 16 16:24:32.905935 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:32.905868 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29gd5" podUID="5ec99558-e99b-4661-be0c-b68d311f226a"
Apr 16 16:24:32.906458 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:32.906426 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ngxj6" podUID="c1e6d4a3-5bed-4a4a-982e-2bc52481870a"
Apr 16 16:24:33.328041 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:33.328012 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-xl6x7_7b3cb7b5-891a-4226-b146-221600c3471c/dns-node-resolver/0.log"
Apr 16 16:24:33.909143 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:33.909117 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-42w7g_387e0e6a-5ac4-4dc7-a1df-d6fae9770c8d/node-ca/0.log"
Apr 16 16:24:34.905640 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:34.905227 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ngxj6"
Apr 16 16:24:34.905640 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:34.905295 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29gd5"
Apr 16 16:24:34.905640 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:34.905262 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-225wk"
Apr 16 16:24:34.905640 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:34.905408 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ngxj6" podUID="c1e6d4a3-5bed-4a4a-982e-2bc52481870a"
Apr 16 16:24:34.905640 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:34.905511 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-225wk" podUID="b3cf9670-2b3f-4b96-aff6-4c414454a507"
Apr 16 16:24:34.905640 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:34.905605 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-29gd5" podUID="5ec99558-e99b-4661-be0c-b68d311f226a" Apr 16 16:24:35.163865 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:35.163769 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kube-system/konnectivity-agent-94pcx" Apr 16 16:24:35.164020 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:35.163940 2572 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 16:24:35.164470 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:35.164436 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/konnectivity-agent-94pcx" Apr 16 16:24:36.660965 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:36.660927 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ec99558-e99b-4661-be0c-b68d311f226a-metrics-certs\") pod \"network-metrics-daemon-29gd5\" (UID: \"5ec99558-e99b-4661-be0c-b68d311f226a\") " pod="openshift-multus/network-metrics-daemon-29gd5" Apr 16 16:24:36.661524 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:36.661108 2572 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 16:24:36.661524 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:36.661186 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ec99558-e99b-4661-be0c-b68d311f226a-metrics-certs podName:5ec99558-e99b-4661-be0c-b68d311f226a nodeName:}" failed. No retries permitted until 2026-04-16 16:25:08.661166445 +0000 UTC m=+65.440013319 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5ec99558-e99b-4661-be0c-b68d311f226a-metrics-certs") pod "network-metrics-daemon-29gd5" (UID: "5ec99558-e99b-4661-be0c-b68d311f226a") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 16:24:36.761827 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:36.761781 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7vjgs\" (UniqueName: \"kubernetes.io/projected/b3cf9670-2b3f-4b96-aff6-4c414454a507-kube-api-access-7vjgs\") pod \"network-check-target-225wk\" (UID: \"b3cf9670-2b3f-4b96-aff6-4c414454a507\") " pod="openshift-network-diagnostics/network-check-target-225wk" Apr 16 16:24:36.762015 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:36.761969 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 16:24:36.762015 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:36.761994 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 16:24:36.762015 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:36.762006 2572 projected.go:194] Error preparing data for projected volume kube-api-access-7vjgs for pod openshift-network-diagnostics/network-check-target-225wk: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 16:24:36.762175 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:36.762071 2572 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/b3cf9670-2b3f-4b96-aff6-4c414454a507-kube-api-access-7vjgs podName:b3cf9670-2b3f-4b96-aff6-4c414454a507 nodeName:}" failed. No retries permitted until 2026-04-16 16:25:08.762050464 +0000 UTC m=+65.540897356 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-7vjgs" (UniqueName: "kubernetes.io/projected/b3cf9670-2b3f-4b96-aff6-4c414454a507-kube-api-access-7vjgs") pod "network-check-target-225wk" (UID: "b3cf9670-2b3f-4b96-aff6-4c414454a507") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 16:24:36.904662 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:36.904628 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ngxj6" Apr 16 16:24:36.904822 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:36.904632 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29gd5" Apr 16 16:24:36.904822 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:36.904765 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ngxj6" podUID="c1e6d4a3-5bed-4a4a-982e-2bc52481870a" Apr 16 16:24:36.904943 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:36.904869 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29gd5" podUID="5ec99558-e99b-4661-be0c-b68d311f226a" Apr 16 16:24:36.904943 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:36.904649 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-225wk" Apr 16 16:24:36.905025 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:36.904969 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-225wk" podUID="b3cf9670-2b3f-4b96-aff6-4c414454a507" Apr 16 16:24:36.963550 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:36.963506 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/c1e6d4a3-5bed-4a4a-982e-2bc52481870a-original-pull-secret\") pod \"global-pull-secret-syncer-ngxj6\" (UID: \"c1e6d4a3-5bed-4a4a-982e-2bc52481870a\") " pod="kube-system/global-pull-secret-syncer-ngxj6" Apr 16 16:24:36.963736 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:36.963632 2572 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 16 16:24:36.963736 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:36.963711 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1e6d4a3-5bed-4a4a-982e-2bc52481870a-original-pull-secret podName:c1e6d4a3-5bed-4a4a-982e-2bc52481870a nodeName:}" failed. No retries permitted until 2026-04-16 16:25:08.963689304 +0000 UTC m=+65.742536184 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/c1e6d4a3-5bed-4a4a-982e-2bc52481870a-original-pull-secret") pod "global-pull-secret-syncer-ngxj6" (UID: "c1e6d4a3-5bed-4a4a-982e-2bc52481870a") : object "kube-system"/"original-pull-secret" not registered Apr 16 16:24:38.904788 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:38.904759 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-225wk" Apr 16 16:24:38.905188 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:38.904838 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29gd5" Apr 16 16:24:38.905188 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:38.904860 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ngxj6" Apr 16 16:24:38.905188 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:38.904959 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29gd5" podUID="5ec99558-e99b-4661-be0c-b68d311f226a" Apr 16 16:24:38.905188 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:38.904988 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ngxj6" podUID="c1e6d4a3-5bed-4a4a-982e-2bc52481870a" Apr 16 16:24:38.905188 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:38.905098 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-225wk" podUID="b3cf9670-2b3f-4b96-aff6-4c414454a507" Apr 16 16:24:39.120875 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:39.120845 2572 generic.go:358] "Generic (PLEG): container finished" podID="4d0d3535-86d0-4270-8087-38d613e5a0a5" containerID="b5a30dffef0a4a5af6649500074e46f3ef67c35f9eb70215a07557f62c90678a" exitCode=0 Apr 16 16:24:39.121031 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:39.120912 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbkft" event={"ID":"4d0d3535-86d0-4270-8087-38d613e5a0a5","Type":"ContainerDied","Data":"b5a30dffef0a4a5af6649500074e46f3ef67c35f9eb70215a07557f62c90678a"} Apr 16 16:24:40.125750 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:40.125557 2572 generic.go:358] "Generic (PLEG): container finished" podID="4d0d3535-86d0-4270-8087-38d613e5a0a5" containerID="6dff4f7bdfbf5a0247135ce265c3c3449724251459e55189180d61d571cf963c" exitCode=0 Apr 16 16:24:40.126128 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:40.125635 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbkft" event={"ID":"4d0d3535-86d0-4270-8087-38d613e5a0a5","Type":"ContainerDied","Data":"6dff4f7bdfbf5a0247135ce265c3c3449724251459e55189180d61d571cf963c"} Apr 16 16:24:40.905016 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:40.904983 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-225wk" Apr 16 16:24:40.905197 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:40.904990 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29gd5" Apr 16 16:24:40.905197 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:40.905090 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-225wk" podUID="b3cf9670-2b3f-4b96-aff6-4c414454a507" Apr 16 16:24:40.905197 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:40.905180 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29gd5" podUID="5ec99558-e99b-4661-be0c-b68d311f226a" Apr 16 16:24:40.905380 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:40.904990 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ngxj6" Apr 16 16:24:40.905380 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:40.905276 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-ngxj6" podUID="c1e6d4a3-5bed-4a4a-982e-2bc52481870a" Apr 16 16:24:41.130795 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:41.130708 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbkft" event={"ID":"4d0d3535-86d0-4270-8087-38d613e5a0a5","Type":"ContainerStarted","Data":"9bfa9d62a29ae50139ca6c250038f769af4ad3432c38f7116920a201176512d5"} Apr 16 16:24:42.904617 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:42.904586 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-225wk" Apr 16 16:24:42.905094 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:42.904586 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ngxj6" Apr 16 16:24:42.905094 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:42.904698 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-225wk" podUID="b3cf9670-2b3f-4b96-aff6-4c414454a507" Apr 16 16:24:42.905094 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:42.904753 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ngxj6" podUID="c1e6d4a3-5bed-4a4a-982e-2bc52481870a" Apr 16 16:24:42.905094 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:42.904596 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29gd5" Apr 16 16:24:42.905339 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:42.905166 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29gd5" podUID="5ec99558-e99b-4661-be0c-b68d311f226a" Apr 16 16:24:44.905432 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:44.905401 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ngxj6" Apr 16 16:24:44.905896 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:44.905509 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-225wk" Apr 16 16:24:44.905896 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:44.905521 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-ngxj6" podUID="c1e6d4a3-5bed-4a4a-982e-2bc52481870a" Apr 16 16:24:44.905896 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:44.905582 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-225wk" podUID="b3cf9670-2b3f-4b96-aff6-4c414454a507" Apr 16 16:24:44.905896 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:44.905623 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29gd5" Apr 16 16:24:44.905896 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:44.905680 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29gd5" podUID="5ec99558-e99b-4661-be0c-b68d311f226a" Apr 16 16:24:46.905577 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:46.905534 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29gd5" Apr 16 16:24:46.905577 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:46.905568 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ngxj6" Apr 16 16:24:46.905997 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:46.905645 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29gd5" podUID="5ec99558-e99b-4661-be0c-b68d311f226a" Apr 16 16:24:46.905997 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:46.905682 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-225wk" Apr 16 16:24:46.905997 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:46.905749 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-225wk" podUID="b3cf9670-2b3f-4b96-aff6-4c414454a507" Apr 16 16:24:46.905997 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:46.905810 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ngxj6" podUID="c1e6d4a3-5bed-4a4a-982e-2bc52481870a" Apr 16 16:24:48.905426 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:48.905390 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-ngxj6" Apr 16 16:24:48.905804 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:48.905390 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-225wk" Apr 16 16:24:48.905804 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:48.905500 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ngxj6" podUID="c1e6d4a3-5bed-4a4a-982e-2bc52481870a" Apr 16 16:24:48.905804 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:48.905410 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29gd5" Apr 16 16:24:48.905804 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:48.905558 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-225wk" podUID="b3cf9670-2b3f-4b96-aff6-4c414454a507" Apr 16 16:24:48.905804 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:48.905660 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29gd5" podUID="5ec99558-e99b-4661-be0c-b68d311f226a" Apr 16 16:24:50.905623 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:50.905583 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-225wk" Apr 16 16:24:50.906097 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:50.905726 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29gd5" Apr 16 16:24:50.906097 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:50.905740 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ngxj6" Apr 16 16:24:50.906097 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:50.905724 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-225wk" podUID="b3cf9670-2b3f-4b96-aff6-4c414454a507" Apr 16 16:24:50.906097 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:50.905832 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-29gd5" podUID="5ec99558-e99b-4661-be0c-b68d311f226a" Apr 16 16:24:50.906097 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:24:50.905891 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ngxj6" podUID="c1e6d4a3-5bed-4a4a-982e-2bc52481870a" Apr 16 16:24:51.502668 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.502640 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-128-173.ec2.internal" event="NodeReady" Apr 16 16:24:51.502837 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.502826 2572 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Apr 16 16:24:51.550654 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.550597 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-gbkft" podStartSLOduration=14.726236279 podStartE2EDuration="47.550583473s" podCreationTimestamp="2026-04-16 16:24:04 +0000 UTC" firstStartedPulling="2026-04-16 16:24:05.229286717 +0000 UTC m=+2.008133586" lastFinishedPulling="2026-04-16 16:24:38.053633907 +0000 UTC m=+34.832480780" observedRunningTime="2026-04-16 16:24:41.173751481 +0000 UTC m=+37.952598371" watchObservedRunningTime="2026-04-16 16:24:51.550583473 +0000 UTC m=+48.329430363" Apr 16 16:24:51.551134 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.551113 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-856dbdf944-r9ssw"] Apr 16 16:24:51.566739 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.566714 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-856dbdf944-r9ssw" Apr 16 16:24:51.570203 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.570181 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-nvqpx\"" Apr 16 16:24:51.570351 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.570214 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-private-configuration\"" Apr 16 16:24:51.570397 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.570357 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Apr 16 16:24:51.570450 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.570436 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Apr 16 16:24:51.574624 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.574035 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dd857a2c-74f8-404b-8c76-8b15616fb405-registry-certificates\") pod \"image-registry-856dbdf944-r9ssw\" (UID: \"dd857a2c-74f8-404b-8c76-8b15616fb405\") " pod="openshift-image-registry/image-registry-856dbdf944-r9ssw" Apr 16 16:24:51.574624 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.574096 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dd857a2c-74f8-404b-8c76-8b15616fb405-registry-tls\") pod \"image-registry-856dbdf944-r9ssw\" (UID: \"dd857a2c-74f8-404b-8c76-8b15616fb405\") " pod="openshift-image-registry/image-registry-856dbdf944-r9ssw" Apr 16 16:24:51.574624 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.574121 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd857a2c-74f8-404b-8c76-8b15616fb405-trusted-ca\") pod \"image-registry-856dbdf944-r9ssw\" (UID: \"dd857a2c-74f8-404b-8c76-8b15616fb405\") " pod="openshift-image-registry/image-registry-856dbdf944-r9ssw" Apr 16 16:24:51.574624 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.574158 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dd857a2c-74f8-404b-8c76-8b15616fb405-bound-sa-token\") pod \"image-registry-856dbdf944-r9ssw\" (UID: \"dd857a2c-74f8-404b-8c76-8b15616fb405\") " pod="openshift-image-registry/image-registry-856dbdf944-r9ssw" Apr 16 16:24:51.574624 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.574184 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wczzr\" (UniqueName: \"kubernetes.io/projected/dd857a2c-74f8-404b-8c76-8b15616fb405-kube-api-access-wczzr\") pod \"image-registry-856dbdf944-r9ssw\" (UID: \"dd857a2c-74f8-404b-8c76-8b15616fb405\") " pod="openshift-image-registry/image-registry-856dbdf944-r9ssw" Apr 16 16:24:51.574624 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.574227 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-private-configuration\" (UniqueName: 
\"kubernetes.io/secret/dd857a2c-74f8-404b-8c76-8b15616fb405-image-registry-private-configuration\") pod \"image-registry-856dbdf944-r9ssw\" (UID: \"dd857a2c-74f8-404b-8c76-8b15616fb405\") " pod="openshift-image-registry/image-registry-856dbdf944-r9ssw" Apr 16 16:24:51.574624 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.574294 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dd857a2c-74f8-404b-8c76-8b15616fb405-ca-trust-extracted\") pod \"image-registry-856dbdf944-r9ssw\" (UID: \"dd857a2c-74f8-404b-8c76-8b15616fb405\") " pod="openshift-image-registry/image-registry-856dbdf944-r9ssw" Apr 16 16:24:51.574624 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.574319 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dd857a2c-74f8-404b-8c76-8b15616fb405-installation-pull-secrets\") pod \"image-registry-856dbdf944-r9ssw\" (UID: \"dd857a2c-74f8-404b-8c76-8b15616fb405\") " pod="openshift-image-registry/image-registry-856dbdf944-r9ssw" Apr 16 16:24:51.575143 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.575093 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-runtime-extractor-zn7jk"] Apr 16 16:24:51.576769 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.576749 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Apr 16 16:24:51.593557 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.593532 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-856dbdf944-r9ssw"] Apr 16 16:24:51.593557 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.593561 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-z46sx"] Apr 16 16:24:51.593767 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.593711 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-runtime-extractor-zn7jk" Apr 16 16:24:51.597460 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.597439 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-tls\"" Apr 16 16:24:51.597599 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.597492 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-rbac-proxy\"" Apr 16 16:24:51.597754 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.597738 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-root-ca.crt\"" Apr 16 16:24:51.598124 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.598109 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"openshift-service-ca.crt\"" Apr 16 16:24:51.598540 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.598524 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-sa-dockercfg-dpxrk\"" Apr 16 16:24:51.609832 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.609772 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-zn7jk"] Apr 16 16:24:51.609961 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.609837 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-z46sx" Apr 16 16:24:51.611506 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.611488 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-z46sx"] Apr 16 16:24:51.613936 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.613919 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-psw85\"" Apr 16 16:24:51.617718 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.617702 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Apr 16 16:24:51.618312 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.618290 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Apr 16 16:24:51.675142 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.675107 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tnlt\" (UniqueName: \"kubernetes.io/projected/0def5104-1269-46d2-8b59-88b426ff3a84-kube-api-access-4tnlt\") pod \"dns-default-z46sx\" (UID: \"0def5104-1269-46d2-8b59-88b426ff3a84\") " pod="openshift-dns/dns-default-z46sx" Apr 16 16:24:51.675142 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.675149 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dd857a2c-74f8-404b-8c76-8b15616fb405-registry-tls\") pod \"image-registry-856dbdf944-r9ssw\" (UID: \"dd857a2c-74f8-404b-8c76-8b15616fb405\") " pod="openshift-image-registry/image-registry-856dbdf944-r9ssw" Apr 16 16:24:51.675430 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.675168 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd857a2c-74f8-404b-8c76-8b15616fb405-trusted-ca\") pod \"image-registry-856dbdf944-r9ssw\" (UID: \"dd857a2c-74f8-404b-8c76-8b15616fb405\") " 
pod="openshift-image-registry/image-registry-856dbdf944-r9ssw" Apr 16 16:24:51.675430 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.675198 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dd857a2c-74f8-404b-8c76-8b15616fb405-bound-sa-token\") pod \"image-registry-856dbdf944-r9ssw\" (UID: \"dd857a2c-74f8-404b-8c76-8b15616fb405\") " pod="openshift-image-registry/image-registry-856dbdf944-r9ssw" Apr 16 16:24:51.675430 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.675221 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wczzr\" (UniqueName: \"kubernetes.io/projected/dd857a2c-74f8-404b-8c76-8b15616fb405-kube-api-access-wczzr\") pod \"image-registry-856dbdf944-r9ssw\" (UID: \"dd857a2c-74f8-404b-8c76-8b15616fb405\") " pod="openshift-image-registry/image-registry-856dbdf944-r9ssw" Apr 16 16:24:51.675430 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.675260 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/486fb98c-ba32-4f4a-b74b-77594655b680-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-zn7jk\" (UID: \"486fb98c-ba32-4f4a-b74b-77594655b680\") " pod="openshift-insights/insights-runtime-extractor-zn7jk" Apr 16 16:24:51.675430 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.675306 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/dd857a2c-74f8-404b-8c76-8b15616fb405-image-registry-private-configuration\") pod \"image-registry-856dbdf944-r9ssw\" (UID: \"dd857a2c-74f8-404b-8c76-8b15616fb405\") " pod="openshift-image-registry/image-registry-856dbdf944-r9ssw" Apr 16 16:24:51.675430 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.675332 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/486fb98c-ba32-4f4a-b74b-77594655b680-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-zn7jk\" (UID: \"486fb98c-ba32-4f4a-b74b-77594655b680\") " pod="openshift-insights/insights-runtime-extractor-zn7jk" Apr 16 16:24:51.675430 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.675393 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dd857a2c-74f8-404b-8c76-8b15616fb405-ca-trust-extracted\") pod \"image-registry-856dbdf944-r9ssw\" (UID: \"dd857a2c-74f8-404b-8c76-8b15616fb405\") " pod="openshift-image-registry/image-registry-856dbdf944-r9ssw" Apr 16 16:24:51.675430 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.675420 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dd857a2c-74f8-404b-8c76-8b15616fb405-installation-pull-secrets\") pod \"image-registry-856dbdf944-r9ssw\" (UID: \"dd857a2c-74f8-404b-8c76-8b15616fb405\") " pod="openshift-image-registry/image-registry-856dbdf944-r9ssw" Apr 16 16:24:51.675879 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.675564 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0def5104-1269-46d2-8b59-88b426ff3a84-tmp-dir\") pod \"dns-default-z46sx\" (UID: 
\"0def5104-1269-46d2-8b59-88b426ff3a84\") " pod="openshift-dns/dns-default-z46sx" Apr 16 16:24:51.675879 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.675596 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0def5104-1269-46d2-8b59-88b426ff3a84-metrics-tls\") pod \"dns-default-z46sx\" (UID: \"0def5104-1269-46d2-8b59-88b426ff3a84\") " pod="openshift-dns/dns-default-z46sx" Apr 16 16:24:51.675879 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.675630 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0def5104-1269-46d2-8b59-88b426ff3a84-config-volume\") pod \"dns-default-z46sx\" (UID: \"0def5104-1269-46d2-8b59-88b426ff3a84\") " pod="openshift-dns/dns-default-z46sx" Apr 16 16:24:51.675879 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.675653 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/486fb98c-ba32-4f4a-b74b-77594655b680-crio-socket\") pod \"insights-runtime-extractor-zn7jk\" (UID: \"486fb98c-ba32-4f4a-b74b-77594655b680\") " pod="openshift-insights/insights-runtime-extractor-zn7jk" Apr 16 16:24:51.675879 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.675730 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dd857a2c-74f8-404b-8c76-8b15616fb405-registry-certificates\") pod \"image-registry-856dbdf944-r9ssw\" (UID: \"dd857a2c-74f8-404b-8c76-8b15616fb405\") " pod="openshift-image-registry/image-registry-856dbdf944-r9ssw" Apr 16 16:24:51.675879 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.675769 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/486fb98c-ba32-4f4a-b74b-77594655b680-data-volume\") pod \"insights-runtime-extractor-zn7jk\" (UID: \"486fb98c-ba32-4f4a-b74b-77594655b680\") " pod="openshift-insights/insights-runtime-extractor-zn7jk" Apr 16 16:24:51.675879 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.675797 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdmf7\" (UniqueName: \"kubernetes.io/projected/486fb98c-ba32-4f4a-b74b-77594655b680-kube-api-access-fdmf7\") pod \"insights-runtime-extractor-zn7jk\" (UID: \"486fb98c-ba32-4f4a-b74b-77594655b680\") " pod="openshift-insights/insights-runtime-extractor-zn7jk" Apr 16 16:24:51.675879 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.675862 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dd857a2c-74f8-404b-8c76-8b15616fb405-ca-trust-extracted\") pod \"image-registry-856dbdf944-r9ssw\" (UID: \"dd857a2c-74f8-404b-8c76-8b15616fb405\") " pod="openshift-image-registry/image-registry-856dbdf944-r9ssw" Apr 16 16:24:51.676293 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.676275 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd857a2c-74f8-404b-8c76-8b15616fb405-trusted-ca\") pod \"image-registry-856dbdf944-r9ssw\" (UID: \"dd857a2c-74f8-404b-8c76-8b15616fb405\") " pod="openshift-image-registry/image-registry-856dbdf944-r9ssw" Apr 16 16:24:51.676533 ip-10-0-128-173 
kubenswrapper[2572]: I0416 16:24:51.676500 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dd857a2c-74f8-404b-8c76-8b15616fb405-registry-certificates\") pod \"image-registry-856dbdf944-r9ssw\" (UID: \"dd857a2c-74f8-404b-8c76-8b15616fb405\") " pod="openshift-image-registry/image-registry-856dbdf944-r9ssw" Apr 16 16:24:51.679466 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.679439 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/dd857a2c-74f8-404b-8c76-8b15616fb405-image-registry-private-configuration\") pod \"image-registry-856dbdf944-r9ssw\" (UID: \"dd857a2c-74f8-404b-8c76-8b15616fb405\") " pod="openshift-image-registry/image-registry-856dbdf944-r9ssw" Apr 16 16:24:51.679585 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.679473 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dd857a2c-74f8-404b-8c76-8b15616fb405-registry-tls\") pod \"image-registry-856dbdf944-r9ssw\" (UID: \"dd857a2c-74f8-404b-8c76-8b15616fb405\") " pod="openshift-image-registry/image-registry-856dbdf944-r9ssw" Apr 16 16:24:51.679585 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.679486 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dd857a2c-74f8-404b-8c76-8b15616fb405-installation-pull-secrets\") pod \"image-registry-856dbdf944-r9ssw\" (UID: \"dd857a2c-74f8-404b-8c76-8b15616fb405\") " pod="openshift-image-registry/image-registry-856dbdf944-r9ssw" Apr 16 16:24:51.700151 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.700122 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-czxkg"] Apr 16 16:24:51.704971 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.704941 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dd857a2c-74f8-404b-8c76-8b15616fb405-bound-sa-token\") pod \"image-registry-856dbdf944-r9ssw\" (UID: \"dd857a2c-74f8-404b-8c76-8b15616fb405\") " pod="openshift-image-registry/image-registry-856dbdf944-r9ssw" Apr 16 16:24:51.717660 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.717627 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wczzr\" (UniqueName: \"kubernetes.io/projected/dd857a2c-74f8-404b-8c76-8b15616fb405-kube-api-access-wczzr\") pod \"image-registry-856dbdf944-r9ssw\" (UID: \"dd857a2c-74f8-404b-8c76-8b15616fb405\") " pod="openshift-image-registry/image-registry-856dbdf944-r9ssw" Apr 16 16:24:51.717789 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.717750 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-czxkg" Apr 16 16:24:51.722533 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.722511 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Apr 16 16:24:51.723665 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.723407 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Apr 16 16:24:51.723665 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.723595 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Apr 16 16:24:51.726639 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.726112 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-hp8vh\"" Apr 16 16:24:51.726639 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.726131 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-czxkg"] Apr 16 16:24:51.776676 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.776643 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/486fb98c-ba32-4f4a-b74b-77594655b680-data-volume\") pod \"insights-runtime-extractor-zn7jk\" (UID: \"486fb98c-ba32-4f4a-b74b-77594655b680\") " pod="openshift-insights/insights-runtime-extractor-zn7jk" Apr 16 16:24:51.776676 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.776678 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fdmf7\" (UniqueName: \"kubernetes.io/projected/486fb98c-ba32-4f4a-b74b-77594655b680-kube-api-access-fdmf7\") pod \"insights-runtime-extractor-zn7jk\" (UID: \"486fb98c-ba32-4f4a-b74b-77594655b680\") " pod="openshift-insights/insights-runtime-extractor-zn7jk" Apr 16 16:24:51.776923 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.776700 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/424c900c-278c-4d36-8442-3875c8baf989-cert\") pod \"ingress-canary-czxkg\" (UID: \"424c900c-278c-4d36-8442-3875c8baf989\") " pod="openshift-ingress-canary/ingress-canary-czxkg" Apr 16 16:24:51.776923 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.776723 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4tnlt\" (UniqueName: \"kubernetes.io/projected/0def5104-1269-46d2-8b59-88b426ff3a84-kube-api-access-4tnlt\") pod \"dns-default-z46sx\" (UID: \"0def5104-1269-46d2-8b59-88b426ff3a84\") " pod="openshift-dns/dns-default-z46sx" Apr 16 16:24:51.776923 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.776763 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/486fb98c-ba32-4f4a-b74b-77594655b680-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-zn7jk\" (UID: \"486fb98c-ba32-4f4a-b74b-77594655b680\") " pod="openshift-insights/insights-runtime-extractor-zn7jk" Apr 16 16:24:51.776923 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.776780 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dlrl\" (UniqueName: 
\"kubernetes.io/projected/424c900c-278c-4d36-8442-3875c8baf989-kube-api-access-6dlrl\") pod \"ingress-canary-czxkg\" (UID: \"424c900c-278c-4d36-8442-3875c8baf989\") " pod="openshift-ingress-canary/ingress-canary-czxkg" Apr 16 16:24:51.776923 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.776893 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/486fb98c-ba32-4f4a-b74b-77594655b680-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-zn7jk\" (UID: \"486fb98c-ba32-4f4a-b74b-77594655b680\") " pod="openshift-insights/insights-runtime-extractor-zn7jk" Apr 16 16:24:51.777166 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.776958 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0def5104-1269-46d2-8b59-88b426ff3a84-tmp-dir\") pod \"dns-default-z46sx\" (UID: \"0def5104-1269-46d2-8b59-88b426ff3a84\") " pod="openshift-dns/dns-default-z46sx" Apr 16 16:24:51.777166 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.776990 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0def5104-1269-46d2-8b59-88b426ff3a84-metrics-tls\") pod \"dns-default-z46sx\" (UID: \"0def5104-1269-46d2-8b59-88b426ff3a84\") " pod="openshift-dns/dns-default-z46sx" Apr 16 16:24:51.777166 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.777018 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0def5104-1269-46d2-8b59-88b426ff3a84-config-volume\") pod \"dns-default-z46sx\" (UID: \"0def5104-1269-46d2-8b59-88b426ff3a84\") " pod="openshift-dns/dns-default-z46sx" Apr 16 16:24:51.777166 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.777042 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/486fb98c-ba32-4f4a-b74b-77594655b680-crio-socket\") pod \"insights-runtime-extractor-zn7jk\" (UID: \"486fb98c-ba32-4f4a-b74b-77594655b680\") " pod="openshift-insights/insights-runtime-extractor-zn7jk" Apr 16 16:24:51.777385 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.777168 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/486fb98c-ba32-4f4a-b74b-77594655b680-data-volume\") pod \"insights-runtime-extractor-zn7jk\" (UID: \"486fb98c-ba32-4f4a-b74b-77594655b680\") " pod="openshift-insights/insights-runtime-extractor-zn7jk" Apr 16 16:24:51.777385 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.777270 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/486fb98c-ba32-4f4a-b74b-77594655b680-crio-socket\") pod \"insights-runtime-extractor-zn7jk\" (UID: \"486fb98c-ba32-4f4a-b74b-77594655b680\") " pod="openshift-insights/insights-runtime-extractor-zn7jk" Apr 16 16:24:51.777385 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.777336 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0def5104-1269-46d2-8b59-88b426ff3a84-tmp-dir\") pod \"dns-default-z46sx\" (UID: \"0def5104-1269-46d2-8b59-88b426ff3a84\") " pod="openshift-dns/dns-default-z46sx" Apr 16 16:24:51.777481 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.777459 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/486fb98c-ba32-4f4a-b74b-77594655b680-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-zn7jk\" (UID: \"486fb98c-ba32-4f4a-b74b-77594655b680\") " pod="openshift-insights/insights-runtime-extractor-zn7jk" Apr 16 16:24:51.777657 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.777636 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0def5104-1269-46d2-8b59-88b426ff3a84-config-volume\") pod \"dns-default-z46sx\" (UID: \"0def5104-1269-46d2-8b59-88b426ff3a84\") " pod="openshift-dns/dns-default-z46sx" Apr 16 16:24:51.779356 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.779333 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0def5104-1269-46d2-8b59-88b426ff3a84-metrics-tls\") pod \"dns-default-z46sx\" (UID: \"0def5104-1269-46d2-8b59-88b426ff3a84\") " pod="openshift-dns/dns-default-z46sx" Apr 16 16:24:51.787306 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.787283 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/486fb98c-ba32-4f4a-b74b-77594655b680-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-zn7jk\" (UID: \"486fb98c-ba32-4f4a-b74b-77594655b680\") " pod="openshift-insights/insights-runtime-extractor-zn7jk" Apr 16 16:24:51.811375 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.811338 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdmf7\" (UniqueName: \"kubernetes.io/projected/486fb98c-ba32-4f4a-b74b-77594655b680-kube-api-access-fdmf7\") pod \"insights-runtime-extractor-zn7jk\" (UID: \"486fb98c-ba32-4f4a-b74b-77594655b680\") " pod="openshift-insights/insights-runtime-extractor-zn7jk" Apr 16 16:24:51.813959 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.813942 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tnlt\" (UniqueName: \"kubernetes.io/projected/0def5104-1269-46d2-8b59-88b426ff3a84-kube-api-access-4tnlt\") pod \"dns-default-z46sx\" (UID: \"0def5104-1269-46d2-8b59-88b426ff3a84\") " pod="openshift-dns/dns-default-z46sx" Apr 16 16:24:51.877483 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.877396 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-856dbdf944-r9ssw" Apr 16 16:24:51.877615 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.877519 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/424c900c-278c-4d36-8442-3875c8baf989-cert\") pod \"ingress-canary-czxkg\" (UID: \"424c900c-278c-4d36-8442-3875c8baf989\") " pod="openshift-ingress-canary/ingress-canary-czxkg" Apr 16 16:24:51.877615 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.877571 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6dlrl\" (UniqueName: \"kubernetes.io/projected/424c900c-278c-4d36-8442-3875c8baf989-kube-api-access-6dlrl\") pod \"ingress-canary-czxkg\" (UID: \"424c900c-278c-4d36-8442-3875c8baf989\") " pod="openshift-ingress-canary/ingress-canary-czxkg" Apr 16 16:24:51.880049 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.880024 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/424c900c-278c-4d36-8442-3875c8baf989-cert\") pod \"ingress-canary-czxkg\" (UID: \"424c900c-278c-4d36-8442-3875c8baf989\") " pod="openshift-ingress-canary/ingress-canary-czxkg" Apr 16 16:24:51.902881 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.902852 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-runtime-extractor-zn7jk" Apr 16 16:24:51.905034 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.905003 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dlrl\" (UniqueName: \"kubernetes.io/projected/424c900c-278c-4d36-8442-3875c8baf989-kube-api-access-6dlrl\") pod \"ingress-canary-czxkg\" (UID: \"424c900c-278c-4d36-8442-3875c8baf989\") " pod="openshift-ingress-canary/ingress-canary-czxkg" Apr 16 16:24:51.919175 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:51.919129 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-z46sx" Apr 16 16:24:52.027855 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:52.027793 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-czxkg" Apr 16 16:24:52.090474 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:52.089861 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-zn7jk"] Apr 16 16:24:52.120354 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:52.120329 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-z46sx"] Apr 16 16:24:52.123019 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:52.122757 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-856dbdf944-r9ssw"] Apr 16 16:24:52.125061 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:52.125027 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0def5104_1269_46d2_8b59_88b426ff3a84.slice/crio-bf717818597dc08eae9b789fc338ad1e49e8d84ca2d67528ff79c3af37bce2a8 WatchSource:0}: Error finding container bf717818597dc08eae9b789fc338ad1e49e8d84ca2d67528ff79c3af37bce2a8: Status 404 returned error can't find the container with id bf717818597dc08eae9b789fc338ad1e49e8d84ca2d67528ff79c3af37bce2a8 Apr 16 16:24:52.126539 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:52.126520 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd857a2c_74f8_404b_8c76_8b15616fb405.slice/crio-94ba4637be925048f70d0413ba892478365358076cf693b33937c1f83e4bbfe6 WatchSource:0}: Error finding container 94ba4637be925048f70d0413ba892478365358076cf693b33937c1f83e4bbfe6: Status 404 returned error can't find the container with id 94ba4637be925048f70d0413ba892478365358076cf693b33937c1f83e4bbfe6 Apr 16 16:24:52.153136 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:52.153107 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z46sx" event={"ID":"0def5104-1269-46d2-8b59-88b426ff3a84","Type":"ContainerStarted","Data":"bf717818597dc08eae9b789fc338ad1e49e8d84ca2d67528ff79c3af37bce2a8"} Apr 16 16:24:52.154133 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:52.154104 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-zn7jk" event={"ID":"486fb98c-ba32-4f4a-b74b-77594655b680","Type":"ContainerStarted","Data":"fbf2c935bded64c1cb2947625557ad582b029bc10239bc4ccfae9409bc64d238"} Apr 16 16:24:52.155188 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:52.155164 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-856dbdf944-r9ssw" event={"ID":"dd857a2c-74f8-404b-8c76-8b15616fb405","Type":"ContainerStarted","Data":"94ba4637be925048f70d0413ba892478365358076cf693b33937c1f83e4bbfe6"} Apr 16 16:24:52.194104 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:52.194080 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-czxkg"] Apr 16 16:24:52.195437 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:24:52.195412 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod424c900c_278c_4d36_8442_3875c8baf989.slice/crio-45ed74531fa598155bfc77f661486808aad1a76a64a127a962d1228878a6407a WatchSource:0}: Error finding container 45ed74531fa598155bfc77f661486808aad1a76a64a127a962d1228878a6407a: Status 404 returned error can't find the container with id 45ed74531fa598155bfc77f661486808aad1a76a64a127a962d1228878a6407a Apr 16 
16:24:52.905084 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:52.905051 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-225wk" Apr 16 16:24:52.905369 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:52.905052 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ngxj6" Apr 16 16:24:52.905369 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:52.905052 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29gd5" Apr 16 16:24:52.909823 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:52.909799 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"original-pull-secret\"" Apr 16 16:24:52.909956 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:52.909830 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Apr 16 16:24:52.909956 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:52.909919 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-w5tzs\"" Apr 16 16:24:52.910838 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:52.910807 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Apr 16 16:24:52.910954 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:52.910898 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Apr 16 16:24:52.911049 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:52.911024 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-f6w5w\"" Apr 16 16:24:53.158957 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:53.158848 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-czxkg" event={"ID":"424c900c-278c-4d36-8442-3875c8baf989","Type":"ContainerStarted","Data":"45ed74531fa598155bfc77f661486808aad1a76a64a127a962d1228878a6407a"} Apr 16 16:24:53.160493 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:53.160447 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-zn7jk" event={"ID":"486fb98c-ba32-4f4a-b74b-77594655b680","Type":"ContainerStarted","Data":"aa0adccde5048dc0837f193ae20c44f70b4701e5dfbfb3abc1fc529344fbd703"} Apr 16 16:24:53.162021 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:53.161982 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-856dbdf944-r9ssw" event={"ID":"dd857a2c-74f8-404b-8c76-8b15616fb405","Type":"ContainerStarted","Data":"994e6b92ecab37ccf7fceeda321d4d774dc10f3138904c9593edd8a2770cbd13"} Apr 16 16:24:53.162210 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:53.162191 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-856dbdf944-r9ssw" Apr 16 16:24:53.937878 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:53.937814 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-856dbdf944-r9ssw" podStartSLOduration=2.937795697 podStartE2EDuration="2.937795697s" podCreationTimestamp="2026-04-16 16:24:51 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 16:24:53.254197211 +0000 UTC m=+50.033044101" watchObservedRunningTime="2026-04-16 16:24:53.937795697 +0000 UTC m=+50.716642591" Apr 16 16:24:55.167980 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:55.167946 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-zn7jk" event={"ID":"486fb98c-ba32-4f4a-b74b-77594655b680","Type":"ContainerStarted","Data":"5c566898bfd71778f689e51d8806f7fac9a7d00be345312001a65c5c1c8a2dc4"} Apr 16 16:24:55.169130 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:55.169102 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z46sx" event={"ID":"0def5104-1269-46d2-8b59-88b426ff3a84","Type":"ContainerStarted","Data":"2b374d9b8e81276a202cc7b8ba7b4467a83b8e822845523b210ad23d0a308d3e"} Apr 16 16:24:55.170219 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:55.170198 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-czxkg" event={"ID":"424c900c-278c-4d36-8442-3875c8baf989","Type":"ContainerStarted","Data":"dd792d64faa7a7e581e3c0eb50b19c1de681bd278ccfb08f03c4fe57de02bfc7"} Apr 16 16:24:55.188220 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:55.188166 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-czxkg" podStartSLOduration=1.4994498809999999 podStartE2EDuration="4.188148143s" podCreationTimestamp="2026-04-16 16:24:51 +0000 UTC" firstStartedPulling="2026-04-16 16:24:52.197292076 +0000 UTC m=+48.976138945" lastFinishedPulling="2026-04-16 16:24:54.885990335 +0000 UTC m=+51.664837207" observedRunningTime="2026-04-16 16:24:55.186997845 +0000 UTC m=+51.965844737" watchObservedRunningTime="2026-04-16 16:24:55.188148143 +0000 UTC m=+51.966995035" Apr 16 16:24:56.174017 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:56.173977 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z46sx" event={"ID":"0def5104-1269-46d2-8b59-88b426ff3a84","Type":"ContainerStarted","Data":"23a25e5f11e7c0a7674c81f90292ef0b83f69b116e39680bbe91f6d489af55a1"} Apr 16 16:24:56.198431 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:56.198366 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-z46sx" podStartSLOduration=2.444256527 podStartE2EDuration="5.198343743s" podCreationTimestamp="2026-04-16 16:24:51 +0000 UTC" firstStartedPulling="2026-04-16 16:24:52.126899464 +0000 UTC m=+48.905746333" lastFinishedPulling="2026-04-16 16:24:54.880986667 +0000 UTC m=+51.659833549" observedRunningTime="2026-04-16 16:24:56.19560845 +0000 UTC m=+52.974455342" watchObservedRunningTime="2026-04-16 16:24:56.198343743 +0000 UTC m=+52.977190634" Apr 16 16:24:57.178678 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:57.178635 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-zn7jk" event={"ID":"486fb98c-ba32-4f4a-b74b-77594655b680","Type":"ContainerStarted","Data":"ac94f86e7eac3e8a962d90969cdcf74502cb65e0861113466623643a2dcbf433"} Apr 16 16:24:57.179146 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:57.178823 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-z46sx" Apr 16 16:24:57.217259 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:24:57.217188 2572 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-insights/insights-runtime-extractor-zn7jk" podStartSLOduration=1.8128641060000001 podStartE2EDuration="6.217175191s" podCreationTimestamp="2026-04-16 16:24:51 +0000 UTC" firstStartedPulling="2026-04-16 16:24:52.231738297 +0000 UTC m=+49.010585183" lastFinishedPulling="2026-04-16 16:24:56.636049397 +0000 UTC m=+53.414896268" observedRunningTime="2026-04-16 16:24:57.216466566 +0000 UTC m=+53.995313480" watchObservedRunningTime="2026-04-16 16:24:57.217175191 +0000 UTC m=+53.996022082" Apr 16 16:25:03.117664 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:03.117635 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-csjjh" Apr 16 16:25:03.798084 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:03.798053 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-sck65"] Apr 16 16:25:03.803057 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:03.803034 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-sck65" Apr 16 16:25:03.808790 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:03.808767 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-admission-webhook-tls\"" Apr 16 16:25:03.809940 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:03.809919 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-admission-webhook-dockercfg-g75qv\"" Apr 16 16:25:03.816745 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:03.816721 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-sck65"] Apr 16 16:25:03.897917 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:03.897886 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/2205f8f0-ea2d-4582-acb9-994bc6c9921b-tls-certificates\") pod \"prometheus-operator-admission-webhook-9cb97cd87-sck65\" (UID: \"2205f8f0-ea2d-4582-acb9-994bc6c9921b\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-sck65" Apr 16 16:25:03.999147 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:03.999098 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/2205f8f0-ea2d-4582-acb9-994bc6c9921b-tls-certificates\") pod \"prometheus-operator-admission-webhook-9cb97cd87-sck65\" (UID: \"2205f8f0-ea2d-4582-acb9-994bc6c9921b\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-sck65" Apr 16 16:25:04.001575 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:04.001549 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/2205f8f0-ea2d-4582-acb9-994bc6c9921b-tls-certificates\") pod \"prometheus-operator-admission-webhook-9cb97cd87-sck65\" (UID: \"2205f8f0-ea2d-4582-acb9-994bc6c9921b\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-sck65" Apr 16 16:25:04.112182 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:04.112095 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-sck65" Apr 16 16:25:04.236016 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:04.235926 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-sck65"] Apr 16 16:25:04.238400 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:25:04.238369 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2205f8f0_ea2d_4582_acb9_994bc6c9921b.slice/crio-4f34adc8d01f5af58f92e9270a11d804679607789dee2d2d938985af29fa3e4f WatchSource:0}: Error finding container 4f34adc8d01f5af58f92e9270a11d804679607789dee2d2d938985af29fa3e4f: Status 404 returned error can't find the container with id 4f34adc8d01f5af58f92e9270a11d804679607789dee2d2d938985af29fa3e4f Apr 16 16:25:05.201648 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:05.201614 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-sck65" event={"ID":"2205f8f0-ea2d-4582-acb9-994bc6c9921b","Type":"ContainerStarted","Data":"4f34adc8d01f5af58f92e9270a11d804679607789dee2d2d938985af29fa3e4f"} Apr 16 16:25:07.184821 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:07.184792 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-z46sx" Apr 16 16:25:07.209708 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:07.209677 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-sck65" event={"ID":"2205f8f0-ea2d-4582-acb9-994bc6c9921b","Type":"ContainerStarted","Data":"7f16abcd370b5d283615e01a47b5b07d19432c9a1fc21ea3c5be1e8c11c9c15c"} Apr 16 16:25:07.209904 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:07.209880 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-sck65" Apr 16 16:25:07.214784 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:07.214759 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-sck65" Apr 16 16:25:07.232775 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:07.232723 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-sck65" podStartSLOduration=1.986791859 podStartE2EDuration="4.232706393s" podCreationTimestamp="2026-04-16 16:25:03 +0000 UTC" firstStartedPulling="2026-04-16 16:25:04.240322331 +0000 UTC m=+61.019169201" lastFinishedPulling="2026-04-16 16:25:06.486236862 +0000 UTC m=+63.265083735" observedRunningTime="2026-04-16 16:25:07.231236343 +0000 UTC m=+64.010083235" watchObservedRunningTime="2026-04-16 16:25:07.232706393 +0000 UTC m=+64.011553285" Apr 16 16:25:08.736492 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:08.736445 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ec99558-e99b-4661-be0c-b68d311f226a-metrics-certs\") pod \"network-metrics-daemon-29gd5\" (UID: \"5ec99558-e99b-4661-be0c-b68d311f226a\") " pod="openshift-multus/network-metrics-daemon-29gd5" Apr 16 16:25:08.739532 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:08.739510 2572 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Apr 16 16:25:08.748994 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:08.748974 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ec99558-e99b-4661-be0c-b68d311f226a-metrics-certs\") pod \"network-metrics-daemon-29gd5\" (UID: \"5ec99558-e99b-4661-be0c-b68d311f226a\") " pod="openshift-multus/network-metrics-daemon-29gd5" Apr 16 16:25:08.832958 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:08.832931 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-f6w5w\"" Apr 16 16:25:08.836876 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:08.836855 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7vjgs\" (UniqueName: \"kubernetes.io/projected/b3cf9670-2b3f-4b96-aff6-4c414454a507-kube-api-access-7vjgs\") pod \"network-check-target-225wk\" (UID: \"b3cf9670-2b3f-4b96-aff6-4c414454a507\") " pod="openshift-network-diagnostics/network-check-target-225wk" Apr 16 16:25:08.840289 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:08.840255 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29gd5" Apr 16 16:25:08.840567 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:08.840553 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Apr 16 16:25:08.854437 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:08.854405 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Apr 16 16:25:08.860361 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:08.860334 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vjgs\" (UniqueName: \"kubernetes.io/projected/b3cf9670-2b3f-4b96-aff6-4c414454a507-kube-api-access-7vjgs\") pod \"network-check-target-225wk\" (UID: \"b3cf9670-2b3f-4b96-aff6-4c414454a507\") " pod="openshift-network-diagnostics/network-check-target-225wk" Apr 16 16:25:08.965206 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:08.965172 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-29gd5"] Apr 16 16:25:08.968093 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:25:08.968059 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ec99558_e99b_4661_be0c_b68d311f226a.slice/crio-ce8e2fd57eca635c0f175bc65e2bfffe191911f6c0c5132bd78e056428a3cb88 WatchSource:0}: Error finding container ce8e2fd57eca635c0f175bc65e2bfffe191911f6c0c5132bd78e056428a3cb88: Status 404 returned error can't find the container with id ce8e2fd57eca635c0f175bc65e2bfffe191911f6c0c5132bd78e056428a3cb88 Apr 16 16:25:09.038310 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:09.038206 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/c1e6d4a3-5bed-4a4a-982e-2bc52481870a-original-pull-secret\") pod \"global-pull-secret-syncer-ngxj6\" (UID: \"c1e6d4a3-5bed-4a4a-982e-2bc52481870a\") " pod="kube-system/global-pull-secret-syncer-ngxj6" Apr 16 16:25:09.041577 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:09.041560 2572 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"kube-system\"/\"original-pull-secret\"" Apr 16 16:25:09.050933 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:09.050909 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/c1e6d4a3-5bed-4a4a-982e-2bc52481870a-original-pull-secret\") pod \"global-pull-secret-syncer-ngxj6\" (UID: \"c1e6d4a3-5bed-4a4a-982e-2bc52481870a\") " pod="kube-system/global-pull-secret-syncer-ngxj6" Apr 16 16:25:09.121347 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:09.121316 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-w5tzs\"" Apr 16 16:25:09.123664 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:09.123645 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ngxj6" Apr 16 16:25:09.127309 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:09.127290 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-225wk" Apr 16 16:25:09.216888 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:09.216782 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-29gd5" event={"ID":"5ec99558-e99b-4661-be0c-b68d311f226a","Type":"ContainerStarted","Data":"ce8e2fd57eca635c0f175bc65e2bfffe191911f6c0c5132bd78e056428a3cb88"} Apr 16 16:25:09.260410 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:09.260379 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-ngxj6"] Apr 16 16:25:09.263336 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:25:09.263313 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1e6d4a3_5bed_4a4a_982e_2bc52481870a.slice/crio-e18cc6e10ef1521f0f20d186eafb712bfdb1f087eeb2c08f51c4acba9c9e7849 WatchSource:0}: Error finding container e18cc6e10ef1521f0f20d186eafb712bfdb1f087eeb2c08f51c4acba9c9e7849: Status 404 returned error can't find the container with id e18cc6e10ef1521f0f20d186eafb712bfdb1f087eeb2c08f51c4acba9c9e7849 Apr 16 16:25:09.274004 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:09.273977 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-225wk"] Apr 16 16:25:09.277549 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:25:09.277526 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3cf9670_2b3f_4b96_aff6_4c414454a507.slice/crio-44438c81b6cdf39f0f252ca2525680be4b698f033b42a6f905041b94c08019ca WatchSource:0}: Error finding container 44438c81b6cdf39f0f252ca2525680be4b698f033b42a6f905041b94c08019ca: Status 404 returned error can't find the container with id 44438c81b6cdf39f0f252ca2525680be4b698f033b42a6f905041b94c08019ca Apr 16 16:25:10.220618 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:10.220579 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-ngxj6" event={"ID":"c1e6d4a3-5bed-4a4a-982e-2bc52481870a","Type":"ContainerStarted","Data":"e18cc6e10ef1521f0f20d186eafb712bfdb1f087eeb2c08f51c4acba9c9e7849"} Apr 16 16:25:10.221801 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:10.221775 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-225wk" 
event={"ID":"b3cf9670-2b3f-4b96-aff6-4c414454a507","Type":"ContainerStarted","Data":"44438c81b6cdf39f0f252ca2525680be4b698f033b42a6f905041b94c08019ca"} Apr 16 16:25:11.226393 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:11.226353 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-29gd5" event={"ID":"5ec99558-e99b-4661-be0c-b68d311f226a","Type":"ContainerStarted","Data":"5dae5bbaec1b880a5929d96b1e1917f3948dfc4e5b1da7279796b6fe0f26f880"} Apr 16 16:25:11.226899 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:11.226401 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-29gd5" event={"ID":"5ec99558-e99b-4661-be0c-b68d311f226a","Type":"ContainerStarted","Data":"af559fd60032a509afef6ea94a8c7997f2f0a5da294d03a5eb6ccb57583f0096"} Apr 16 16:25:11.883668 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:11.883595 2572 patch_prober.go:28] interesting pod/image-registry-856dbdf944-r9ssw container/registry namespace/openshift-image-registry: Liveness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 16 16:25:11.883875 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:11.883660 2572 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-856dbdf944-r9ssw" podUID="dd857a2c-74f8-404b-8c76-8b15616fb405" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 16:25:13.233986 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:13.233947 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-225wk" event={"ID":"b3cf9670-2b3f-4b96-aff6-4c414454a507","Type":"ContainerStarted","Data":"886f8c005dd74cf3862ddbc6393fe466df035a75a0997454fe8d6f591f3d503e"} Apr 16 16:25:13.234481 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:13.234080 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-225wk" Apr 16 16:25:13.260862 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:13.260802 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-29gd5" podStartSLOduration=67.826927955 podStartE2EDuration="1m9.260780614s" podCreationTimestamp="2026-04-16 16:24:04 +0000 UTC" firstStartedPulling="2026-04-16 16:25:08.969955855 +0000 UTC m=+65.748802724" lastFinishedPulling="2026-04-16 16:25:10.403808507 +0000 UTC m=+67.182655383" observedRunningTime="2026-04-16 16:25:11.253784778 +0000 UTC m=+68.032631670" watchObservedRunningTime="2026-04-16 16:25:13.260780614 +0000 UTC m=+70.039627506" Apr 16 16:25:13.261054 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:13.260918 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-target-225wk" podStartSLOduration=66.116727008 podStartE2EDuration="1m9.260912373s" podCreationTimestamp="2026-04-16 16:24:04 +0000 UTC" firstStartedPulling="2026-04-16 16:25:09.279330997 +0000 UTC m=+66.058177866" lastFinishedPulling="2026-04-16 16:25:12.423516355 +0000 UTC m=+69.202363231" observedRunningTime="2026-04-16 16:25:13.258049475 +0000 UTC m=+70.036896368" watchObservedRunningTime="2026-04-16 16:25:13.260912373 +0000 UTC m=+70.039759266" Apr 16 16:25:14.168618 ip-10-0-128-173 kubenswrapper[2572]: I0416 
Apr 16 16:25:14.238404 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:14.238366 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-ngxj6" event={"ID":"c1e6d4a3-5bed-4a4a-982e-2bc52481870a","Type":"ContainerStarted","Data":"57c44610455d1b7347a2b99225369f51c3aa45d0e4ab4410c9b333d2ff131488"}
Apr 16 16:25:14.275502 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:14.275450 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/global-pull-secret-syncer-ngxj6" podStartSLOduration=64.707590147 podStartE2EDuration="1m9.275433964s" podCreationTimestamp="2026-04-16 16:24:05 +0000 UTC" firstStartedPulling="2026-04-16 16:25:09.265025795 +0000 UTC m=+66.043872663" lastFinishedPulling="2026-04-16 16:25:13.83286961 +0000 UTC m=+70.611716480" observedRunningTime="2026-04-16 16:25:14.273977682 +0000 UTC m=+71.052824570" watchObservedRunningTime="2026-04-16 16:25:14.275433964 +0000 UTC m=+71.054280898"
Apr 16 16:25:16.394476 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.394444 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-zkjg4"]
Apr 16 16:25:16.397434 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.397418 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-zkjg4"
Apr 16 16:25:16.402125 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.402101 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kube-root-ca.crt\""
Apr 16 16:25:16.403156 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.403136 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-tls\""
Apr 16 16:25:16.403405 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.403387 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"node-exporter-accelerators-collector-config\""
Apr 16 16:25:16.403944 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.403618 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-dockercfg-vdxn6\""
Apr 16 16:25:16.404139 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.404108 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"metrics-client-ca\""
Apr 16 16:25:16.404303 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.404174 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-kube-rbac-proxy-config\""
Apr 16 16:25:16.404442 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.404425 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"openshift-service-ca.crt\""
Apr 16 16:25:16.493670 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.493633 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/c646ba1a-2ec3-42be-ad8a-73615ad5d640-node-exporter-textfile\") pod \"node-exporter-zkjg4\" (UID: \"c646ba1a-2ec3-42be-ad8a-73615ad5d640\") " pod="openshift-monitoring/node-exporter-zkjg4"
Apr 16 16:25:16.493857 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.493683 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/c646ba1a-2ec3-42be-ad8a-73615ad5d640-node-exporter-tls\") pod \"node-exporter-zkjg4\" (UID: \"c646ba1a-2ec3-42be-ad8a-73615ad5d640\") " pod="openshift-monitoring/node-exporter-zkjg4"
Apr 16 16:25:16.493857 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.493705 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c646ba1a-2ec3-42be-ad8a-73615ad5d640-metrics-client-ca\") pod \"node-exporter-zkjg4\" (UID: \"c646ba1a-2ec3-42be-ad8a-73615ad5d640\") " pod="openshift-monitoring/node-exporter-zkjg4"
Apr 16 16:25:16.493857 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.493741 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/c646ba1a-2ec3-42be-ad8a-73615ad5d640-node-exporter-accelerators-collector-config\") pod \"node-exporter-zkjg4\" (UID: \"c646ba1a-2ec3-42be-ad8a-73615ad5d640\") " pod="openshift-monitoring/node-exporter-zkjg4"
Apr 16 16:25:16.493857 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.493767 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/c646ba1a-2ec3-42be-ad8a-73615ad5d640-node-exporter-wtmp\") pod \"node-exporter-zkjg4\" (UID: \"c646ba1a-2ec3-42be-ad8a-73615ad5d640\") " pod="openshift-monitoring/node-exporter-zkjg4"
Apr 16 16:25:16.494052 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.493867 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqwdl\" (UniqueName: \"kubernetes.io/projected/c646ba1a-2ec3-42be-ad8a-73615ad5d640-kube-api-access-wqwdl\") pod \"node-exporter-zkjg4\" (UID: \"c646ba1a-2ec3-42be-ad8a-73615ad5d640\") " pod="openshift-monitoring/node-exporter-zkjg4"
Apr 16 16:25:16.494052 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.493909 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/c646ba1a-2ec3-42be-ad8a-73615ad5d640-root\") pod \"node-exporter-zkjg4\" (UID: \"c646ba1a-2ec3-42be-ad8a-73615ad5d640\") " pod="openshift-monitoring/node-exporter-zkjg4"
Apr 16 16:25:16.494052 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.493939 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c646ba1a-2ec3-42be-ad8a-73615ad5d640-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-zkjg4\" (UID: \"c646ba1a-2ec3-42be-ad8a-73615ad5d640\") " pod="openshift-monitoring/node-exporter-zkjg4"
Apr 16 16:25:16.494052 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.493974 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c646ba1a-2ec3-42be-ad8a-73615ad5d640-sys\") pod \"node-exporter-zkjg4\" (UID: \"c646ba1a-2ec3-42be-ad8a-73615ad5d640\") " pod="openshift-monitoring/node-exporter-zkjg4"
Apr 16 16:25:16.595037 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.595001 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c646ba1a-2ec3-42be-ad8a-73615ad5d640-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-zkjg4\" (UID: \"c646ba1a-2ec3-42be-ad8a-73615ad5d640\") " pod="openshift-monitoring/node-exporter-zkjg4"
\"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c646ba1a-2ec3-42be-ad8a-73615ad5d640-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-zkjg4\" (UID: \"c646ba1a-2ec3-42be-ad8a-73615ad5d640\") " pod="openshift-monitoring/node-exporter-zkjg4" Apr 16 16:25:16.595037 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.595042 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c646ba1a-2ec3-42be-ad8a-73615ad5d640-sys\") pod \"node-exporter-zkjg4\" (UID: \"c646ba1a-2ec3-42be-ad8a-73615ad5d640\") " pod="openshift-monitoring/node-exporter-zkjg4" Apr 16 16:25:16.595315 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.595076 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/c646ba1a-2ec3-42be-ad8a-73615ad5d640-node-exporter-textfile\") pod \"node-exporter-zkjg4\" (UID: \"c646ba1a-2ec3-42be-ad8a-73615ad5d640\") " pod="openshift-monitoring/node-exporter-zkjg4" Apr 16 16:25:16.595315 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.595110 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/c646ba1a-2ec3-42be-ad8a-73615ad5d640-node-exporter-tls\") pod \"node-exporter-zkjg4\" (UID: \"c646ba1a-2ec3-42be-ad8a-73615ad5d640\") " pod="openshift-monitoring/node-exporter-zkjg4" Apr 16 16:25:16.595315 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.595134 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c646ba1a-2ec3-42be-ad8a-73615ad5d640-sys\") pod \"node-exporter-zkjg4\" (UID: \"c646ba1a-2ec3-42be-ad8a-73615ad5d640\") " pod="openshift-monitoring/node-exporter-zkjg4" Apr 16 16:25:16.595315 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.595138 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c646ba1a-2ec3-42be-ad8a-73615ad5d640-metrics-client-ca\") pod \"node-exporter-zkjg4\" (UID: \"c646ba1a-2ec3-42be-ad8a-73615ad5d640\") " pod="openshift-monitoring/node-exporter-zkjg4" Apr 16 16:25:16.595315 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.595213 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/c646ba1a-2ec3-42be-ad8a-73615ad5d640-node-exporter-accelerators-collector-config\") pod \"node-exporter-zkjg4\" (UID: \"c646ba1a-2ec3-42be-ad8a-73615ad5d640\") " pod="openshift-monitoring/node-exporter-zkjg4" Apr 16 16:25:16.595315 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.595283 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/c646ba1a-2ec3-42be-ad8a-73615ad5d640-node-exporter-wtmp\") pod \"node-exporter-zkjg4\" (UID: \"c646ba1a-2ec3-42be-ad8a-73615ad5d640\") " pod="openshift-monitoring/node-exporter-zkjg4" Apr 16 16:25:16.595608 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.595337 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wqwdl\" (UniqueName: \"kubernetes.io/projected/c646ba1a-2ec3-42be-ad8a-73615ad5d640-kube-api-access-wqwdl\") pod \"node-exporter-zkjg4\" (UID: \"c646ba1a-2ec3-42be-ad8a-73615ad5d640\") " pod="openshift-monitoring/node-exporter-zkjg4" Apr 16 
Apr 16 16:25:16.595608 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.595437 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/c646ba1a-2ec3-42be-ad8a-73615ad5d640-root\") pod \"node-exporter-zkjg4\" (UID: \"c646ba1a-2ec3-42be-ad8a-73615ad5d640\") " pod="openshift-monitoring/node-exporter-zkjg4"
Apr 16 16:25:16.595608 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.595509 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/c646ba1a-2ec3-42be-ad8a-73615ad5d640-node-exporter-textfile\") pod \"node-exporter-zkjg4\" (UID: \"c646ba1a-2ec3-42be-ad8a-73615ad5d640\") " pod="openshift-monitoring/node-exporter-zkjg4"
Apr 16 16:25:16.595608 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.595530 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/c646ba1a-2ec3-42be-ad8a-73615ad5d640-node-exporter-wtmp\") pod \"node-exporter-zkjg4\" (UID: \"c646ba1a-2ec3-42be-ad8a-73615ad5d640\") " pod="openshift-monitoring/node-exporter-zkjg4"
Apr 16 16:25:16.595853 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.595814 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c646ba1a-2ec3-42be-ad8a-73615ad5d640-metrics-client-ca\") pod \"node-exporter-zkjg4\" (UID: \"c646ba1a-2ec3-42be-ad8a-73615ad5d640\") " pod="openshift-monitoring/node-exporter-zkjg4"
Apr 16 16:25:16.595906 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.595848 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/c646ba1a-2ec3-42be-ad8a-73615ad5d640-node-exporter-accelerators-collector-config\") pod \"node-exporter-zkjg4\" (UID: \"c646ba1a-2ec3-42be-ad8a-73615ad5d640\") " pod="openshift-monitoring/node-exporter-zkjg4"
Apr 16 16:25:16.597525 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.597502 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c646ba1a-2ec3-42be-ad8a-73615ad5d640-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-zkjg4\" (UID: \"c646ba1a-2ec3-42be-ad8a-73615ad5d640\") " pod="openshift-monitoring/node-exporter-zkjg4"
Apr 16 16:25:16.597649 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.597629 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/c646ba1a-2ec3-42be-ad8a-73615ad5d640-node-exporter-tls\") pod \"node-exporter-zkjg4\" (UID: \"c646ba1a-2ec3-42be-ad8a-73615ad5d640\") " pod="openshift-monitoring/node-exporter-zkjg4"
Apr 16 16:25:16.638972 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.638945 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqwdl\" (UniqueName: \"kubernetes.io/projected/c646ba1a-2ec3-42be-ad8a-73615ad5d640-kube-api-access-wqwdl\") pod \"node-exporter-zkjg4\" (UID: \"c646ba1a-2ec3-42be-ad8a-73615ad5d640\") " pod="openshift-monitoring/node-exporter-zkjg4"
\"c646ba1a-2ec3-42be-ad8a-73615ad5d640\") " pod="openshift-monitoring/node-exporter-zkjg4" Apr 16 16:25:16.706230 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:16.706182 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-zkjg4" Apr 16 16:25:16.714948 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:25:16.714922 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc646ba1a_2ec3_42be_ad8a_73615ad5d640.slice/crio-1e4e51960982ebcc3725951fa849af7d3fe97a47b3838fbb1a06317dce26b8d4 WatchSource:0}: Error finding container 1e4e51960982ebcc3725951fa849af7d3fe97a47b3838fbb1a06317dce26b8d4: Status 404 returned error can't find the container with id 1e4e51960982ebcc3725951fa849af7d3fe97a47b3838fbb1a06317dce26b8d4 Apr 16 16:25:17.248910 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.248869 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-zkjg4" event={"ID":"c646ba1a-2ec3-42be-ad8a-73615ad5d640","Type":"ContainerStarted","Data":"1e4e51960982ebcc3725951fa849af7d3fe97a47b3838fbb1a06317dce26b8d4"} Apr 16 16:25:17.408771 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.408716 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 16 16:25:17.412549 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.412523 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Apr 16 16:25:17.417297 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.416976 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy-web\"" Apr 16 16:25:17.417297 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.416977 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-generated\"" Apr 16 16:25:17.417297 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.417104 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-web-config\"" Apr 16 16:25:17.417297 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.416978 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy-metric\"" Apr 16 16:25:17.417297 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.417031 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-tls-assets-0\"" Apr 16 16:25:17.417297 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.417105 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy\"" Apr 16 16:25:17.417676 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.417422 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-cluster-tls-config\"" Apr 16 16:25:17.417676 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.417547 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-tls\"" Apr 16 16:25:17.421820 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.421798 2572 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-monitoring\"/\"alertmanager-main-dockercfg-cv45q\"" Apr 16 16:25:17.422803 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.422780 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"alertmanager-trusted-ca-bundle\"" Apr 16 16:25:17.434530 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.434502 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 16 16:25:17.500987 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.500901 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtz2p\" (UniqueName: \"kubernetes.io/projected/6b148346-fcbc-424a-9f33-775948eaf93c-kube-api-access-mtz2p\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 16:25:17.500987 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.500942 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/6b148346-fcbc-424a-9f33-775948eaf93c-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 16:25:17.501202 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.501020 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/6b148346-fcbc-424a-9f33-775948eaf93c-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 16:25:17.501202 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.501065 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6b148346-fcbc-424a-9f33-775948eaf93c-web-config\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 16:25:17.501202 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.501092 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/6b148346-fcbc-424a-9f33-775948eaf93c-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 16:25:17.501202 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.501123 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6b148346-fcbc-424a-9f33-775948eaf93c-config-out\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 16:25:17.501202 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.501199 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6b148346-fcbc-424a-9f33-775948eaf93c-tls-assets\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 16:25:17.501420 ip-10-0-128-173 kubenswrapper[2572]: I0416 
Apr 16 16:25:17.501420 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.501283 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6b148346-fcbc-424a-9f33-775948eaf93c-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 16:25:17.501420 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.501313 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6b148346-fcbc-424a-9f33-775948eaf93c-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 16:25:17.501420 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.501335 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/6b148346-fcbc-424a-9f33-775948eaf93c-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 16:25:17.501420 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.501365 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b148346-fcbc-424a-9f33-775948eaf93c-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 16:25:17.501579 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.501421 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6b148346-fcbc-424a-9f33-775948eaf93c-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 16:25:17.602260 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.602183 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/6b148346-fcbc-424a-9f33-775948eaf93c-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 16:25:17.602260 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.602263 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/6b148346-fcbc-424a-9f33-775948eaf93c-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 16:25:17.602477 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.602295 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6b148346-fcbc-424a-9f33-775948eaf93c-web-config\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0"
"operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6b148346-fcbc-424a-9f33-775948eaf93c-web-config\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 16:25:17.602477 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.602320 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/6b148346-fcbc-424a-9f33-775948eaf93c-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 16:25:17.602477 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.602342 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6b148346-fcbc-424a-9f33-775948eaf93c-config-out\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 16:25:17.602477 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.602386 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6b148346-fcbc-424a-9f33-775948eaf93c-tls-assets\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 16:25:17.602477 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.602413 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6b148346-fcbc-424a-9f33-775948eaf93c-config-volume\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 16:25:17.602734 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.602646 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/6b148346-fcbc-424a-9f33-775948eaf93c-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 16:25:17.602792 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.602779 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6b148346-fcbc-424a-9f33-775948eaf93c-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 16:25:17.602841 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.602819 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6b148346-fcbc-424a-9f33-775948eaf93c-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 16:25:17.602893 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.602855 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/6b148346-fcbc-424a-9f33-775948eaf93c-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " 
pod="openshift-monitoring/alertmanager-main-0" Apr 16 16:25:17.602893 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.602888 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b148346-fcbc-424a-9f33-775948eaf93c-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 16:25:17.602990 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.602926 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6b148346-fcbc-424a-9f33-775948eaf93c-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 16:25:17.602990 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.602959 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mtz2p\" (UniqueName: \"kubernetes.io/projected/6b148346-fcbc-424a-9f33-775948eaf93c-kube-api-access-mtz2p\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 16:25:17.603630 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.603601 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6b148346-fcbc-424a-9f33-775948eaf93c-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 16:25:17.604080 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.604051 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b148346-fcbc-424a-9f33-775948eaf93c-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 16:25:17.605890 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.605868 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6b148346-fcbc-424a-9f33-775948eaf93c-config-out\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 16:25:17.606292 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.606126 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/6b148346-fcbc-424a-9f33-775948eaf93c-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 16:25:17.606292 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.606142 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/6b148346-fcbc-424a-9f33-775948eaf93c-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 16:25:17.606419 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.606306 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"web-config\" (UniqueName: \"kubernetes.io/secret/6b148346-fcbc-424a-9f33-775948eaf93c-web-config\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 16:25:17.606419 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.606323 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6b148346-fcbc-424a-9f33-775948eaf93c-tls-assets\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 16:25:17.606419 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.606369 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6b148346-fcbc-424a-9f33-775948eaf93c-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 16:25:17.606579 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.606562 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6b148346-fcbc-424a-9f33-775948eaf93c-config-volume\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 16:25:17.606754 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.606733 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6b148346-fcbc-424a-9f33-775948eaf93c-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 16:25:17.607212 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.607190 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/6b148346-fcbc-424a-9f33-775948eaf93c-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 16:25:17.611689 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.611670 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtz2p\" (UniqueName: \"kubernetes.io/projected/6b148346-fcbc-424a-9f33-775948eaf93c-kube-api-access-mtz2p\") pod \"alertmanager-main-0\" (UID: \"6b148346-fcbc-424a-9f33-775948eaf93c\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 16:25:17.724293 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.724264 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Apr 16 16:25:17.863208 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:17.863121 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 16 16:25:17.865166 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:25:17.865137 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b148346_fcbc_424a_9f33_775948eaf93c.slice/crio-306cb5c3b53a49ce06446eae526f2cdd04bdd87e3c83d94b18c0bfd8f58fe7a3 WatchSource:0}: Error finding container 306cb5c3b53a49ce06446eae526f2cdd04bdd87e3c83d94b18c0bfd8f58fe7a3: Status 404 returned error can't find the container with id 306cb5c3b53a49ce06446eae526f2cdd04bdd87e3c83d94b18c0bfd8f58fe7a3 Apr 16 16:25:18.254632 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:18.254592 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6b148346-fcbc-424a-9f33-775948eaf93c","Type":"ContainerStarted","Data":"306cb5c3b53a49ce06446eae526f2cdd04bdd87e3c83d94b18c0bfd8f58fe7a3"} Apr 16 16:25:18.255901 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:18.255874 2572 generic.go:358] "Generic (PLEG): container finished" podID="c646ba1a-2ec3-42be-ad8a-73615ad5d640" containerID="6358fd72cc246d987ebdbfb1ff59308a342707bc8655e759f582213304b287d5" exitCode=0 Apr 16 16:25:18.256031 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:18.255930 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-zkjg4" event={"ID":"c646ba1a-2ec3-42be-ad8a-73615ad5d640","Type":"ContainerDied","Data":"6358fd72cc246d987ebdbfb1ff59308a342707bc8655e759f582213304b287d5"} Apr 16 16:25:19.259567 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.259530 2572 generic.go:358] "Generic (PLEG): container finished" podID="6b148346-fcbc-424a-9f33-775948eaf93c" containerID="08368ff8541e38ace511ec8a50d34cf9a3740599437749974d0bd798ef34e6d6" exitCode=0 Apr 16 16:25:19.260001 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.259616 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6b148346-fcbc-424a-9f33-775948eaf93c","Type":"ContainerDied","Data":"08368ff8541e38ace511ec8a50d34cf9a3740599437749974d0bd798ef34e6d6"} Apr 16 16:25:19.261630 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.261609 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-zkjg4" event={"ID":"c646ba1a-2ec3-42be-ad8a-73615ad5d640","Type":"ContainerStarted","Data":"eb773d3d0f8ade55148fb30be487715628e5ae1ea08736c71c36af235d7f5884"} Apr 16 16:25:19.261717 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.261640 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-zkjg4" event={"ID":"c646ba1a-2ec3-42be-ad8a-73615ad5d640","Type":"ContainerStarted","Data":"4e8a26a60722ee182959dbe13bb1b88dd1ef4bf527b05b2a154b0192911676ed"} Apr 16 16:25:19.349796 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.349744 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-zkjg4" podStartSLOduration=2.416506749 podStartE2EDuration="3.349726698s" podCreationTimestamp="2026-04-16 16:25:16 +0000 UTC" firstStartedPulling="2026-04-16 16:25:16.716995634 +0000 UTC m=+73.495842503" lastFinishedPulling="2026-04-16 16:25:17.650215569 +0000 UTC m=+74.429062452" observedRunningTime="2026-04-16 
Apr 16 16:25:19.432203 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.432168 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-84fb6c774c-m7wgx"]
Apr 16 16:25:19.435627 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.435611 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx"
Apr 16 16:25:19.438785 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.438764 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-tls\""
Apr 16 16:25:19.438980 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.438969 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-grpc-tls-farfnblgnl8og\""
Apr 16 16:25:19.439258 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.439220 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-kube-rbac-proxy-web\""
Apr 16 16:25:19.439349 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.439231 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-kube-rbac-proxy\""
Apr 16 16:25:19.439349 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.439233 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-kube-rbac-proxy-metrics\""
Apr 16 16:25:19.439515 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.439501 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-kube-rbac-proxy-rules\""
Apr 16 16:25:19.439560 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.439532 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-dockercfg-5ppwr\""
Apr 16 16:25:19.451968 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.451945 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-84fb6c774c-m7wgx"]
Apr 16 16:25:19.518609 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.518530 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/0391a4f3-73de-49c9-9bec-43a34ad227ad-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-84fb6c774c-m7wgx\" (UID: \"0391a4f3-73de-49c9-9bec-43a34ad227ad\") " pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx"
Apr 16 16:25:19.518609 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.518563 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/0391a4f3-73de-49c9-9bec-43a34ad227ad-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-84fb6c774c-m7wgx\" (UID: \"0391a4f3-73de-49c9-9bec-43a34ad227ad\") " pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx"
Apr 16 16:25:19.518609 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.518598 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/0391a4f3-73de-49c9-9bec-43a34ad227ad-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-84fb6c774c-m7wgx\" (UID: \"0391a4f3-73de-49c9-9bec-43a34ad227ad\") " pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx"
\"kubernetes.io/secret/0391a4f3-73de-49c9-9bec-43a34ad227ad-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-84fb6c774c-m7wgx\" (UID: \"0391a4f3-73de-49c9-9bec-43a34ad227ad\") " pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx" Apr 16 16:25:19.518807 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.518674 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/0391a4f3-73de-49c9-9bec-43a34ad227ad-secret-thanos-querier-tls\") pod \"thanos-querier-84fb6c774c-m7wgx\" (UID: \"0391a4f3-73de-49c9-9bec-43a34ad227ad\") " pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx" Apr 16 16:25:19.518807 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.518704 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/0391a4f3-73de-49c9-9bec-43a34ad227ad-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-84fb6c774c-m7wgx\" (UID: \"0391a4f3-73de-49c9-9bec-43a34ad227ad\") " pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx" Apr 16 16:25:19.518807 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.518728 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0391a4f3-73de-49c9-9bec-43a34ad227ad-metrics-client-ca\") pod \"thanos-querier-84fb6c774c-m7wgx\" (UID: \"0391a4f3-73de-49c9-9bec-43a34ad227ad\") " pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx" Apr 16 16:25:19.518807 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.518759 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/0391a4f3-73de-49c9-9bec-43a34ad227ad-secret-grpc-tls\") pod \"thanos-querier-84fb6c774c-m7wgx\" (UID: \"0391a4f3-73de-49c9-9bec-43a34ad227ad\") " pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx" Apr 16 16:25:19.518807 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.518786 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5shh5\" (UniqueName: \"kubernetes.io/projected/0391a4f3-73de-49c9-9bec-43a34ad227ad-kube-api-access-5shh5\") pod \"thanos-querier-84fb6c774c-m7wgx\" (UID: \"0391a4f3-73de-49c9-9bec-43a34ad227ad\") " pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx" Apr 16 16:25:19.619811 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.619776 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5shh5\" (UniqueName: \"kubernetes.io/projected/0391a4f3-73de-49c9-9bec-43a34ad227ad-kube-api-access-5shh5\") pod \"thanos-querier-84fb6c774c-m7wgx\" (UID: \"0391a4f3-73de-49c9-9bec-43a34ad227ad\") " pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx" Apr 16 16:25:19.620022 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.619831 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/0391a4f3-73de-49c9-9bec-43a34ad227ad-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-84fb6c774c-m7wgx\" (UID: \"0391a4f3-73de-49c9-9bec-43a34ad227ad\") " pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx" Apr 16 16:25:19.620022 ip-10-0-128-173 kubenswrapper[2572]: I0416 
16:25:19.619848 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/0391a4f3-73de-49c9-9bec-43a34ad227ad-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-84fb6c774c-m7wgx\" (UID: \"0391a4f3-73de-49c9-9bec-43a34ad227ad\") " pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx" Apr 16 16:25:19.620022 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.619890 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/0391a4f3-73de-49c9-9bec-43a34ad227ad-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-84fb6c774c-m7wgx\" (UID: \"0391a4f3-73de-49c9-9bec-43a34ad227ad\") " pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx" Apr 16 16:25:19.620022 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.619913 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/0391a4f3-73de-49c9-9bec-43a34ad227ad-secret-thanos-querier-tls\") pod \"thanos-querier-84fb6c774c-m7wgx\" (UID: \"0391a4f3-73de-49c9-9bec-43a34ad227ad\") " pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx" Apr 16 16:25:19.620022 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.619937 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/0391a4f3-73de-49c9-9bec-43a34ad227ad-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-84fb6c774c-m7wgx\" (UID: \"0391a4f3-73de-49c9-9bec-43a34ad227ad\") " pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx" Apr 16 16:25:19.620022 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.619967 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0391a4f3-73de-49c9-9bec-43a34ad227ad-metrics-client-ca\") pod \"thanos-querier-84fb6c774c-m7wgx\" (UID: \"0391a4f3-73de-49c9-9bec-43a34ad227ad\") " pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx" Apr 16 16:25:19.620022 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.619992 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/0391a4f3-73de-49c9-9bec-43a34ad227ad-secret-grpc-tls\") pod \"thanos-querier-84fb6c774c-m7wgx\" (UID: \"0391a4f3-73de-49c9-9bec-43a34ad227ad\") " pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx" Apr 16 16:25:19.620782 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.620749 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0391a4f3-73de-49c9-9bec-43a34ad227ad-metrics-client-ca\") pod \"thanos-querier-84fb6c774c-m7wgx\" (UID: \"0391a4f3-73de-49c9-9bec-43a34ad227ad\") " pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx" Apr 16 16:25:19.622702 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.622652 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/0391a4f3-73de-49c9-9bec-43a34ad227ad-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-84fb6c774c-m7wgx\" (UID: \"0391a4f3-73de-49c9-9bec-43a34ad227ad\") " 
pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx" Apr 16 16:25:19.622702 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.622652 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/0391a4f3-73de-49c9-9bec-43a34ad227ad-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-84fb6c774c-m7wgx\" (UID: \"0391a4f3-73de-49c9-9bec-43a34ad227ad\") " pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx" Apr 16 16:25:19.622917 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.622895 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/0391a4f3-73de-49c9-9bec-43a34ad227ad-secret-thanos-querier-tls\") pod \"thanos-querier-84fb6c774c-m7wgx\" (UID: \"0391a4f3-73de-49c9-9bec-43a34ad227ad\") " pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx" Apr 16 16:25:19.622976 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.622951 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/0391a4f3-73de-49c9-9bec-43a34ad227ad-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-84fb6c774c-m7wgx\" (UID: \"0391a4f3-73de-49c9-9bec-43a34ad227ad\") " pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx" Apr 16 16:25:19.623037 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.623013 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/0391a4f3-73de-49c9-9bec-43a34ad227ad-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-84fb6c774c-m7wgx\" (UID: \"0391a4f3-73de-49c9-9bec-43a34ad227ad\") " pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx" Apr 16 16:25:19.623129 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.623115 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/0391a4f3-73de-49c9-9bec-43a34ad227ad-secret-grpc-tls\") pod \"thanos-querier-84fb6c774c-m7wgx\" (UID: \"0391a4f3-73de-49c9-9bec-43a34ad227ad\") " pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx" Apr 16 16:25:19.629064 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.629045 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5shh5\" (UniqueName: \"kubernetes.io/projected/0391a4f3-73de-49c9-9bec-43a34ad227ad-kube-api-access-5shh5\") pod \"thanos-querier-84fb6c774c-m7wgx\" (UID: \"0391a4f3-73de-49c9-9bec-43a34ad227ad\") " pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx" Apr 16 16:25:19.743949 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.743908 2572 util.go:30] "No sandbox for pod can be found. 
Apr 16 16:25:19.874455 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:19.874423 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-84fb6c774c-m7wgx"]
Apr 16 16:25:19.877187 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:25:19.877156 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0391a4f3_73de_49c9_9bec_43a34ad227ad.slice/crio-0931931859df8da677d0ea84b12284d9723d42646f483f45825ce4bdaf3a958b WatchSource:0}: Error finding container 0931931859df8da677d0ea84b12284d9723d42646f483f45825ce4bdaf3a958b: Status 404 returned error can't find the container with id 0931931859df8da677d0ea84b12284d9723d42646f483f45825ce4bdaf3a958b
Apr 16 16:25:20.266283 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:20.266226 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx" event={"ID":"0391a4f3-73de-49c9-9bec-43a34ad227ad","Type":"ContainerStarted","Data":"0931931859df8da677d0ea84b12284d9723d42646f483f45825ce4bdaf3a958b"}
Apr 16 16:25:20.902782 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:20.902742 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-6b58844877-ctvc6"]
Apr 16 16:25:20.906025 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:20.906004 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-6b58844877-ctvc6"
Apr 16 16:25:20.910218 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:20.910196 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kubelet-serving-ca-bundle\""
Apr 16 16:25:20.910501 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:20.910481 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"metrics-server-client-certs\""
Apr 16 16:25:20.911411 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:20.911386 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"metrics-server-dockercfg-rkn75\""
Apr 16 16:25:20.911535 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:20.911486 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"metrics-server-audit-profiles\""
Apr 16 16:25:20.911605 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:20.911539 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"metrics-server-tls\""
Apr 16 16:25:20.911784 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:20.911766 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"metrics-server-4qoq4q6j3fpfd\""
Apr 16 16:25:20.919762 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:20.919740 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-6b58844877-ctvc6"]
Apr 16 16:25:21.031683 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.031644 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/75c7d8fa-2cc7-4063-97e0-e8029c17e6f8-metrics-server-audit-profiles\") pod \"metrics-server-6b58844877-ctvc6\" (UID: \"75c7d8fa-2cc7-4063-97e0-e8029c17e6f8\") " pod="openshift-monitoring/metrics-server-6b58844877-ctvc6"
Apr 16 16:25:21.031856 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.031708 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75c7d8fa-2cc7-4063-97e0-e8029c17e6f8-client-ca-bundle\") pod \"metrics-server-6b58844877-ctvc6\" (UID: \"75c7d8fa-2cc7-4063-97e0-e8029c17e6f8\") " pod="openshift-monitoring/metrics-server-6b58844877-ctvc6"
Apr 16 16:25:21.031856 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.031801 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/75c7d8fa-2cc7-4063-97e0-e8029c17e6f8-audit-log\") pod \"metrics-server-6b58844877-ctvc6\" (UID: \"75c7d8fa-2cc7-4063-97e0-e8029c17e6f8\") " pod="openshift-monitoring/metrics-server-6b58844877-ctvc6"
Apr 16 16:25:21.031856 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.031837 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqd54\" (UniqueName: \"kubernetes.io/projected/75c7d8fa-2cc7-4063-97e0-e8029c17e6f8-kube-api-access-mqd54\") pod \"metrics-server-6b58844877-ctvc6\" (UID: \"75c7d8fa-2cc7-4063-97e0-e8029c17e6f8\") " pod="openshift-monitoring/metrics-server-6b58844877-ctvc6"
Apr 16 16:25:21.032010 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.031875 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-client-certs\" (UniqueName: \"kubernetes.io/secret/75c7d8fa-2cc7-4063-97e0-e8029c17e6f8-secret-metrics-server-client-certs\") pod \"metrics-server-6b58844877-ctvc6\" (UID: \"75c7d8fa-2cc7-4063-97e0-e8029c17e6f8\") " pod="openshift-monitoring/metrics-server-6b58844877-ctvc6"
Apr 16 16:25:21.032010 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.031914 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/75c7d8fa-2cc7-4063-97e0-e8029c17e6f8-secret-metrics-server-tls\") pod \"metrics-server-6b58844877-ctvc6\" (UID: \"75c7d8fa-2cc7-4063-97e0-e8029c17e6f8\") " pod="openshift-monitoring/metrics-server-6b58844877-ctvc6"
Apr 16 16:25:21.032010 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.031940 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75c7d8fa-2cc7-4063-97e0-e8029c17e6f8-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6b58844877-ctvc6\" (UID: \"75c7d8fa-2cc7-4063-97e0-e8029c17e6f8\") " pod="openshift-monitoring/metrics-server-6b58844877-ctvc6"
Apr 16 16:25:21.038538 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.038512 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-5876b4bbc7-jtfcf"]
Apr 16 16:25:21.041794 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.041776 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-5876b4bbc7-jtfcf"
Apr 16 16:25:21.045755 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.045734 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"default-dockercfg-pmqbg\""
Apr 16 16:25:21.045869 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.045775 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"monitoring-plugin-cert\""
Apr 16 16:25:21.056500 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.056473 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-5876b4bbc7-jtfcf"]
Apr 16 16:25:21.132685 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.132662 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/75c7d8fa-2cc7-4063-97e0-e8029c17e6f8-audit-log\") pod \"metrics-server-6b58844877-ctvc6\" (UID: \"75c7d8fa-2cc7-4063-97e0-e8029c17e6f8\") " pod="openshift-monitoring/metrics-server-6b58844877-ctvc6"
Apr 16 16:25:21.132810 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.132698 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mqd54\" (UniqueName: \"kubernetes.io/projected/75c7d8fa-2cc7-4063-97e0-e8029c17e6f8-kube-api-access-mqd54\") pod \"metrics-server-6b58844877-ctvc6\" (UID: \"75c7d8fa-2cc7-4063-97e0-e8029c17e6f8\") " pod="openshift-monitoring/metrics-server-6b58844877-ctvc6"
Apr 16 16:25:21.132810 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.132723 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-metrics-server-client-certs\" (UniqueName: \"kubernetes.io/secret/75c7d8fa-2cc7-4063-97e0-e8029c17e6f8-secret-metrics-server-client-certs\") pod \"metrics-server-6b58844877-ctvc6\" (UID: \"75c7d8fa-2cc7-4063-97e0-e8029c17e6f8\") " pod="openshift-monitoring/metrics-server-6b58844877-ctvc6"
Apr 16 16:25:21.132810 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.132762 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/75c7d8fa-2cc7-4063-97e0-e8029c17e6f8-secret-metrics-server-tls\") pod \"metrics-server-6b58844877-ctvc6\" (UID: \"75c7d8fa-2cc7-4063-97e0-e8029c17e6f8\") " pod="openshift-monitoring/metrics-server-6b58844877-ctvc6"
Apr 16 16:25:21.132810 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.132787 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75c7d8fa-2cc7-4063-97e0-e8029c17e6f8-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6b58844877-ctvc6\" (UID: \"75c7d8fa-2cc7-4063-97e0-e8029c17e6f8\") " pod="openshift-monitoring/metrics-server-6b58844877-ctvc6"
Apr 16 16:25:21.133015 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.132971 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/75c7d8fa-2cc7-4063-97e0-e8029c17e6f8-metrics-server-audit-profiles\") pod \"metrics-server-6b58844877-ctvc6\" (UID: \"75c7d8fa-2cc7-4063-97e0-e8029c17e6f8\") " pod="openshift-monitoring/metrics-server-6b58844877-ctvc6"
Apr 16 16:25:21.133088 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.133016 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/1337a924-e282-4dfc-81f7-6bc7a3e4f272-monitoring-plugin-cert\") pod \"monitoring-plugin-5876b4bbc7-jtfcf\" (UID: \"1337a924-e282-4dfc-81f7-6bc7a3e4f272\") " pod="openshift-monitoring/monitoring-plugin-5876b4bbc7-jtfcf"
Apr 16 16:25:21.133088 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.133062 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/75c7d8fa-2cc7-4063-97e0-e8029c17e6f8-audit-log\") pod \"metrics-server-6b58844877-ctvc6\" (UID: \"75c7d8fa-2cc7-4063-97e0-e8029c17e6f8\") " pod="openshift-monitoring/metrics-server-6b58844877-ctvc6"
Apr 16 16:25:21.133088 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.133063 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75c7d8fa-2cc7-4063-97e0-e8029c17e6f8-client-ca-bundle\") pod \"metrics-server-6b58844877-ctvc6\" (UID: \"75c7d8fa-2cc7-4063-97e0-e8029c17e6f8\") " pod="openshift-monitoring/metrics-server-6b58844877-ctvc6"
Apr 16 16:25:21.134313 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.134173 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75c7d8fa-2cc7-4063-97e0-e8029c17e6f8-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6b58844877-ctvc6\" (UID: \"75c7d8fa-2cc7-4063-97e0-e8029c17e6f8\") " pod="openshift-monitoring/metrics-server-6b58844877-ctvc6"
Apr 16 16:25:21.134755 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.134731 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/75c7d8fa-2cc7-4063-97e0-e8029c17e6f8-metrics-server-audit-profiles\") pod \"metrics-server-6b58844877-ctvc6\" (UID: \"75c7d8fa-2cc7-4063-97e0-e8029c17e6f8\") " pod="openshift-monitoring/metrics-server-6b58844877-ctvc6"
Apr 16 16:25:21.135472 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.135451 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-client-certs\" (UniqueName: \"kubernetes.io/secret/75c7d8fa-2cc7-4063-97e0-e8029c17e6f8-secret-metrics-server-client-certs\") pod \"metrics-server-6b58844877-ctvc6\" (UID: \"75c7d8fa-2cc7-4063-97e0-e8029c17e6f8\") " pod="openshift-monitoring/metrics-server-6b58844877-ctvc6"
Apr 16 16:25:21.135951 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.135934 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/75c7d8fa-2cc7-4063-97e0-e8029c17e6f8-secret-metrics-server-tls\") pod \"metrics-server-6b58844877-ctvc6\" (UID: \"75c7d8fa-2cc7-4063-97e0-e8029c17e6f8\") " pod="openshift-monitoring/metrics-server-6b58844877-ctvc6"
Apr 16 16:25:21.136460 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.136443 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75c7d8fa-2cc7-4063-97e0-e8029c17e6f8-client-ca-bundle\") pod \"metrics-server-6b58844877-ctvc6\" (UID: \"75c7d8fa-2cc7-4063-97e0-e8029c17e6f8\") " pod="openshift-monitoring/metrics-server-6b58844877-ctvc6"
Apr 16 16:25:21.142062 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.142039 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqd54\" (UniqueName: \"kubernetes.io/projected/75c7d8fa-2cc7-4063-97e0-e8029c17e6f8-kube-api-access-mqd54\") pod \"metrics-server-6b58844877-ctvc6\" (UID: \"75c7d8fa-2cc7-4063-97e0-e8029c17e6f8\") " pod="openshift-monitoring/metrics-server-6b58844877-ctvc6"
\"kube-api-access-mqd54\" (UniqueName: \"kubernetes.io/projected/75c7d8fa-2cc7-4063-97e0-e8029c17e6f8-kube-api-access-mqd54\") pod \"metrics-server-6b58844877-ctvc6\" (UID: \"75c7d8fa-2cc7-4063-97e0-e8029c17e6f8\") " pod="openshift-monitoring/metrics-server-6b58844877-ctvc6" Apr 16 16:25:21.216971 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.216945 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-6b58844877-ctvc6" Apr 16 16:25:21.233747 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.233717 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/1337a924-e282-4dfc-81f7-6bc7a3e4f272-monitoring-plugin-cert\") pod \"monitoring-plugin-5876b4bbc7-jtfcf\" (UID: \"1337a924-e282-4dfc-81f7-6bc7a3e4f272\") " pod="openshift-monitoring/monitoring-plugin-5876b4bbc7-jtfcf" Apr 16 16:25:21.236215 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.236147 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/1337a924-e282-4dfc-81f7-6bc7a3e4f272-monitoring-plugin-cert\") pod \"monitoring-plugin-5876b4bbc7-jtfcf\" (UID: \"1337a924-e282-4dfc-81f7-6bc7a3e4f272\") " pod="openshift-monitoring/monitoring-plugin-5876b4bbc7-jtfcf" Apr 16 16:25:21.272894 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.272813 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6b148346-fcbc-424a-9f33-775948eaf93c","Type":"ContainerStarted","Data":"1d23ced370341e24be71d261735fe889fa8edcb5169c0ef0d739c47d3256b3b3"} Apr 16 16:25:21.272894 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.272862 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6b148346-fcbc-424a-9f33-775948eaf93c","Type":"ContainerStarted","Data":"73061322897b6e036892f138e96cd48ccaabe927a8c91db6230f19e6e43d025d"} Apr 16 16:25:21.352341 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.352312 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-5876b4bbc7-jtfcf" Apr 16 16:25:21.373054 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.373006 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-6b58844877-ctvc6"] Apr 16 16:25:21.374216 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:25:21.374175 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75c7d8fa_2cc7_4063_97e0_e8029c17e6f8.slice/crio-f42156724d93c315ddf65b875d88f7bcf5111ef5978af1d9f69b85b57d4a6b8e WatchSource:0}: Error finding container f42156724d93c315ddf65b875d88f7bcf5111ef5978af1d9f69b85b57d4a6b8e: Status 404 returned error can't find the container with id f42156724d93c315ddf65b875d88f7bcf5111ef5978af1d9f69b85b57d4a6b8e Apr 16 16:25:21.486024 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.485989 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-5876b4bbc7-jtfcf"] Apr 16 16:25:21.766831 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.766775 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-7d7bd87b54-m7m26"] Apr 16 16:25:21.771492 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.771469 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-7d7bd87b54-m7m26" Apr 16 16:25:21.775791 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.775767 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"telemeter-client-tls\"" Apr 16 16:25:21.775931 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.775858 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"telemeter-client-serving-certs-ca-bundle\"" Apr 16 16:25:21.776000 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.775935 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"telemeter-client-kube-rbac-proxy-config\"" Apr 16 16:25:21.776191 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.776170 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"telemeter-client\"" Apr 16 16:25:21.776590 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.776570 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"federate-client-certs\"" Apr 16 16:25:21.776672 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.776605 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"telemeter-client-dockercfg-8tw27\"" Apr 16 16:25:21.781853 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.781835 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"telemeter-trusted-ca-bundle-8i12ta5c71j38\"" Apr 16 16:25:21.792375 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.792346 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-7d7bd87b54-m7m26"] Apr 16 16:25:21.903559 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:25:21.903526 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1337a924_e282_4dfc_81f7_6bc7a3e4f272.slice/crio-2932f959f30a94c775cb764c858eff1f6e3b4dd5e1537f9c3f14b41477e4bc11 WatchSource:0}: Error finding container 2932f959f30a94c775cb764c858eff1f6e3b4dd5e1537f9c3f14b41477e4bc11: Status 404 returned error can't find the container with id 2932f959f30a94c775cb764c858eff1f6e3b4dd5e1537f9c3f14b41477e4bc11 Apr 16 16:25:21.939222 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.939195 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrz7f\" (UniqueName: \"kubernetes.io/projected/f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50-kube-api-access-nrz7f\") pod \"telemeter-client-7d7bd87b54-m7m26\" (UID: \"f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50\") " pod="openshift-monitoring/telemeter-client-7d7bd87b54-m7m26" Apr 16 16:25:21.939381 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.939303 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50-serving-certs-ca-bundle\") pod \"telemeter-client-7d7bd87b54-m7m26\" (UID: \"f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50\") " pod="openshift-monitoring/telemeter-client-7d7bd87b54-m7m26" Apr 16 16:25:21.939452 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.939384 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: 
\"kubernetes.io/secret/f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50-federate-client-tls\") pod \"telemeter-client-7d7bd87b54-m7m26\" (UID: \"f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50\") " pod="openshift-monitoring/telemeter-client-7d7bd87b54-m7m26" Apr 16 16:25:21.939452 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.939424 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50-secret-telemeter-client\") pod \"telemeter-client-7d7bd87b54-m7m26\" (UID: \"f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50\") " pod="openshift-monitoring/telemeter-client-7d7bd87b54-m7m26" Apr 16 16:25:21.939554 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.939467 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50-telemeter-client-tls\") pod \"telemeter-client-7d7bd87b54-m7m26\" (UID: \"f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50\") " pod="openshift-monitoring/telemeter-client-7d7bd87b54-m7m26" Apr 16 16:25:21.939554 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.939499 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50-metrics-client-ca\") pod \"telemeter-client-7d7bd87b54-m7m26\" (UID: \"f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50\") " pod="openshift-monitoring/telemeter-client-7d7bd87b54-m7m26" Apr 16 16:25:21.939554 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.939522 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50-telemeter-trusted-ca-bundle\") pod \"telemeter-client-7d7bd87b54-m7m26\" (UID: \"f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50\") " pod="openshift-monitoring/telemeter-client-7d7bd87b54-m7m26" Apr 16 16:25:21.939677 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:21.939586 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-7d7bd87b54-m7m26\" (UID: \"f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50\") " pod="openshift-monitoring/telemeter-client-7d7bd87b54-m7m26" Apr 16 16:25:22.041516 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:22.041484 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-7d7bd87b54-m7m26\" (UID: \"f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50\") " pod="openshift-monitoring/telemeter-client-7d7bd87b54-m7m26" Apr 16 16:25:22.041667 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:22.041549 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nrz7f\" (UniqueName: \"kubernetes.io/projected/f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50-kube-api-access-nrz7f\") pod \"telemeter-client-7d7bd87b54-m7m26\" (UID: \"f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50\") " pod="openshift-monitoring/telemeter-client-7d7bd87b54-m7m26" Apr 16 16:25:22.041667 
ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:22.041590 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50-serving-certs-ca-bundle\") pod \"telemeter-client-7d7bd87b54-m7m26\" (UID: \"f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50\") " pod="openshift-monitoring/telemeter-client-7d7bd87b54-m7m26" Apr 16 16:25:22.041667 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:22.041631 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50-federate-client-tls\") pod \"telemeter-client-7d7bd87b54-m7m26\" (UID: \"f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50\") " pod="openshift-monitoring/telemeter-client-7d7bd87b54-m7m26" Apr 16 16:25:22.041667 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:22.041651 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50-secret-telemeter-client\") pod \"telemeter-client-7d7bd87b54-m7m26\" (UID: \"f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50\") " pod="openshift-monitoring/telemeter-client-7d7bd87b54-m7m26" Apr 16 16:25:22.041863 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:22.041676 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50-telemeter-client-tls\") pod \"telemeter-client-7d7bd87b54-m7m26\" (UID: \"f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50\") " pod="openshift-monitoring/telemeter-client-7d7bd87b54-m7m26" Apr 16 16:25:22.041863 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:22.041696 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50-metrics-client-ca\") pod \"telemeter-client-7d7bd87b54-m7m26\" (UID: \"f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50\") " pod="openshift-monitoring/telemeter-client-7d7bd87b54-m7m26" Apr 16 16:25:22.041863 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:22.041712 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50-telemeter-trusted-ca-bundle\") pod \"telemeter-client-7d7bd87b54-m7m26\" (UID: \"f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50\") " pod="openshift-monitoring/telemeter-client-7d7bd87b54-m7m26" Apr 16 16:25:22.042630 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:22.042586 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50-serving-certs-ca-bundle\") pod \"telemeter-client-7d7bd87b54-m7m26\" (UID: \"f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50\") " pod="openshift-monitoring/telemeter-client-7d7bd87b54-m7m26" Apr 16 16:25:22.042730 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:22.042666 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50-metrics-client-ca\") pod \"telemeter-client-7d7bd87b54-m7m26\" (UID: \"f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50\") " pod="openshift-monitoring/telemeter-client-7d7bd87b54-m7m26" Apr 16 16:25:22.042771 ip-10-0-128-173 
Apr 16 16:25:22.044507 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:22.044482 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50-telemeter-client-tls\") pod \"telemeter-client-7d7bd87b54-m7m26\" (UID: \"f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50\") " pod="openshift-monitoring/telemeter-client-7d7bd87b54-m7m26"
Apr 16 16:25:22.044679 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:22.044660 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50-secret-telemeter-client\") pod \"telemeter-client-7d7bd87b54-m7m26\" (UID: \"f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50\") " pod="openshift-monitoring/telemeter-client-7d7bd87b54-m7m26"
Apr 16 16:25:22.044742 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:22.044728 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-7d7bd87b54-m7m26\" (UID: \"f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50\") " pod="openshift-monitoring/telemeter-client-7d7bd87b54-m7m26"
Apr 16 16:25:22.044777 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:22.044734 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50-federate-client-tls\") pod \"telemeter-client-7d7bd87b54-m7m26\" (UID: \"f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50\") " pod="openshift-monitoring/telemeter-client-7d7bd87b54-m7m26"
Apr 16 16:25:22.052147 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:22.052120 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrz7f\" (UniqueName: \"kubernetes.io/projected/f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50-kube-api-access-nrz7f\") pod \"telemeter-client-7d7bd87b54-m7m26\" (UID: \"f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50\") " pod="openshift-monitoring/telemeter-client-7d7bd87b54-m7m26"
Apr 16 16:25:22.082496 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:22.082467 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-7d7bd87b54-m7m26"
Apr 16 16:25:22.219373 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:22.219337 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-7d7bd87b54-m7m26"]
Apr 16 16:25:22.224303 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:25:22.224272 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6c3ce1f_b585_4b92_aafe_f9b9a61f8a50.slice/crio-c484a4be06d3a11062a5b3b1de1f67a189fc518f6a48a10cd9f4b1a7949f3e98 WatchSource:0}: Error finding container c484a4be06d3a11062a5b3b1de1f67a189fc518f6a48a10cd9f4b1a7949f3e98: Status 404 returned error can't find the container with id c484a4be06d3a11062a5b3b1de1f67a189fc518f6a48a10cd9f4b1a7949f3e98
Apr 16 16:25:22.285147 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:22.285048 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6b148346-fcbc-424a-9f33-775948eaf93c","Type":"ContainerStarted","Data":"581c1c2aa43aed0778ef88c9f2cfa8ad9ab6414347c26d2e86bf585bac456776"}
Apr 16 16:25:22.285147 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:22.285095 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6b148346-fcbc-424a-9f33-775948eaf93c","Type":"ContainerStarted","Data":"5eabbe94b11065817c0dd597bf3ae5a786747925720163591be698c4c8200fa9"}
Apr 16 16:25:22.285147 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:22.285110 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6b148346-fcbc-424a-9f33-775948eaf93c","Type":"ContainerStarted","Data":"77308459dba15a4e79482ab7ca159b83185f2e596bcb9fc5be1af974b5cce3f0"}
Apr 16 16:25:22.286376 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:22.286347 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-5876b4bbc7-jtfcf" event={"ID":"1337a924-e282-4dfc-81f7-6bc7a3e4f272","Type":"ContainerStarted","Data":"2932f959f30a94c775cb764c858eff1f6e3b4dd5e1537f9c3f14b41477e4bc11"}
Apr 16 16:25:22.287454 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:22.287419 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-6b58844877-ctvc6" event={"ID":"75c7d8fa-2cc7-4063-97e0-e8029c17e6f8","Type":"ContainerStarted","Data":"f42156724d93c315ddf65b875d88f7bcf5111ef5978af1d9f69b85b57d4a6b8e"}
Apr 16 16:25:22.288599 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:22.288563 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7d7bd87b54-m7m26" event={"ID":"f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50","Type":"ContainerStarted","Data":"c484a4be06d3a11062a5b3b1de1f67a189fc518f6a48a10cd9f4b1a7949f3e98"}
Apr 16 16:25:22.290498 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:22.290473 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx" event={"ID":"0391a4f3-73de-49c9-9bec-43a34ad227ad","Type":"ContainerStarted","Data":"12ba26929efee9fe2149ef10eff774f33443dd699cbb7a2c16b9221c383ac7a3"}
Apr 16 16:25:22.290624 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:22.290503 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx" event={"ID":"0391a4f3-73de-49c9-9bec-43a34ad227ad","Type":"ContainerStarted","Data":"fc6216c5ec764dfbb7139db25cc6ac426fe7f47638fa867aa37d59f61aa7681c"}
Apr 16 16:25:22.290624 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:22.290517 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx" event={"ID":"0391a4f3-73de-49c9-9bec-43a34ad227ad","Type":"ContainerStarted","Data":"21e46af2d946f9b794d78dcff9baf0df7db82f8e3cff742b22f3a66026898687"}
Apr 16 16:25:24.300948 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:24.300893 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6b148346-fcbc-424a-9f33-775948eaf93c","Type":"ContainerStarted","Data":"439c464f283010147f23e7a4ddbc9122e9917bd7742f5a89441bd99743e19a4c"}
Apr 16 16:25:24.302710 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:24.302666 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-5876b4bbc7-jtfcf" event={"ID":"1337a924-e282-4dfc-81f7-6bc7a3e4f272","Type":"ContainerStarted","Data":"d1d37b3fc14f31a9a526b6d00411b8a69a3859bc4abd9841bfab6281efdab25c"}
Apr 16 16:25:24.303043 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:24.303018 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-monitoring/monitoring-plugin-5876b4bbc7-jtfcf"
Apr 16 16:25:24.304318 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:24.304295 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-6b58844877-ctvc6" event={"ID":"75c7d8fa-2cc7-4063-97e0-e8029c17e6f8","Type":"ContainerStarted","Data":"cd4a96d714f14d496924507db284b10cf57794ebe848a8365074be5a1fc3dd62"}
Apr 16 16:25:24.307765 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:24.307741 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx" event={"ID":"0391a4f3-73de-49c9-9bec-43a34ad227ad","Type":"ContainerStarted","Data":"fd283108a58476f00d696a4a6129e187f1f773738c4d08d69722d0b2985e40ac"}
Apr 16 16:25:24.307884 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:24.307774 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx" event={"ID":"0391a4f3-73de-49c9-9bec-43a34ad227ad","Type":"ContainerStarted","Data":"9d56c7ba56e2f512aa36c7adea388c6ac2eebeddb879f419b369ea5de4e50871"}
Apr 16 16:25:24.307884 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:24.307797 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx" event={"ID":"0391a4f3-73de-49c9-9bec-43a34ad227ad","Type":"ContainerStarted","Data":"616c98ac4870820ba3a9b6c03a9343bffb34cc41594b9404ccdc73338926aaa4"}
Apr 16 16:25:24.308236 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:24.308210 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx"
Apr 16 16:25:24.309514 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:24.309495 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-5876b4bbc7-jtfcf"
Apr 16 16:25:24.344221 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:24.344171 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=1.493918932 podStartE2EDuration="7.344154115s" podCreationTimestamp="2026-04-16 16:25:17 +0000 UTC" firstStartedPulling="2026-04-16 16:25:17.866933596 +0000 UTC m=+74.645780465" lastFinishedPulling="2026-04-16 16:25:23.717168777 +0000 UTC m=+80.496015648" observedRunningTime="2026-04-16 16:25:24.339170084 +0000 UTC m=+81.118016976" watchObservedRunningTime="2026-04-16 16:25:24.344154115 +0000 UTC m=+81.123001005"
Apr 16 16:25:24.417404 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:24.417345 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx" podStartSLOduration=1.579152324 podStartE2EDuration="5.417324167s" podCreationTimestamp="2026-04-16 16:25:19 +0000 UTC" firstStartedPulling="2026-04-16 16:25:19.87917855 +0000 UTC m=+76.658025432" lastFinishedPulling="2026-04-16 16:25:23.717350406 +0000 UTC m=+80.496197275" observedRunningTime="2026-04-16 16:25:24.416866805 +0000 UTC m=+81.195713697" watchObservedRunningTime="2026-04-16 16:25:24.417324167 +0000 UTC m=+81.196171061"
Apr 16 16:25:24.417619 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:24.417587 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-5876b4bbc7-jtfcf" podStartSLOduration=1.602043712 podStartE2EDuration="3.417579405s" podCreationTimestamp="2026-04-16 16:25:21 +0000 UTC" firstStartedPulling="2026-04-16 16:25:21.905383136 +0000 UTC m=+78.684230005" lastFinishedPulling="2026-04-16 16:25:23.720918818 +0000 UTC m=+80.499765698" observedRunningTime="2026-04-16 16:25:24.368180161 +0000 UTC m=+81.147027053" watchObservedRunningTime="2026-04-16 16:25:24.417579405 +0000 UTC m=+81.196426300"
Apr 16 16:25:24.452950 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:24.452885 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-6b58844877-ctvc6" podStartSLOduration=2.112211491 podStartE2EDuration="4.452862238s" podCreationTimestamp="2026-04-16 16:25:20 +0000 UTC" firstStartedPulling="2026-04-16 16:25:21.376519406 +0000 UTC m=+78.155366276" lastFinishedPulling="2026-04-16 16:25:23.717170141 +0000 UTC m=+80.496017023" observedRunningTime="2026-04-16 16:25:24.451102719 +0000 UTC m=+81.229949636" watchObservedRunningTime="2026-04-16 16:25:24.452862238 +0000 UTC m=+81.231709138"
Apr 16 16:25:25.312837 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:25.312794 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7d7bd87b54-m7m26" event={"ID":"f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50","Type":"ContainerStarted","Data":"93e6cdfb892a0e93fc3e3758b854a337a0f33505859a3f2fa89ae0b0c673752c"}
Apr 16 16:25:25.312837 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:25.312836 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7d7bd87b54-m7m26" event={"ID":"f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50","Type":"ContainerStarted","Data":"7d661eac598e73ecc7c3d3c15e24a67350d9c2cf495044a7203c8af97781fa97"}
Apr 16 16:25:25.312837 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:25.312846 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7d7bd87b54-m7m26" event={"ID":"f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50","Type":"ContainerStarted","Data":"1b351726df3177392538978adc8d2a656f8eb23f4e6a6e2286714129f9698900"}
Apr 16 16:25:25.342947 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:25.342889 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-7d7bd87b54-m7m26" podStartSLOduration=2.051353365 podStartE2EDuration="4.342869061s" podCreationTimestamp="2026-04-16 16:25:21 +0000 UTC" firstStartedPulling="2026-04-16 16:25:22.226505482 +0000 UTC m=+79.005352351" lastFinishedPulling="2026-04-16 16:25:24.518021174 +0000 UTC m=+81.296868047" observedRunningTime="2026-04-16 16:25:25.342516934 +0000 UTC m=+82.121363817" watchObservedRunningTime="2026-04-16 16:25:25.342869061 +0000 UTC m=+82.121715953"
Apr 16 16:25:25.998665 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:25.998631 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-69c4855bf4-86bm7"]
Apr 16 16:25:26.001355 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:26.001338 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-69c4855bf4-86bm7"
Apr 16 16:25:26.004553 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:26.004534 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Apr 16 16:25:26.005515 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:26.005490 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\""
Apr 16 16:25:26.005620 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:26.005496 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Apr 16 16:25:26.005735 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:26.005718 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Apr 16 16:25:26.005834 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:26.005816 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Apr 16 16:25:26.005888 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:26.005846 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-cpqfd\""
Apr 16 16:25:26.006284 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:26.006267 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Apr 16 16:25:26.006364 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:26.006274 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Apr 16 16:25:26.010326 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:26.010308 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\""
Apr 16 16:25:26.015161 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:26.015140 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69c4855bf4-86bm7"]
Apr 16 16:25:26.181749 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:26.181710 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-console-serving-cert\") pod \"console-69c4855bf4-86bm7\" (UID: \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\") " pod="openshift-console/console-69c4855bf4-86bm7"
Apr 16 16:25:26.181749 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:26.181750 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkmz2\" (UniqueName: \"kubernetes.io/projected/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-kube-api-access-mkmz2\") pod \"console-69c4855bf4-86bm7\" (UID: \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\") " pod="openshift-console/console-69c4855bf4-86bm7"
Apr 16 16:25:26.181977 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:26.181805 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-trusted-ca-bundle\") pod \"console-69c4855bf4-86bm7\" (UID: \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\") " pod="openshift-console/console-69c4855bf4-86bm7"
Apr 16 16:25:26.181977 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:26.181873 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-service-ca\") pod \"console-69c4855bf4-86bm7\" (UID: \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\") " pod="openshift-console/console-69c4855bf4-86bm7"
Apr 16 16:25:26.181977 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:26.181902 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-oauth-serving-cert\") pod \"console-69c4855bf4-86bm7\" (UID: \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\") " pod="openshift-console/console-69c4855bf4-86bm7"
Apr 16 16:25:26.181977 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:26.181966 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-console-oauth-config\") pod \"console-69c4855bf4-86bm7\" (UID: \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\") " pod="openshift-console/console-69c4855bf4-86bm7"
Apr 16 16:25:26.182118 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:26.181995 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-console-config\") pod \"console-69c4855bf4-86bm7\" (UID: \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\") " pod="openshift-console/console-69c4855bf4-86bm7"
Apr 16 16:25:26.282848 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:26.282758 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-console-serving-cert\") pod \"console-69c4855bf4-86bm7\" (UID: \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\") " pod="openshift-console/console-69c4855bf4-86bm7"
Apr 16 16:25:26.282848 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:26.282800 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mkmz2\" (UniqueName: \"kubernetes.io/projected/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-kube-api-access-mkmz2\") pod \"console-69c4855bf4-86bm7\" (UID: \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\") " pod="openshift-console/console-69c4855bf4-86bm7"
Apr 16 16:25:26.282848 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:26.282819 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-trusted-ca-bundle\") pod \"console-69c4855bf4-86bm7\" (UID: \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\") " pod="openshift-console/console-69c4855bf4-86bm7"
Apr 16 16:25:26.282848 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:26.282840 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-service-ca\") pod \"console-69c4855bf4-86bm7\" (UID: \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\") " pod="openshift-console/console-69c4855bf4-86bm7"
Apr 16 16:25:26.283137 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:26.282866 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-oauth-serving-cert\") pod \"console-69c4855bf4-86bm7\" (UID: \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\") " pod="openshift-console/console-69c4855bf4-86bm7"
Apr 16 16:25:26.283137 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:26.282943 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-console-oauth-config\") pod \"console-69c4855bf4-86bm7\" (UID: \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\") " pod="openshift-console/console-69c4855bf4-86bm7"
Apr 16 16:25:26.283137 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:26.282988 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-console-config\") pod \"console-69c4855bf4-86bm7\" (UID: \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\") " pod="openshift-console/console-69c4855bf4-86bm7"
Apr 16 16:25:26.283642 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:26.283617 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-service-ca\") pod \"console-69c4855bf4-86bm7\" (UID: \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\") " pod="openshift-console/console-69c4855bf4-86bm7"
Apr 16 16:25:26.283754 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:26.283709 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-oauth-serving-cert\") pod \"console-69c4855bf4-86bm7\" (UID: \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\") " pod="openshift-console/console-69c4855bf4-86bm7"
Apr 16 16:25:26.283809 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:26.283789 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-console-config\") pod \"console-69c4855bf4-86bm7\" (UID: \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\") " pod="openshift-console/console-69c4855bf4-86bm7"
Apr 16 16:25:26.283850 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:26.283830 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-trusted-ca-bundle\") pod \"console-69c4855bf4-86bm7\" (UID: \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\") " pod="openshift-console/console-69c4855bf4-86bm7"
Apr 16 16:25:26.285258 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:26.285231 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-console-serving-cert\") pod \"console-69c4855bf4-86bm7\" (UID: \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\") " pod="openshift-console/console-69c4855bf4-86bm7"
Apr 16 16:25:26.285421 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:26.285399 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-console-oauth-config\") pod \"console-69c4855bf4-86bm7\" (UID: \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\") " pod="openshift-console/console-69c4855bf4-86bm7"
Apr 16 16:25:26.296219 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:26.296190 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkmz2\" (UniqueName: \"kubernetes.io/projected/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-kube-api-access-mkmz2\") pod \"console-69c4855bf4-86bm7\" (UID: \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\") " pod="openshift-console/console-69c4855bf4-86bm7"
Apr 16 16:25:26.311936 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:26.311915 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-69c4855bf4-86bm7"
Apr 16 16:25:26.455238 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:26.455207 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69c4855bf4-86bm7"]
Apr 16 16:25:26.460427 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:25:26.460397 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04bd9dae_a6a6_4047_aa10_31f6c792f4e2.slice/crio-a82a7a6e26ea95939bfdf58410f01555708092d301ae986e1738a02ac696fb87 WatchSource:0}: Error finding container a82a7a6e26ea95939bfdf58410f01555708092d301ae986e1738a02ac696fb87: Status 404 returned error can't find the container with id a82a7a6e26ea95939bfdf58410f01555708092d301ae986e1738a02ac696fb87
Apr 16 16:25:27.320208 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:27.320173 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69c4855bf4-86bm7" event={"ID":"04bd9dae-a6a6-4047-aa10-31f6c792f4e2","Type":"ContainerStarted","Data":"a82a7a6e26ea95939bfdf58410f01555708092d301ae986e1738a02ac696fb87"}
Apr 16 16:25:29.328565 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:29.328529 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69c4855bf4-86bm7" event={"ID":"04bd9dae-a6a6-4047-aa10-31f6c792f4e2","Type":"ContainerStarted","Data":"de6e75cde5558ac5c1012bdb1b3080d1eeee550ae2a6e54bb6eff225788a499c"}
Apr 16 16:25:29.351290 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:29.350477 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-69c4855bf4-86bm7" podStartSLOduration=1.580106132 podStartE2EDuration="4.350456525s" podCreationTimestamp="2026-04-16 16:25:25 +0000 UTC" firstStartedPulling="2026-04-16 16:25:26.462349485 +0000 UTC m=+83.241196354" lastFinishedPulling="2026-04-16 16:25:29.232699879 +0000 UTC m=+86.011546747" observedRunningTime="2026-04-16 16:25:29.349550757 +0000 UTC m=+86.128397653" watchObservedRunningTime="2026-04-16 16:25:29.350456525 +0000 UTC m=+86.129303417"
Apr 16 16:25:30.320675 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:30.320648 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-84fb6c774c-m7wgx"
Apr 16 16:25:36.312269 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:36.312203 2572 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-69c4855bf4-86bm7"
Apr 16 16:25:36.312269 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:36.312280 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-69c4855bf4-86bm7"
Apr 16 16:25:36.317271 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:36.317225 2572 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-69c4855bf4-86bm7"
Apr 16 16:25:36.358440 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:36.358415 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-69c4855bf4-86bm7"
Apr 16 16:25:41.217995 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:41.217963 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-monitoring/metrics-server-6b58844877-ctvc6"
Apr 16 16:25:41.218380 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:41.218004 2572 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-6b58844877-ctvc6"
Apr 16 16:25:44.241621 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:25:44.241586 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-225wk"
Apr 16 16:26:01.223185 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:26:01.223157 2572 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-6b58844877-ctvc6"
Apr 16 16:26:01.226986 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:26:01.226959 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-6b58844877-ctvc6"
Apr 16 16:26:51.846777 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:26:51.846697 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-84f5c68c96-xgd7v"]
Apr 16 16:26:51.848641 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:26:51.848622 2572 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-console/console-84f5c68c96-xgd7v" Apr 16 16:26:51.859575 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:26:51.859549 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-84f5c68c96-xgd7v"] Apr 16 16:26:51.898095 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:26:51.898060 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c138daf-8559-490a-ba08-2c12c9f3ef23-console-serving-cert\") pod \"console-84f5c68c96-xgd7v\" (UID: \"5c138daf-8559-490a-ba08-2c12c9f3ef23\") " pod="openshift-console/console-84f5c68c96-xgd7v" Apr 16 16:26:51.898216 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:26:51.898101 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5c138daf-8559-490a-ba08-2c12c9f3ef23-console-config\") pod \"console-84f5c68c96-xgd7v\" (UID: \"5c138daf-8559-490a-ba08-2c12c9f3ef23\") " pod="openshift-console/console-84f5c68c96-xgd7v" Apr 16 16:26:51.898216 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:26:51.898153 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5c138daf-8559-490a-ba08-2c12c9f3ef23-service-ca\") pod \"console-84f5c68c96-xgd7v\" (UID: \"5c138daf-8559-490a-ba08-2c12c9f3ef23\") " pod="openshift-console/console-84f5c68c96-xgd7v" Apr 16 16:26:51.898216 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:26:51.898188 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c138daf-8559-490a-ba08-2c12c9f3ef23-trusted-ca-bundle\") pod \"console-84f5c68c96-xgd7v\" (UID: \"5c138daf-8559-490a-ba08-2c12c9f3ef23\") " pod="openshift-console/console-84f5c68c96-xgd7v" Apr 16 16:26:51.898375 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:26:51.898233 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d86m\" (UniqueName: \"kubernetes.io/projected/5c138daf-8559-490a-ba08-2c12c9f3ef23-kube-api-access-6d86m\") pod \"console-84f5c68c96-xgd7v\" (UID: \"5c138daf-8559-490a-ba08-2c12c9f3ef23\") " pod="openshift-console/console-84f5c68c96-xgd7v" Apr 16 16:26:51.898375 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:26:51.898276 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5c138daf-8559-490a-ba08-2c12c9f3ef23-console-oauth-config\") pod \"console-84f5c68c96-xgd7v\" (UID: \"5c138daf-8559-490a-ba08-2c12c9f3ef23\") " pod="openshift-console/console-84f5c68c96-xgd7v" Apr 16 16:26:51.898375 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:26:51.898310 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5c138daf-8559-490a-ba08-2c12c9f3ef23-oauth-serving-cert\") pod \"console-84f5c68c96-xgd7v\" (UID: \"5c138daf-8559-490a-ba08-2c12c9f3ef23\") " pod="openshift-console/console-84f5c68c96-xgd7v" Apr 16 16:26:51.999093 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:26:51.999059 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/5c138daf-8559-490a-ba08-2c12c9f3ef23-console-oauth-config\") pod \"console-84f5c68c96-xgd7v\" (UID: \"5c138daf-8559-490a-ba08-2c12c9f3ef23\") " pod="openshift-console/console-84f5c68c96-xgd7v" Apr 16 16:26:51.999311 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:26:51.999105 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5c138daf-8559-490a-ba08-2c12c9f3ef23-oauth-serving-cert\") pod \"console-84f5c68c96-xgd7v\" (UID: \"5c138daf-8559-490a-ba08-2c12c9f3ef23\") " pod="openshift-console/console-84f5c68c96-xgd7v" Apr 16 16:26:51.999390 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:26:51.999340 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c138daf-8559-490a-ba08-2c12c9f3ef23-console-serving-cert\") pod \"console-84f5c68c96-xgd7v\" (UID: \"5c138daf-8559-490a-ba08-2c12c9f3ef23\") " pod="openshift-console/console-84f5c68c96-xgd7v" Apr 16 16:26:51.999448 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:26:51.999401 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5c138daf-8559-490a-ba08-2c12c9f3ef23-console-config\") pod \"console-84f5c68c96-xgd7v\" (UID: \"5c138daf-8559-490a-ba08-2c12c9f3ef23\") " pod="openshift-console/console-84f5c68c96-xgd7v" Apr 16 16:26:51.999448 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:26:51.999439 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5c138daf-8559-490a-ba08-2c12c9f3ef23-service-ca\") pod \"console-84f5c68c96-xgd7v\" (UID: \"5c138daf-8559-490a-ba08-2c12c9f3ef23\") " pod="openshift-console/console-84f5c68c96-xgd7v" Apr 16 16:26:51.999562 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:26:51.999467 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c138daf-8559-490a-ba08-2c12c9f3ef23-trusted-ca-bundle\") pod \"console-84f5c68c96-xgd7v\" (UID: \"5c138daf-8559-490a-ba08-2c12c9f3ef23\") " pod="openshift-console/console-84f5c68c96-xgd7v" Apr 16 16:26:51.999562 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:26:51.999544 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6d86m\" (UniqueName: \"kubernetes.io/projected/5c138daf-8559-490a-ba08-2c12c9f3ef23-kube-api-access-6d86m\") pod \"console-84f5c68c96-xgd7v\" (UID: \"5c138daf-8559-490a-ba08-2c12c9f3ef23\") " pod="openshift-console/console-84f5c68c96-xgd7v" Apr 16 16:26:51.999802 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:26:51.999772 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5c138daf-8559-490a-ba08-2c12c9f3ef23-oauth-serving-cert\") pod \"console-84f5c68c96-xgd7v\" (UID: \"5c138daf-8559-490a-ba08-2c12c9f3ef23\") " pod="openshift-console/console-84f5c68c96-xgd7v" Apr 16 16:26:52.000125 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:26:52.000090 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5c138daf-8559-490a-ba08-2c12c9f3ef23-service-ca\") pod \"console-84f5c68c96-xgd7v\" (UID: \"5c138daf-8559-490a-ba08-2c12c9f3ef23\") " pod="openshift-console/console-84f5c68c96-xgd7v" Apr 16 16:26:52.000236 ip-10-0-128-173 kubenswrapper[2572]: 
I0416 16:26:52.000222 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5c138daf-8559-490a-ba08-2c12c9f3ef23-console-config\") pod \"console-84f5c68c96-xgd7v\" (UID: \"5c138daf-8559-490a-ba08-2c12c9f3ef23\") " pod="openshift-console/console-84f5c68c96-xgd7v" Apr 16 16:26:52.000557 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:26:52.000526 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c138daf-8559-490a-ba08-2c12c9f3ef23-trusted-ca-bundle\") pod \"console-84f5c68c96-xgd7v\" (UID: \"5c138daf-8559-490a-ba08-2c12c9f3ef23\") " pod="openshift-console/console-84f5c68c96-xgd7v" Apr 16 16:26:52.001535 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:26:52.001513 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5c138daf-8559-490a-ba08-2c12c9f3ef23-console-oauth-config\") pod \"console-84f5c68c96-xgd7v\" (UID: \"5c138daf-8559-490a-ba08-2c12c9f3ef23\") " pod="openshift-console/console-84f5c68c96-xgd7v" Apr 16 16:26:52.001691 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:26:52.001672 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c138daf-8559-490a-ba08-2c12c9f3ef23-console-serving-cert\") pod \"console-84f5c68c96-xgd7v\" (UID: \"5c138daf-8559-490a-ba08-2c12c9f3ef23\") " pod="openshift-console/console-84f5c68c96-xgd7v" Apr 16 16:26:52.008690 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:26:52.008666 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d86m\" (UniqueName: \"kubernetes.io/projected/5c138daf-8559-490a-ba08-2c12c9f3ef23-kube-api-access-6d86m\") pod \"console-84f5c68c96-xgd7v\" (UID: \"5c138daf-8559-490a-ba08-2c12c9f3ef23\") " pod="openshift-console/console-84f5c68c96-xgd7v" Apr 16 16:26:52.157328 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:26:52.157214 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84f5c68c96-xgd7v" Apr 16 16:26:52.294960 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:26:52.294927 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-84f5c68c96-xgd7v"] Apr 16 16:26:52.297873 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:26:52.297842 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c138daf_8559_490a_ba08_2c12c9f3ef23.slice/crio-fa1dc4b8ac3a7984c7c881e5120987e448d1aba58438b7ee5c1842479e84a784 WatchSource:0}: Error finding container fa1dc4b8ac3a7984c7c881e5120987e448d1aba58438b7ee5c1842479e84a784: Status 404 returned error can't find the container with id fa1dc4b8ac3a7984c7c881e5120987e448d1aba58438b7ee5c1842479e84a784 Apr 16 16:26:52.573098 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:26:52.573060 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84f5c68c96-xgd7v" event={"ID":"5c138daf-8559-490a-ba08-2c12c9f3ef23","Type":"ContainerStarted","Data":"931a54229cead3bc51679edb6e6b0990de22ee0b8e118086607bd9ff6c3f1908"} Apr 16 16:26:52.573098 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:26:52.573100 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84f5c68c96-xgd7v" event={"ID":"5c138daf-8559-490a-ba08-2c12c9f3ef23","Type":"ContainerStarted","Data":"fa1dc4b8ac3a7984c7c881e5120987e448d1aba58438b7ee5c1842479e84a784"} Apr 16 16:26:52.603596 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:26:52.603554 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-84f5c68c96-xgd7v" podStartSLOduration=1.6035402250000002 podStartE2EDuration="1.603540225s" podCreationTimestamp="2026-04-16 16:26:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 16:26:52.602405076 +0000 UTC m=+169.381251968" watchObservedRunningTime="2026-04-16 16:26:52.603540225 +0000 UTC m=+169.382387094" Apr 16 16:27:02.157410 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:02.157368 2572 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-84f5c68c96-xgd7v" Apr 16 16:27:02.157802 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:02.157425 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-84f5c68c96-xgd7v" Apr 16 16:27:02.162084 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:02.162062 2572 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-84f5c68c96-xgd7v" Apr 16 16:27:02.605676 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:02.605651 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-84f5c68c96-xgd7v" Apr 16 16:27:02.656208 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:02.656174 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-69c4855bf4-86bm7"] Apr 16 16:27:27.681086 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:27.681043 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-console/console-69c4855bf4-86bm7" podUID="04bd9dae-a6a6-4047-aa10-31f6c792f4e2" containerName="console" containerID="cri-o://de6e75cde5558ac5c1012bdb1b3080d1eeee550ae2a6e54bb6eff225788a499c" gracePeriod=15 Apr 16 16:27:27.916281 ip-10-0-128-173 kubenswrapper[2572]: I0416 
16:27:27.916241 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-69c4855bf4-86bm7_04bd9dae-a6a6-4047-aa10-31f6c792f4e2/console/0.log" Apr 16 16:27:27.916399 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:27.916321 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-69c4855bf4-86bm7" Apr 16 16:27:27.997844 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:27.997812 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-console-config\") pod \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\" (UID: \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\") " Apr 16 16:27:27.998020 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:27.997863 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-console-oauth-config\") pod \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\" (UID: \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\") " Apr 16 16:27:27.998020 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:27.997890 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkmz2\" (UniqueName: \"kubernetes.io/projected/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-kube-api-access-mkmz2\") pod \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\" (UID: \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\") " Apr 16 16:27:27.998020 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:27.997929 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-trusted-ca-bundle\") pod \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\" (UID: \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\") " Apr 16 16:27:27.998020 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:27.997954 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-oauth-serving-cert\") pod \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\" (UID: \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\") " Apr 16 16:27:27.998020 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:27.997984 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-service-ca\") pod \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\" (UID: \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\") " Apr 16 16:27:27.998300 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:27.998048 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-console-serving-cert\") pod \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\" (UID: \"04bd9dae-a6a6-4047-aa10-31f6c792f4e2\") " Apr 16 16:27:27.998364 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:27.998212 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-console-config" (OuterVolumeSpecName: "console-config") pod "04bd9dae-a6a6-4047-aa10-31f6c792f4e2" (UID: "04bd9dae-a6a6-4047-aa10-31f6c792f4e2"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 16:27:27.998421 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:27.998364 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "04bd9dae-a6a6-4047-aa10-31f6c792f4e2" (UID: "04bd9dae-a6a6-4047-aa10-31f6c792f4e2"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 16:27:27.998479 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:27.998442 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "04bd9dae-a6a6-4047-aa10-31f6c792f4e2" (UID: "04bd9dae-a6a6-4047-aa10-31f6c792f4e2"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 16:27:27.998479 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:27.998449 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-service-ca" (OuterVolumeSpecName: "service-ca") pod "04bd9dae-a6a6-4047-aa10-31f6c792f4e2" (UID: "04bd9dae-a6a6-4047-aa10-31f6c792f4e2"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 16:27:28.000310 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:28.000278 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "04bd9dae-a6a6-4047-aa10-31f6c792f4e2" (UID: "04bd9dae-a6a6-4047-aa10-31f6c792f4e2"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 16:27:28.000560 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:28.000543 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-kube-api-access-mkmz2" (OuterVolumeSpecName: "kube-api-access-mkmz2") pod "04bd9dae-a6a6-4047-aa10-31f6c792f4e2" (UID: "04bd9dae-a6a6-4047-aa10-31f6c792f4e2"). InnerVolumeSpecName "kube-api-access-mkmz2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 16:27:28.000603 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:28.000576 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "04bd9dae-a6a6-4047-aa10-31f6c792f4e2" (UID: "04bd9dae-a6a6-4047-aa10-31f6c792f4e2"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 16:27:28.098780 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:28.098744 2572 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mkmz2\" (UniqueName: \"kubernetes.io/projected/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-kube-api-access-mkmz2\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 16:27:28.098780 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:28.098771 2572 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-trusted-ca-bundle\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 16:27:28.098780 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:28.098784 2572 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-oauth-serving-cert\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 16:27:28.099010 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:28.098795 2572 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-service-ca\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 16:27:28.099010 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:28.098805 2572 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-console-serving-cert\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 16:27:28.099010 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:28.098814 2572 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-console-config\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 16:27:28.099010 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:28.098822 2572 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/04bd9dae-a6a6-4047-aa10-31f6c792f4e2-console-oauth-config\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 16:27:28.671074 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:28.671042 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-69c4855bf4-86bm7_04bd9dae-a6a6-4047-aa10-31f6c792f4e2/console/0.log" Apr 16 16:27:28.671275 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:28.671083 2572 generic.go:358] "Generic (PLEG): container finished" podID="04bd9dae-a6a6-4047-aa10-31f6c792f4e2" containerID="de6e75cde5558ac5c1012bdb1b3080d1eeee550ae2a6e54bb6eff225788a499c" exitCode=2 Apr 16 16:27:28.671275 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:28.671115 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69c4855bf4-86bm7" event={"ID":"04bd9dae-a6a6-4047-aa10-31f6c792f4e2","Type":"ContainerDied","Data":"de6e75cde5558ac5c1012bdb1b3080d1eeee550ae2a6e54bb6eff225788a499c"} Apr 16 16:27:28.671275 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:28.671163 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-69c4855bf4-86bm7" Apr 16 16:27:28.671275 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:28.671189 2572 scope.go:117] "RemoveContainer" containerID="de6e75cde5558ac5c1012bdb1b3080d1eeee550ae2a6e54bb6eff225788a499c" Apr 16 16:27:28.671275 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:28.671161 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69c4855bf4-86bm7" event={"ID":"04bd9dae-a6a6-4047-aa10-31f6c792f4e2","Type":"ContainerDied","Data":"a82a7a6e26ea95939bfdf58410f01555708092d301ae986e1738a02ac696fb87"} Apr 16 16:27:28.681847 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:28.679727 2572 scope.go:117] "RemoveContainer" containerID="de6e75cde5558ac5c1012bdb1b3080d1eeee550ae2a6e54bb6eff225788a499c" Apr 16 16:27:28.682161 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:27:28.681923 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de6e75cde5558ac5c1012bdb1b3080d1eeee550ae2a6e54bb6eff225788a499c\": container with ID starting with de6e75cde5558ac5c1012bdb1b3080d1eeee550ae2a6e54bb6eff225788a499c not found: ID does not exist" containerID="de6e75cde5558ac5c1012bdb1b3080d1eeee550ae2a6e54bb6eff225788a499c" Apr 16 16:27:28.682161 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:28.681954 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de6e75cde5558ac5c1012bdb1b3080d1eeee550ae2a6e54bb6eff225788a499c"} err="failed to get container status \"de6e75cde5558ac5c1012bdb1b3080d1eeee550ae2a6e54bb6eff225788a499c\": rpc error: code = NotFound desc = could not find container \"de6e75cde5558ac5c1012bdb1b3080d1eeee550ae2a6e54bb6eff225788a499c\": container with ID starting with de6e75cde5558ac5c1012bdb1b3080d1eeee550ae2a6e54bb6eff225788a499c not found: ID does not exist" Apr 16 16:27:28.691633 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:28.691606 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-69c4855bf4-86bm7"] Apr 16 16:27:28.699174 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:28.699150 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-69c4855bf4-86bm7"] Apr 16 16:27:29.908963 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:27:29.908930 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04bd9dae-a6a6-4047-aa10-31f6c792f4e2" path="/var/lib/kubelet/pods/04bd9dae-a6a6-4047-aa10-31f6c792f4e2/volumes" Apr 16 16:29:03.822390 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:29:03.822362 2572 kubelet.go:1628] "Image garbage collection succeeded" Apr 16 16:30:34.586175 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:34.586140 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-6468586f66-jgj22"] Apr 16 16:30:34.586616 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:34.586493 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="04bd9dae-a6a6-4047-aa10-31f6c792f4e2" containerName="console" Apr 16 16:30:34.586616 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:34.586505 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="04bd9dae-a6a6-4047-aa10-31f6c792f4e2" containerName="console" Apr 16 16:30:34.586616 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:34.586554 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="04bd9dae-a6a6-4047-aa10-31f6c792f4e2" containerName="console" Apr 16 16:30:34.588337 ip-10-0-128-173 
kubenswrapper[2572]: I0416 16:30:34.588319 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6468586f66-jgj22" Apr 16 16:30:34.618066 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:34.618033 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6468586f66-jgj22"] Apr 16 16:30:34.630281 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:34.630227 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/68a222c3-09f6-4cfd-a36e-bb6380d02276-oauth-serving-cert\") pod \"console-6468586f66-jgj22\" (UID: \"68a222c3-09f6-4cfd-a36e-bb6380d02276\") " pod="openshift-console/console-6468586f66-jgj22" Apr 16 16:30:34.630452 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:34.630293 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68a222c3-09f6-4cfd-a36e-bb6380d02276-trusted-ca-bundle\") pod \"console-6468586f66-jgj22\" (UID: \"68a222c3-09f6-4cfd-a36e-bb6380d02276\") " pod="openshift-console/console-6468586f66-jgj22" Apr 16 16:30:34.630452 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:34.630330 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/68a222c3-09f6-4cfd-a36e-bb6380d02276-console-serving-cert\") pod \"console-6468586f66-jgj22\" (UID: \"68a222c3-09f6-4cfd-a36e-bb6380d02276\") " pod="openshift-console/console-6468586f66-jgj22" Apr 16 16:30:34.630452 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:34.630428 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/68a222c3-09f6-4cfd-a36e-bb6380d02276-service-ca\") pod \"console-6468586f66-jgj22\" (UID: \"68a222c3-09f6-4cfd-a36e-bb6380d02276\") " pod="openshift-console/console-6468586f66-jgj22" Apr 16 16:30:34.630571 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:34.630467 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zl65\" (UniqueName: \"kubernetes.io/projected/68a222c3-09f6-4cfd-a36e-bb6380d02276-kube-api-access-9zl65\") pod \"console-6468586f66-jgj22\" (UID: \"68a222c3-09f6-4cfd-a36e-bb6380d02276\") " pod="openshift-console/console-6468586f66-jgj22" Apr 16 16:30:34.630571 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:34.630557 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/68a222c3-09f6-4cfd-a36e-bb6380d02276-console-config\") pod \"console-6468586f66-jgj22\" (UID: \"68a222c3-09f6-4cfd-a36e-bb6380d02276\") " pod="openshift-console/console-6468586f66-jgj22" Apr 16 16:30:34.630658 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:34.630586 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/68a222c3-09f6-4cfd-a36e-bb6380d02276-console-oauth-config\") pod \"console-6468586f66-jgj22\" (UID: \"68a222c3-09f6-4cfd-a36e-bb6380d02276\") " pod="openshift-console/console-6468586f66-jgj22" Apr 16 16:30:34.731003 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:34.730963 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/68a222c3-09f6-4cfd-a36e-bb6380d02276-oauth-serving-cert\") pod \"console-6468586f66-jgj22\" (UID: \"68a222c3-09f6-4cfd-a36e-bb6380d02276\") " pod="openshift-console/console-6468586f66-jgj22" Apr 16 16:30:34.731003 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:34.731002 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68a222c3-09f6-4cfd-a36e-bb6380d02276-trusted-ca-bundle\") pod \"console-6468586f66-jgj22\" (UID: \"68a222c3-09f6-4cfd-a36e-bb6380d02276\") " pod="openshift-console/console-6468586f66-jgj22" Apr 16 16:30:34.731308 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:34.731026 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/68a222c3-09f6-4cfd-a36e-bb6380d02276-console-serving-cert\") pod \"console-6468586f66-jgj22\" (UID: \"68a222c3-09f6-4cfd-a36e-bb6380d02276\") " pod="openshift-console/console-6468586f66-jgj22" Apr 16 16:30:34.731308 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:34.731060 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/68a222c3-09f6-4cfd-a36e-bb6380d02276-service-ca\") pod \"console-6468586f66-jgj22\" (UID: \"68a222c3-09f6-4cfd-a36e-bb6380d02276\") " pod="openshift-console/console-6468586f66-jgj22" Apr 16 16:30:34.731308 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:34.731085 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9zl65\" (UniqueName: \"kubernetes.io/projected/68a222c3-09f6-4cfd-a36e-bb6380d02276-kube-api-access-9zl65\") pod \"console-6468586f66-jgj22\" (UID: \"68a222c3-09f6-4cfd-a36e-bb6380d02276\") " pod="openshift-console/console-6468586f66-jgj22" Apr 16 16:30:34.731308 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:34.731134 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/68a222c3-09f6-4cfd-a36e-bb6380d02276-console-config\") pod \"console-6468586f66-jgj22\" (UID: \"68a222c3-09f6-4cfd-a36e-bb6380d02276\") " pod="openshift-console/console-6468586f66-jgj22" Apr 16 16:30:34.731308 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:34.731152 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/68a222c3-09f6-4cfd-a36e-bb6380d02276-console-oauth-config\") pod \"console-6468586f66-jgj22\" (UID: \"68a222c3-09f6-4cfd-a36e-bb6380d02276\") " pod="openshift-console/console-6468586f66-jgj22" Apr 16 16:30:34.731902 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:34.731871 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/68a222c3-09f6-4cfd-a36e-bb6380d02276-oauth-serving-cert\") pod \"console-6468586f66-jgj22\" (UID: \"68a222c3-09f6-4cfd-a36e-bb6380d02276\") " pod="openshift-console/console-6468586f66-jgj22" Apr 16 16:30:34.732095 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:34.732055 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/68a222c3-09f6-4cfd-a36e-bb6380d02276-console-config\") pod \"console-6468586f66-jgj22\" (UID: \"68a222c3-09f6-4cfd-a36e-bb6380d02276\") " pod="openshift-console/console-6468586f66-jgj22" Apr 16 
16:30:34.732095 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:34.732084 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/68a222c3-09f6-4cfd-a36e-bb6380d02276-service-ca\") pod \"console-6468586f66-jgj22\" (UID: \"68a222c3-09f6-4cfd-a36e-bb6380d02276\") " pod="openshift-console/console-6468586f66-jgj22" Apr 16 16:30:34.732297 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:34.732137 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68a222c3-09f6-4cfd-a36e-bb6380d02276-trusted-ca-bundle\") pod \"console-6468586f66-jgj22\" (UID: \"68a222c3-09f6-4cfd-a36e-bb6380d02276\") " pod="openshift-console/console-6468586f66-jgj22" Apr 16 16:30:34.733719 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:34.733697 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/68a222c3-09f6-4cfd-a36e-bb6380d02276-console-oauth-config\") pod \"console-6468586f66-jgj22\" (UID: \"68a222c3-09f6-4cfd-a36e-bb6380d02276\") " pod="openshift-console/console-6468586f66-jgj22" Apr 16 16:30:34.734122 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:34.734104 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/68a222c3-09f6-4cfd-a36e-bb6380d02276-console-serving-cert\") pod \"console-6468586f66-jgj22\" (UID: \"68a222c3-09f6-4cfd-a36e-bb6380d02276\") " pod="openshift-console/console-6468586f66-jgj22" Apr 16 16:30:34.743050 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:34.743019 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zl65\" (UniqueName: \"kubernetes.io/projected/68a222c3-09f6-4cfd-a36e-bb6380d02276-kube-api-access-9zl65\") pod \"console-6468586f66-jgj22\" (UID: \"68a222c3-09f6-4cfd-a36e-bb6380d02276\") " pod="openshift-console/console-6468586f66-jgj22" Apr 16 16:30:34.897175 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:34.897072 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6468586f66-jgj22" Apr 16 16:30:35.016962 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:35.016905 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6468586f66-jgj22"] Apr 16 16:30:35.019529 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:30:35.019503 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68a222c3_09f6_4cfd_a36e_bb6380d02276.slice/crio-70f940255d7c5fb2906ce5ba10e7b59e728bc3862d22872df12aaae3bde46a4b WatchSource:0}: Error finding container 70f940255d7c5fb2906ce5ba10e7b59e728bc3862d22872df12aaae3bde46a4b: Status 404 returned error can't find the container with id 70f940255d7c5fb2906ce5ba10e7b59e728bc3862d22872df12aaae3bde46a4b Apr 16 16:30:35.021289 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:35.021272 2572 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 16:30:35.188725 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:35.188633 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6468586f66-jgj22" event={"ID":"68a222c3-09f6-4cfd-a36e-bb6380d02276","Type":"ContainerStarted","Data":"25cc97f0de5e9bc0cfa07ae67ddcdc385f84441bee987d86aa6656fb2f7d5207"} Apr 16 16:30:35.188725 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:35.188671 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6468586f66-jgj22" event={"ID":"68a222c3-09f6-4cfd-a36e-bb6380d02276","Type":"ContainerStarted","Data":"70f940255d7c5fb2906ce5ba10e7b59e728bc3862d22872df12aaae3bde46a4b"} Apr 16 16:30:35.209974 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:35.209924 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6468586f66-jgj22" podStartSLOduration=1.209909816 podStartE2EDuration="1.209909816s" podCreationTimestamp="2026-04-16 16:30:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 16:30:35.208199887 +0000 UTC m=+391.987046779" watchObservedRunningTime="2026-04-16 16:30:35.209909816 +0000 UTC m=+391.988756707" Apr 16 16:30:44.897764 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:44.897709 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-6468586f66-jgj22" Apr 16 16:30:44.898175 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:44.897872 2572 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6468586f66-jgj22" Apr 16 16:30:44.902547 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:44.902526 2572 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6468586f66-jgj22" Apr 16 16:30:45.219170 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:45.219145 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6468586f66-jgj22" Apr 16 16:30:45.266532 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:30:45.266501 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-84f5c68c96-xgd7v"] Apr 16 16:31:10.290693 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:10.290622 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-console/console-84f5c68c96-xgd7v" podUID="5c138daf-8559-490a-ba08-2c12c9f3ef23" 
containerName="console" containerID="cri-o://931a54229cead3bc51679edb6e6b0990de22ee0b8e118086607bd9ff6c3f1908" gracePeriod=15 Apr 16 16:31:10.530381 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:10.530349 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-84f5c68c96-xgd7v_5c138daf-8559-490a-ba08-2c12c9f3ef23/console/0.log" Apr 16 16:31:10.530533 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:10.530425 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84f5c68c96-xgd7v" Apr 16 16:31:10.540687 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:10.540660 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5c138daf-8559-490a-ba08-2c12c9f3ef23-service-ca\") pod \"5c138daf-8559-490a-ba08-2c12c9f3ef23\" (UID: \"5c138daf-8559-490a-ba08-2c12c9f3ef23\") " Apr 16 16:31:10.540884 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:10.540727 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c138daf-8559-490a-ba08-2c12c9f3ef23-trusted-ca-bundle\") pod \"5c138daf-8559-490a-ba08-2c12c9f3ef23\" (UID: \"5c138daf-8559-490a-ba08-2c12c9f3ef23\") " Apr 16 16:31:10.540884 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:10.540751 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5c138daf-8559-490a-ba08-2c12c9f3ef23-console-config\") pod \"5c138daf-8559-490a-ba08-2c12c9f3ef23\" (UID: \"5c138daf-8559-490a-ba08-2c12c9f3ef23\") " Apr 16 16:31:10.540884 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:10.540772 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c138daf-8559-490a-ba08-2c12c9f3ef23-console-serving-cert\") pod \"5c138daf-8559-490a-ba08-2c12c9f3ef23\" (UID: \"5c138daf-8559-490a-ba08-2c12c9f3ef23\") " Apr 16 16:31:10.540884 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:10.540811 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5c138daf-8559-490a-ba08-2c12c9f3ef23-oauth-serving-cert\") pod \"5c138daf-8559-490a-ba08-2c12c9f3ef23\" (UID: \"5c138daf-8559-490a-ba08-2c12c9f3ef23\") " Apr 16 16:31:10.541102 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:10.540941 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5c138daf-8559-490a-ba08-2c12c9f3ef23-console-oauth-config\") pod \"5c138daf-8559-490a-ba08-2c12c9f3ef23\" (UID: \"5c138daf-8559-490a-ba08-2c12c9f3ef23\") " Apr 16 16:31:10.541102 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:10.541045 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c138daf-8559-490a-ba08-2c12c9f3ef23-service-ca" (OuterVolumeSpecName: "service-ca") pod "5c138daf-8559-490a-ba08-2c12c9f3ef23" (UID: "5c138daf-8559-490a-ba08-2c12c9f3ef23"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 16:31:10.541197 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:10.541183 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c138daf-8559-490a-ba08-2c12c9f3ef23-console-config" (OuterVolumeSpecName: "console-config") pod "5c138daf-8559-490a-ba08-2c12c9f3ef23" (UID: "5c138daf-8559-490a-ba08-2c12c9f3ef23"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 16:31:10.541269 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:10.541220 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c138daf-8559-490a-ba08-2c12c9f3ef23-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "5c138daf-8559-490a-ba08-2c12c9f3ef23" (UID: "5c138daf-8559-490a-ba08-2c12c9f3ef23"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 16:31:10.541269 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:10.541227 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c138daf-8559-490a-ba08-2c12c9f3ef23-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "5c138daf-8559-490a-ba08-2c12c9f3ef23" (UID: "5c138daf-8559-490a-ba08-2c12c9f3ef23"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 16:31:10.541375 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:10.541237 2572 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5c138daf-8559-490a-ba08-2c12c9f3ef23-service-ca\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 16:31:10.543014 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:10.542984 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c138daf-8559-490a-ba08-2c12c9f3ef23-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "5c138daf-8559-490a-ba08-2c12c9f3ef23" (UID: "5c138daf-8559-490a-ba08-2c12c9f3ef23"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 16:31:10.543095 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:10.543015 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c138daf-8559-490a-ba08-2c12c9f3ef23-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "5c138daf-8559-490a-ba08-2c12c9f3ef23" (UID: "5c138daf-8559-490a-ba08-2c12c9f3ef23"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 16:31:10.641579 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:10.641549 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6d86m\" (UniqueName: \"kubernetes.io/projected/5c138daf-8559-490a-ba08-2c12c9f3ef23-kube-api-access-6d86m\") pod \"5c138daf-8559-490a-ba08-2c12c9f3ef23\" (UID: \"5c138daf-8559-490a-ba08-2c12c9f3ef23\") " Apr 16 16:31:10.641758 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:10.641724 2572 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c138daf-8559-490a-ba08-2c12c9f3ef23-trusted-ca-bundle\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 16:31:10.641758 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:10.641735 2572 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5c138daf-8559-490a-ba08-2c12c9f3ef23-console-config\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 16:31:10.641758 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:10.641744 2572 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c138daf-8559-490a-ba08-2c12c9f3ef23-console-serving-cert\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 16:31:10.641758 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:10.641753 2572 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5c138daf-8559-490a-ba08-2c12c9f3ef23-oauth-serving-cert\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 16:31:10.641891 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:10.641762 2572 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5c138daf-8559-490a-ba08-2c12c9f3ef23-console-oauth-config\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 16:31:10.643600 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:10.643565 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c138daf-8559-490a-ba08-2c12c9f3ef23-kube-api-access-6d86m" (OuterVolumeSpecName: "kube-api-access-6d86m") pod "5c138daf-8559-490a-ba08-2c12c9f3ef23" (UID: "5c138daf-8559-490a-ba08-2c12c9f3ef23"). InnerVolumeSpecName "kube-api-access-6d86m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 16:31:10.742778 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:10.742738 2572 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6d86m\" (UniqueName: \"kubernetes.io/projected/5c138daf-8559-490a-ba08-2c12c9f3ef23-kube-api-access-6d86m\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 16:31:11.290288 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:11.290263 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-84f5c68c96-xgd7v_5c138daf-8559-490a-ba08-2c12c9f3ef23/console/0.log" Apr 16 16:31:11.290493 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:11.290303 2572 generic.go:358] "Generic (PLEG): container finished" podID="5c138daf-8559-490a-ba08-2c12c9f3ef23" containerID="931a54229cead3bc51679edb6e6b0990de22ee0b8e118086607bd9ff6c3f1908" exitCode=2 Apr 16 16:31:11.290493 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:11.290386 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84f5c68c96-xgd7v" Apr 16 16:31:11.290493 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:11.290398 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84f5c68c96-xgd7v" event={"ID":"5c138daf-8559-490a-ba08-2c12c9f3ef23","Type":"ContainerDied","Data":"931a54229cead3bc51679edb6e6b0990de22ee0b8e118086607bd9ff6c3f1908"} Apr 16 16:31:11.290493 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:11.290432 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84f5c68c96-xgd7v" event={"ID":"5c138daf-8559-490a-ba08-2c12c9f3ef23","Type":"ContainerDied","Data":"fa1dc4b8ac3a7984c7c881e5120987e448d1aba58438b7ee5c1842479e84a784"} Apr 16 16:31:11.290493 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:11.290446 2572 scope.go:117] "RemoveContainer" containerID="931a54229cead3bc51679edb6e6b0990de22ee0b8e118086607bd9ff6c3f1908" Apr 16 16:31:11.299164 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:11.299138 2572 scope.go:117] "RemoveContainer" containerID="931a54229cead3bc51679edb6e6b0990de22ee0b8e118086607bd9ff6c3f1908" Apr 16 16:31:11.299526 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:31:11.299435 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"931a54229cead3bc51679edb6e6b0990de22ee0b8e118086607bd9ff6c3f1908\": container with ID starting with 931a54229cead3bc51679edb6e6b0990de22ee0b8e118086607bd9ff6c3f1908 not found: ID does not exist" containerID="931a54229cead3bc51679edb6e6b0990de22ee0b8e118086607bd9ff6c3f1908" Apr 16 16:31:11.299526 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:11.299468 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"931a54229cead3bc51679edb6e6b0990de22ee0b8e118086607bd9ff6c3f1908"} err="failed to get container status \"931a54229cead3bc51679edb6e6b0990de22ee0b8e118086607bd9ff6c3f1908\": rpc error: code = NotFound desc = could not find container \"931a54229cead3bc51679edb6e6b0990de22ee0b8e118086607bd9ff6c3f1908\": container with ID starting with 931a54229cead3bc51679edb6e6b0990de22ee0b8e118086607bd9ff6c3f1908 not found: ID does not exist" Apr 16 16:31:11.311730 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:11.311707 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-84f5c68c96-xgd7v"] Apr 16 16:31:11.315638 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:11.315617 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-84f5c68c96-xgd7v"] Apr 16 16:31:11.909003 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:31:11.908966 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c138daf-8559-490a-ba08-2c12c9f3ef23" path="/var/lib/kubelet/pods/5c138daf-8559-490a-ba08-2c12c9f3ef23/volumes" Apr 16 16:38:35.672582 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:35.672537 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ckjrtf"] Apr 16 16:38:35.673194 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:35.672968 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5c138daf-8559-490a-ba08-2c12c9f3ef23" containerName="console" Apr 16 16:38:35.673194 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:35.672985 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c138daf-8559-490a-ba08-2c12c9f3ef23" containerName="console" Apr 16 
Apr 16 16:38:35.676167 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:35.676145 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ckjrtf"
Apr 16 16:38:35.679008 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:35.678989 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Apr 16 16:38:35.680101 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:35.680085 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Apr 16 16:38:35.680191 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:35.680123 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-hw4gg\""
Apr 16 16:38:35.696801 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:35.696772 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ckjrtf"]
Apr 16 16:38:35.715946 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:35.715915 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjfsp\" (UniqueName: \"kubernetes.io/projected/0299ee41-1aaf-4008-9a79-7d14f04ed854-kube-api-access-mjfsp\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ckjrtf\" (UID: \"0299ee41-1aaf-4008-9a79-7d14f04ed854\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ckjrtf"
Apr 16 16:38:35.716118 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:35.715961 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0299ee41-1aaf-4008-9a79-7d14f04ed854-util\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ckjrtf\" (UID: \"0299ee41-1aaf-4008-9a79-7d14f04ed854\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ckjrtf"
Apr 16 16:38:35.716118 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:35.716064 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0299ee41-1aaf-4008-9a79-7d14f04ed854-bundle\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ckjrtf\" (UID: \"0299ee41-1aaf-4008-9a79-7d14f04ed854\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ckjrtf"
Apr 16 16:38:35.817272 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:35.817215 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mjfsp\" (UniqueName: \"kubernetes.io/projected/0299ee41-1aaf-4008-9a79-7d14f04ed854-kube-api-access-mjfsp\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ckjrtf\" (UID: \"0299ee41-1aaf-4008-9a79-7d14f04ed854\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ckjrtf"
Apr 16 16:38:35.817477 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:35.817290 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0299ee41-1aaf-4008-9a79-7d14f04ed854-util\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ckjrtf\" (UID: \"0299ee41-1aaf-4008-9a79-7d14f04ed854\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ckjrtf"
Apr 16 16:38:35.817477 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:35.817334 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0299ee41-1aaf-4008-9a79-7d14f04ed854-bundle\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ckjrtf\" (UID: \"0299ee41-1aaf-4008-9a79-7d14f04ed854\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ckjrtf"
Apr 16 16:38:35.817681 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:35.817662 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0299ee41-1aaf-4008-9a79-7d14f04ed854-util\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ckjrtf\" (UID: \"0299ee41-1aaf-4008-9a79-7d14f04ed854\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ckjrtf"
Apr 16 16:38:35.817736 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:35.817686 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0299ee41-1aaf-4008-9a79-7d14f04ed854-bundle\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ckjrtf\" (UID: \"0299ee41-1aaf-4008-9a79-7d14f04ed854\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ckjrtf"
Apr 16 16:38:35.826898 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:35.826868 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjfsp\" (UniqueName: \"kubernetes.io/projected/0299ee41-1aaf-4008-9a79-7d14f04ed854-kube-api-access-mjfsp\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ckjrtf\" (UID: \"0299ee41-1aaf-4008-9a79-7d14f04ed854\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ckjrtf"
Apr 16 16:38:35.985038 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:35.985006 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ckjrtf"
Apr 16 16:38:36.106305 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:36.106277 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ckjrtf"]
Apr 16 16:38:36.108807 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:38:36.108777 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0299ee41_1aaf_4008_9a79_7d14f04ed854.slice/crio-2e450fc774d01530d1fb47ecd41060f7f0341da2d867cce67dac06a601914ee8 WatchSource:0}: Error finding container 2e450fc774d01530d1fb47ecd41060f7f0341da2d867cce67dac06a601914ee8: Status 404 returned error can't find the container with id 2e450fc774d01530d1fb47ecd41060f7f0341da2d867cce67dac06a601914ee8
Apr 16 16:38:36.110592 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:36.110570 2572 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 16 16:38:36.517907 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:36.517865 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ckjrtf" event={"ID":"0299ee41-1aaf-4008-9a79-7d14f04ed854","Type":"ContainerStarted","Data":"2e450fc774d01530d1fb47ecd41060f7f0341da2d867cce67dac06a601914ee8"}
Apr 16 16:38:41.535023 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:41.534986 2572 generic.go:358] "Generic (PLEG): container finished" podID="0299ee41-1aaf-4008-9a79-7d14f04ed854" containerID="f0c40fcdb1a8f044391e4f96e099e7e8f4d6394126db752062eb490e16aa312d" exitCode=0
Apr 16 16:38:41.535430 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:41.535037 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ckjrtf" event={"ID":"0299ee41-1aaf-4008-9a79-7d14f04ed854","Type":"ContainerDied","Data":"f0c40fcdb1a8f044391e4f96e099e7e8f4d6394126db752062eb490e16aa312d"}
Apr 16 16:38:43.544008 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:43.543975 2572 generic.go:358] "Generic (PLEG): container finished" podID="0299ee41-1aaf-4008-9a79-7d14f04ed854" containerID="18ec6a0ff0220a694c5669bf8a8d875e7d13cabe58f7406fa4a276f5165fdb22" exitCode=0
Apr 16 16:38:43.544382 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:43.544027 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ckjrtf" event={"ID":"0299ee41-1aaf-4008-9a79-7d14f04ed854","Type":"ContainerDied","Data":"18ec6a0ff0220a694c5669bf8a8d875e7d13cabe58f7406fa4a276f5165fdb22"}
Apr 16 16:38:49.563178 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:49.563145 2572 generic.go:358] "Generic (PLEG): container finished" podID="0299ee41-1aaf-4008-9a79-7d14f04ed854" containerID="58bcf5459f0bead18cf2af0597db252d97e10a4e043c280accf9167b12a3048c" exitCode=0
Apr 16 16:38:49.563600 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:49.563227 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ckjrtf" event={"ID":"0299ee41-1aaf-4008-9a79-7d14f04ed854","Type":"ContainerDied","Data":"58bcf5459f0bead18cf2af0597db252d97e10a4e043c280accf9167b12a3048c"}
Apr 16 16:38:50.685256 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:50.685217 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ckjrtf"
Apr 16 16:38:50.746059 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:50.746022 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0299ee41-1aaf-4008-9a79-7d14f04ed854-util\") pod \"0299ee41-1aaf-4008-9a79-7d14f04ed854\" (UID: \"0299ee41-1aaf-4008-9a79-7d14f04ed854\") "
Apr 16 16:38:50.746225 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:50.746132 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0299ee41-1aaf-4008-9a79-7d14f04ed854-bundle\") pod \"0299ee41-1aaf-4008-9a79-7d14f04ed854\" (UID: \"0299ee41-1aaf-4008-9a79-7d14f04ed854\") "
Apr 16 16:38:50.746225 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:50.746193 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjfsp\" (UniqueName: \"kubernetes.io/projected/0299ee41-1aaf-4008-9a79-7d14f04ed854-kube-api-access-mjfsp\") pod \"0299ee41-1aaf-4008-9a79-7d14f04ed854\" (UID: \"0299ee41-1aaf-4008-9a79-7d14f04ed854\") "
Apr 16 16:38:50.746699 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:50.746675 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0299ee41-1aaf-4008-9a79-7d14f04ed854-bundle" (OuterVolumeSpecName: "bundle") pod "0299ee41-1aaf-4008-9a79-7d14f04ed854" (UID: "0299ee41-1aaf-4008-9a79-7d14f04ed854"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Apr 16 16:38:50.748400 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:50.748368 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0299ee41-1aaf-4008-9a79-7d14f04ed854-kube-api-access-mjfsp" (OuterVolumeSpecName: "kube-api-access-mjfsp") pod "0299ee41-1aaf-4008-9a79-7d14f04ed854" (UID: "0299ee41-1aaf-4008-9a79-7d14f04ed854"). InnerVolumeSpecName "kube-api-access-mjfsp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 16 16:38:50.750305 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:50.750285 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0299ee41-1aaf-4008-9a79-7d14f04ed854-util" (OuterVolumeSpecName: "util") pod "0299ee41-1aaf-4008-9a79-7d14f04ed854" (UID: "0299ee41-1aaf-4008-9a79-7d14f04ed854"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 16:38:50.847069 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:50.846979 2572 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0299ee41-1aaf-4008-9a79-7d14f04ed854-util\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 16:38:50.847069 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:50.847009 2572 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0299ee41-1aaf-4008-9a79-7d14f04ed854-bundle\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 16:38:50.847069 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:50.847018 2572 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjfsp\" (UniqueName: \"kubernetes.io/projected/0299ee41-1aaf-4008-9a79-7d14f04ed854-kube-api-access-mjfsp\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 16:38:51.570535 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:51.570449 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ckjrtf" event={"ID":"0299ee41-1aaf-4008-9a79-7d14f04ed854","Type":"ContainerDied","Data":"2e450fc774d01530d1fb47ecd41060f7f0341da2d867cce67dac06a601914ee8"} Apr 16 16:38:51.570535 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:51.570486 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e450fc774d01530d1fb47ecd41060f7f0341da2d867cce67dac06a601914ee8" Apr 16 16:38:51.570535 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:51.570495 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ckjrtf" Apr 16 16:38:57.613235 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:57.613201 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-z4t2v"] Apr 16 16:38:57.613682 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:57.613552 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0299ee41-1aaf-4008-9a79-7d14f04ed854" containerName="util" Apr 16 16:38:57.613682 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:57.613566 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="0299ee41-1aaf-4008-9a79-7d14f04ed854" containerName="util" Apr 16 16:38:57.613682 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:57.613588 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0299ee41-1aaf-4008-9a79-7d14f04ed854" containerName="extract" Apr 16 16:38:57.613682 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:57.613594 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="0299ee41-1aaf-4008-9a79-7d14f04ed854" containerName="extract" Apr 16 16:38:57.613682 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:57.613604 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0299ee41-1aaf-4008-9a79-7d14f04ed854" containerName="pull" Apr 16 16:38:57.613682 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:57.613609 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="0299ee41-1aaf-4008-9a79-7d14f04ed854" containerName="pull" Apr 16 16:38:57.613682 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:57.613664 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="0299ee41-1aaf-4008-9a79-7d14f04ed854" containerName="extract" Apr 
16 16:38:57.616344 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:57.616327 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-z4t2v" Apr 16 16:38:57.619360 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:57.619336 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"custom-metrics-autoscaler-operator-dockercfg-m2nl2\"" Apr 16 16:38:57.619496 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:57.619376 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"kube-root-ca.crt\"" Apr 16 16:38:57.619568 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:57.619501 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"kedaorg-certs\"" Apr 16 16:38:57.619756 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:57.619740 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"openshift-service-ca.crt\"" Apr 16 16:38:57.628262 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:57.628220 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-z4t2v"] Apr 16 16:38:57.707240 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:57.707200 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/secret/9de71a13-5928-4dc0-854a-84fd4c1fb50f-certificates\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-z4t2v\" (UID: \"9de71a13-5928-4dc0-854a-84fd4c1fb50f\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-z4t2v" Apr 16 16:38:57.707436 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:57.707268 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mdbj\" (UniqueName: \"kubernetes.io/projected/9de71a13-5928-4dc0-854a-84fd4c1fb50f-kube-api-access-6mdbj\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-z4t2v\" (UID: \"9de71a13-5928-4dc0-854a-84fd4c1fb50f\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-z4t2v" Apr 16 16:38:57.807713 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:57.807669 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/secret/9de71a13-5928-4dc0-854a-84fd4c1fb50f-certificates\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-z4t2v\" (UID: \"9de71a13-5928-4dc0-854a-84fd4c1fb50f\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-z4t2v" Apr 16 16:38:57.807713 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:57.807712 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6mdbj\" (UniqueName: \"kubernetes.io/projected/9de71a13-5928-4dc0-854a-84fd4c1fb50f-kube-api-access-6mdbj\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-z4t2v\" (UID: \"9de71a13-5928-4dc0-854a-84fd4c1fb50f\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-z4t2v" Apr 16 16:38:57.810115 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:57.810085 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/secret/9de71a13-5928-4dc0-854a-84fd4c1fb50f-certificates\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-z4t2v\" (UID: \"9de71a13-5928-4dc0-854a-84fd4c1fb50f\") " 
pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-z4t2v" Apr 16 16:38:57.817404 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:57.817382 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mdbj\" (UniqueName: \"kubernetes.io/projected/9de71a13-5928-4dc0-854a-84fd4c1fb50f-kube-api-access-6mdbj\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-z4t2v\" (UID: \"9de71a13-5928-4dc0-854a-84fd4c1fb50f\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-z4t2v" Apr 16 16:38:57.926646 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:57.926561 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-z4t2v" Apr 16 16:38:58.050598 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:58.050570 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-z4t2v"] Apr 16 16:38:58.053346 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:38:58.053315 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9de71a13_5928_4dc0_854a_84fd4c1fb50f.slice/crio-46f6670dac8a75d863ac6980fbf36cdcf9ab60d011af246d384f1a6e4972ca3b WatchSource:0}: Error finding container 46f6670dac8a75d863ac6980fbf36cdcf9ab60d011af246d384f1a6e4972ca3b: Status 404 returned error can't find the container with id 46f6670dac8a75d863ac6980fbf36cdcf9ab60d011af246d384f1a6e4972ca3b Apr 16 16:38:58.594401 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:38:58.594365 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-z4t2v" event={"ID":"9de71a13-5928-4dc0-854a-84fd4c1fb50f","Type":"ContainerStarted","Data":"46f6670dac8a75d863ac6980fbf36cdcf9ab60d011af246d384f1a6e4972ca3b"} Apr 16 16:39:01.605236 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:01.605198 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-z4t2v" event={"ID":"9de71a13-5928-4dc0-854a-84fd4c1fb50f","Type":"ContainerStarted","Data":"a588c1044e7bce3ef88248e3ef563f701ccc1b5d724e83ea4f0e9de5d41bf9dc"} Apr 16 16:39:01.605641 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:01.605305 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-z4t2v" Apr 16 16:39:01.629230 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:01.629179 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-z4t2v" podStartSLOduration=1.493176503 podStartE2EDuration="4.629165578s" podCreationTimestamp="2026-04-16 16:38:57 +0000 UTC" firstStartedPulling="2026-04-16 16:38:58.054993774 +0000 UTC m=+894.833840643" lastFinishedPulling="2026-04-16 16:39:01.19098285 +0000 UTC m=+897.969829718" observedRunningTime="2026-04-16 16:39:01.627172522 +0000 UTC m=+898.406019424" watchObservedRunningTime="2026-04-16 16:39:01.629165578 +0000 UTC m=+898.408012468" Apr 16 16:39:01.885410 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:01.885323 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/keda-operator-ffbb595cb-kdnsr"] Apr 16 16:39:01.888583 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:01.888559 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-keda/keda-operator-ffbb595cb-kdnsr" Apr 16 16:39:01.891803 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:01.891778 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-operator-certs\"" Apr 16 16:39:01.892401 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:01.892379 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"keda-ocp-cabundle\"" Apr 16 16:39:01.892539 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:01.892402 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-operator-dockercfg-vsfpr\"" Apr 16 16:39:01.902099 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:01.902076 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-operator-ffbb595cb-kdnsr"] Apr 16 16:39:01.943534 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:01.943489 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cabundle0\" (UniqueName: \"kubernetes.io/configmap/5f84ec99-46fd-4625-93d6-f57df9101d43-cabundle0\") pod \"keda-operator-ffbb595cb-kdnsr\" (UID: \"5f84ec99-46fd-4625-93d6-f57df9101d43\") " pod="openshift-keda/keda-operator-ffbb595cb-kdnsr" Apr 16 16:39:01.943729 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:01.943636 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/5f84ec99-46fd-4625-93d6-f57df9101d43-certificates\") pod \"keda-operator-ffbb595cb-kdnsr\" (UID: \"5f84ec99-46fd-4625-93d6-f57df9101d43\") " pod="openshift-keda/keda-operator-ffbb595cb-kdnsr" Apr 16 16:39:01.943729 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:01.943670 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djmxs\" (UniqueName: \"kubernetes.io/projected/5f84ec99-46fd-4625-93d6-f57df9101d43-kube-api-access-djmxs\") pod \"keda-operator-ffbb595cb-kdnsr\" (UID: \"5f84ec99-46fd-4625-93d6-f57df9101d43\") " pod="openshift-keda/keda-operator-ffbb595cb-kdnsr" Apr 16 16:39:02.044696 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:02.044651 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/5f84ec99-46fd-4625-93d6-f57df9101d43-certificates\") pod \"keda-operator-ffbb595cb-kdnsr\" (UID: \"5f84ec99-46fd-4625-93d6-f57df9101d43\") " pod="openshift-keda/keda-operator-ffbb595cb-kdnsr" Apr 16 16:39:02.044870 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:02.044736 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-djmxs\" (UniqueName: \"kubernetes.io/projected/5f84ec99-46fd-4625-93d6-f57df9101d43-kube-api-access-djmxs\") pod \"keda-operator-ffbb595cb-kdnsr\" (UID: \"5f84ec99-46fd-4625-93d6-f57df9101d43\") " pod="openshift-keda/keda-operator-ffbb595cb-kdnsr" Apr 16 16:39:02.044870 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:02.044781 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cabundle0\" (UniqueName: \"kubernetes.io/configmap/5f84ec99-46fd-4625-93d6-f57df9101d43-cabundle0\") pod \"keda-operator-ffbb595cb-kdnsr\" (UID: \"5f84ec99-46fd-4625-93d6-f57df9101d43\") " pod="openshift-keda/keda-operator-ffbb595cb-kdnsr" Apr 16 16:39:02.045011 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:39:02.044957 2572 projected.go:264] Couldn't get 
secret openshift-keda/keda-operator-certs: secret "keda-operator-certs" not found Apr 16 16:39:02.045011 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:39:02.044982 2572 secret.go:281] references non-existent secret key: ca.crt Apr 16 16:39:02.045011 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:39:02.044993 2572 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt Apr 16 16:39:02.045011 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:39:02.045010 2572 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-kdnsr: [secret "keda-operator-certs" not found, references non-existent secret key: ca.crt] Apr 16 16:39:02.045202 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:39:02.045162 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f84ec99-46fd-4625-93d6-f57df9101d43-certificates podName:5f84ec99-46fd-4625-93d6-f57df9101d43 nodeName:}" failed. No retries permitted until 2026-04-16 16:39:02.545139036 +0000 UTC m=+899.323985910 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/5f84ec99-46fd-4625-93d6-f57df9101d43-certificates") pod "keda-operator-ffbb595cb-kdnsr" (UID: "5f84ec99-46fd-4625-93d6-f57df9101d43") : [secret "keda-operator-certs" not found, references non-existent secret key: ca.crt] Apr 16 16:39:02.045665 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:02.045643 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cabundle0\" (UniqueName: \"kubernetes.io/configmap/5f84ec99-46fd-4625-93d6-f57df9101d43-cabundle0\") pod \"keda-operator-ffbb595cb-kdnsr\" (UID: \"5f84ec99-46fd-4625-93d6-f57df9101d43\") " pod="openshift-keda/keda-operator-ffbb595cb-kdnsr" Apr 16 16:39:02.071512 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:02.071486 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-djmxs\" (UniqueName: \"kubernetes.io/projected/5f84ec99-46fd-4625-93d6-f57df9101d43-kube-api-access-djmxs\") pod \"keda-operator-ffbb595cb-kdnsr\" (UID: \"5f84ec99-46fd-4625-93d6-f57df9101d43\") " pod="openshift-keda/keda-operator-ffbb595cb-kdnsr" Apr 16 16:39:02.234741 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:02.234705 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-b2jpg"] Apr 16 16:39:02.243172 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:02.243141 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b2jpg" Apr 16 16:39:02.246225 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:02.246199 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-metrics-apiserver-certs\"" Apr 16 16:39:02.249110 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:02.249072 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-b2jpg"] Apr 16 16:39:02.346539 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:02.346498 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c-certificates\") pod \"keda-metrics-apiserver-7c9f485588-b2jpg\" (UID: \"61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b2jpg" Apr 16 16:39:02.346844 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:02.346822 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-787ct\" (UniqueName: \"kubernetes.io/projected/61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c-kube-api-access-787ct\") pod \"keda-metrics-apiserver-7c9f485588-b2jpg\" (UID: \"61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b2jpg" Apr 16 16:39:02.346967 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:02.346953 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"temp-vol\" (UniqueName: \"kubernetes.io/empty-dir/61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-b2jpg\" (UID: \"61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b2jpg" Apr 16 16:39:02.447993 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:02.447959 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"temp-vol\" (UniqueName: \"kubernetes.io/empty-dir/61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-b2jpg\" (UID: \"61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b2jpg" Apr 16 16:39:02.448186 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:02.448066 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c-certificates\") pod \"keda-metrics-apiserver-7c9f485588-b2jpg\" (UID: \"61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b2jpg" Apr 16 16:39:02.448186 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:02.448116 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-787ct\" (UniqueName: \"kubernetes.io/projected/61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c-kube-api-access-787ct\") pod \"keda-metrics-apiserver-7c9f485588-b2jpg\" (UID: \"61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b2jpg" Apr 16 16:39:02.448313 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:39:02.448226 2572 secret.go:281] references non-existent secret key: tls.crt Apr 16 16:39:02.448313 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:39:02.448270 2572 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt Apr 16 16:39:02.448313 ip-10-0-128-173 
kubenswrapper[2572]: E0416 16:39:02.448296 2572 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-b2jpg: references non-existent secret key: tls.crt Apr 16 16:39:02.448454 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:39:02.448376 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c-certificates podName:61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c nodeName:}" failed. No retries permitted until 2026-04-16 16:39:02.948356456 +0000 UTC m=+899.727203345 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c-certificates") pod "keda-metrics-apiserver-7c9f485588-b2jpg" (UID: "61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c") : references non-existent secret key: tls.crt Apr 16 16:39:02.448454 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:02.448417 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"temp-vol\" (UniqueName: \"kubernetes.io/empty-dir/61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-b2jpg\" (UID: \"61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b2jpg" Apr 16 16:39:02.458360 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:02.458336 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-787ct\" (UniqueName: \"kubernetes.io/projected/61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c-kube-api-access-787ct\") pod \"keda-metrics-apiserver-7c9f485588-b2jpg\" (UID: \"61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b2jpg" Apr 16 16:39:02.486219 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:02.486140 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/keda-admission-cf49989db-s74x8"] Apr 16 16:39:02.489400 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:02.489384 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-keda/keda-admission-cf49989db-s74x8" Apr 16 16:39:02.492200 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:02.492175 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-admission-webhooks-certs\"" Apr 16 16:39:02.505081 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:02.505057 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-admission-cf49989db-s74x8"] Apr 16 16:39:02.549061 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:02.549025 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbjzx\" (UniqueName: \"kubernetes.io/projected/87abade3-8e54-48aa-8a93-e33366334e64-kube-api-access-gbjzx\") pod \"keda-admission-cf49989db-s74x8\" (UID: \"87abade3-8e54-48aa-8a93-e33366334e64\") " pod="openshift-keda/keda-admission-cf49989db-s74x8" Apr 16 16:39:02.549233 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:02.549071 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/87abade3-8e54-48aa-8a93-e33366334e64-certificates\") pod \"keda-admission-cf49989db-s74x8\" (UID: \"87abade3-8e54-48aa-8a93-e33366334e64\") " pod="openshift-keda/keda-admission-cf49989db-s74x8" Apr 16 16:39:02.549233 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:02.549165 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/5f84ec99-46fd-4625-93d6-f57df9101d43-certificates\") pod \"keda-operator-ffbb595cb-kdnsr\" (UID: \"5f84ec99-46fd-4625-93d6-f57df9101d43\") " pod="openshift-keda/keda-operator-ffbb595cb-kdnsr" Apr 16 16:39:02.549333 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:39:02.549286 2572 secret.go:281] references non-existent secret key: ca.crt Apr 16 16:39:02.549333 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:39:02.549299 2572 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt Apr 16 16:39:02.549333 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:39:02.549307 2572 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-kdnsr: references non-existent secret key: ca.crt Apr 16 16:39:02.549424 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:39:02.549356 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f84ec99-46fd-4625-93d6-f57df9101d43-certificates podName:5f84ec99-46fd-4625-93d6-f57df9101d43 nodeName:}" failed. No retries permitted until 2026-04-16 16:39:03.549337421 +0000 UTC m=+900.328184295 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/5f84ec99-46fd-4625-93d6-f57df9101d43-certificates") pod "keda-operator-ffbb595cb-kdnsr" (UID: "5f84ec99-46fd-4625-93d6-f57df9101d43") : references non-existent secret key: ca.crt Apr 16 16:39:02.650105 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:02.650070 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gbjzx\" (UniqueName: \"kubernetes.io/projected/87abade3-8e54-48aa-8a93-e33366334e64-kube-api-access-gbjzx\") pod \"keda-admission-cf49989db-s74x8\" (UID: \"87abade3-8e54-48aa-8a93-e33366334e64\") " pod="openshift-keda/keda-admission-cf49989db-s74x8" Apr 16 16:39:02.650105 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:02.650110 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/87abade3-8e54-48aa-8a93-e33366334e64-certificates\") pod \"keda-admission-cf49989db-s74x8\" (UID: \"87abade3-8e54-48aa-8a93-e33366334e64\") " pod="openshift-keda/keda-admission-cf49989db-s74x8" Apr 16 16:39:02.650534 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:39:02.650242 2572 projected.go:264] Couldn't get secret openshift-keda/keda-admission-webhooks-certs: secret "keda-admission-webhooks-certs" not found Apr 16 16:39:02.650534 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:39:02.650278 2572 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-admission-cf49989db-s74x8: secret "keda-admission-webhooks-certs" not found Apr 16 16:39:02.650534 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:39:02.650334 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87abade3-8e54-48aa-8a93-e33366334e64-certificates podName:87abade3-8e54-48aa-8a93-e33366334e64 nodeName:}" failed. No retries permitted until 2026-04-16 16:39:03.150315802 +0000 UTC m=+899.929162672 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/87abade3-8e54-48aa-8a93-e33366334e64-certificates") pod "keda-admission-cf49989db-s74x8" (UID: "87abade3-8e54-48aa-8a93-e33366334e64") : secret "keda-admission-webhooks-certs" not found Apr 16 16:39:02.659503 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:02.659476 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbjzx\" (UniqueName: \"kubernetes.io/projected/87abade3-8e54-48aa-8a93-e33366334e64-kube-api-access-gbjzx\") pod \"keda-admission-cf49989db-s74x8\" (UID: \"87abade3-8e54-48aa-8a93-e33366334e64\") " pod="openshift-keda/keda-admission-cf49989db-s74x8" Apr 16 16:39:02.953605 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:02.953565 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c-certificates\") pod \"keda-metrics-apiserver-7c9f485588-b2jpg\" (UID: \"61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b2jpg" Apr 16 16:39:02.953759 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:39:02.953714 2572 secret.go:281] references non-existent secret key: tls.crt Apr 16 16:39:02.953759 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:39:02.953734 2572 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt Apr 16 16:39:02.953759 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:39:02.953752 2572 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-b2jpg: references non-existent secret key: tls.crt Apr 16 16:39:02.953879 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:39:02.953806 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c-certificates podName:61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c nodeName:}" failed. No retries permitted until 2026-04-16 16:39:03.953791508 +0000 UTC m=+900.732638376 (durationBeforeRetry 1s). 
Apr 16 16:39:03.155037 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:03.154994 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/87abade3-8e54-48aa-8a93-e33366334e64-certificates\") pod \"keda-admission-cf49989db-s74x8\" (UID: \"87abade3-8e54-48aa-8a93-e33366334e64\") " pod="openshift-keda/keda-admission-cf49989db-s74x8"
Apr 16 16:39:03.155209 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:39:03.155145 2572 projected.go:264] Couldn't get secret openshift-keda/keda-admission-webhooks-certs: secret "keda-admission-webhooks-certs" not found
Apr 16 16:39:03.155209 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:39:03.155168 2572 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-admission-cf49989db-s74x8: secret "keda-admission-webhooks-certs" not found
Apr 16 16:39:03.155310 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:39:03.155226 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87abade3-8e54-48aa-8a93-e33366334e64-certificates podName:87abade3-8e54-48aa-8a93-e33366334e64 nodeName:}" failed. No retries permitted until 2026-04-16 16:39:04.155209341 +0000 UTC m=+900.934056215 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/87abade3-8e54-48aa-8a93-e33366334e64-certificates") pod "keda-admission-cf49989db-s74x8" (UID: "87abade3-8e54-48aa-8a93-e33366334e64") : secret "keda-admission-webhooks-certs" not found
Apr 16 16:39:03.558826 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:03.558790 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/5f84ec99-46fd-4625-93d6-f57df9101d43-certificates\") pod \"keda-operator-ffbb595cb-kdnsr\" (UID: \"5f84ec99-46fd-4625-93d6-f57df9101d43\") " pod="openshift-keda/keda-operator-ffbb595cb-kdnsr"
Apr 16 16:39:03.559002 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:39:03.558916 2572 secret.go:281] references non-existent secret key: ca.crt
Apr 16 16:39:03.559002 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:39:03.558929 2572 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt
Apr 16 16:39:03.559002 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:39:03.558938 2572 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-kdnsr: references non-existent secret key: ca.crt
Apr 16 16:39:03.559002 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:39:03.558986 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f84ec99-46fd-4625-93d6-f57df9101d43-certificates podName:5f84ec99-46fd-4625-93d6-f57df9101d43 nodeName:}" failed. No retries permitted until 2026-04-16 16:39:05.558973646 +0000 UTC m=+902.337820515 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/5f84ec99-46fd-4625-93d6-f57df9101d43-certificates") pod "keda-operator-ffbb595cb-kdnsr" (UID: "5f84ec99-46fd-4625-93d6-f57df9101d43") : references non-existent secret key: ca.crt
Apr 16 16:39:03.962362 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:03.962329 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c-certificates\") pod \"keda-metrics-apiserver-7c9f485588-b2jpg\" (UID: \"61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b2jpg"
Apr 16 16:39:03.962770 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:39:03.962473 2572 secret.go:281] references non-existent secret key: tls.crt
Apr 16 16:39:03.962770 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:39:03.962487 2572 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt
Apr 16 16:39:03.962770 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:39:03.962504 2572 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-b2jpg: references non-existent secret key: tls.crt
Apr 16 16:39:03.962770 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:39:03.962556 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c-certificates podName:61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c nodeName:}" failed. No retries permitted until 2026-04-16 16:39:05.96254194 +0000 UTC m=+902.741388814 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c-certificates") pod "keda-metrics-apiserver-7c9f485588-b2jpg" (UID: "61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c") : references non-existent secret key: tls.crt
Apr 16 16:39:04.164414 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:04.164371 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/87abade3-8e54-48aa-8a93-e33366334e64-certificates\") pod \"keda-admission-cf49989db-s74x8\" (UID: \"87abade3-8e54-48aa-8a93-e33366334e64\") " pod="openshift-keda/keda-admission-cf49989db-s74x8"
Apr 16 16:39:04.166863 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:04.166837 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/87abade3-8e54-48aa-8a93-e33366334e64-certificates\") pod \"keda-admission-cf49989db-s74x8\" (UID: \"87abade3-8e54-48aa-8a93-e33366334e64\") " pod="openshift-keda/keda-admission-cf49989db-s74x8"
Apr 16 16:39:04.302062 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:04.301982 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-operator-dockercfg-vsfpr\""
Apr 16 16:39:04.309657 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:04.309632 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-admission-cf49989db-s74x8"
Apr 16 16:39:04.440762 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:04.440726 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-admission-cf49989db-s74x8"]
Apr 16 16:39:04.444674 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:39:04.444633 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87abade3_8e54_48aa_8a93_e33366334e64.slice/crio-842546ae80e69201ef6d421e343b6eb592c85c7ec0a93ac453d76bc7e3f55cf4 WatchSource:0}: Error finding container 842546ae80e69201ef6d421e343b6eb592c85c7ec0a93ac453d76bc7e3f55cf4: Status 404 returned error can't find the container with id 842546ae80e69201ef6d421e343b6eb592c85c7ec0a93ac453d76bc7e3f55cf4
Apr 16 16:39:04.618927 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:04.618837 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-admission-cf49989db-s74x8" event={"ID":"87abade3-8e54-48aa-8a93-e33366334e64","Type":"ContainerStarted","Data":"842546ae80e69201ef6d421e343b6eb592c85c7ec0a93ac453d76bc7e3f55cf4"}
Apr 16 16:39:05.577934 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:05.577877 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/5f84ec99-46fd-4625-93d6-f57df9101d43-certificates\") pod \"keda-operator-ffbb595cb-kdnsr\" (UID: \"5f84ec99-46fd-4625-93d6-f57df9101d43\") " pod="openshift-keda/keda-operator-ffbb595cb-kdnsr"
Apr 16 16:39:05.580918 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:05.580885 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/5f84ec99-46fd-4625-93d6-f57df9101d43-certificates\") pod \"keda-operator-ffbb595cb-kdnsr\" (UID: \"5f84ec99-46fd-4625-93d6-f57df9101d43\") " pod="openshift-keda/keda-operator-ffbb595cb-kdnsr"
Apr 16 16:39:05.798666 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:05.798638 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-operator-ffbb595cb-kdnsr"
Apr 16 16:39:05.922918 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:05.922895 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-operator-ffbb595cb-kdnsr"]
Apr 16 16:39:05.925291 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:39:05.925263 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f84ec99_46fd_4625_93d6_f57df9101d43.slice/crio-f6200b39d157c29effffc0ca169ba0012be468937f0d128c33b2ff2c41a79af9 WatchSource:0}: Error finding container f6200b39d157c29effffc0ca169ba0012be468937f0d128c33b2ff2c41a79af9: Status 404 returned error can't find the container with id f6200b39d157c29effffc0ca169ba0012be468937f0d128c33b2ff2c41a79af9
Apr 16 16:39:05.981746 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:05.981702 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c-certificates\") pod \"keda-metrics-apiserver-7c9f485588-b2jpg\" (UID: \"61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b2jpg"
Apr 16 16:39:05.984214 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:05.984191 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c-certificates\") pod \"keda-metrics-apiserver-7c9f485588-b2jpg\" (UID: \"61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b2jpg"
Apr 16 16:39:06.157413 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:06.157318 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b2jpg"
Need to start a new one" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b2jpg" Apr 16 16:39:06.279194 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:06.279171 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-b2jpg"] Apr 16 16:39:06.280914 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:39:06.280889 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61b2ea7b_e3ee_4cd3_9d27_bfd7b024f79c.slice/crio-bc404ff1b07e2034149a17faec36c2d2bfbfeb4d1475e43cef38df048c8d1938 WatchSource:0}: Error finding container bc404ff1b07e2034149a17faec36c2d2bfbfeb4d1475e43cef38df048c8d1938: Status 404 returned error can't find the container with id bc404ff1b07e2034149a17faec36c2d2bfbfeb4d1475e43cef38df048c8d1938 Apr 16 16:39:06.627134 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:06.627083 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-admission-cf49989db-s74x8" event={"ID":"87abade3-8e54-48aa-8a93-e33366334e64","Type":"ContainerStarted","Data":"c5a5cb02b5c8f029f0daa2e99c7e79dbca0830d8c982106476ee83aa3d207333"} Apr 16 16:39:06.627732 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:06.627175 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-admission-cf49989db-s74x8" Apr 16 16:39:06.628450 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:06.628421 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-operator-ffbb595cb-kdnsr" event={"ID":"5f84ec99-46fd-4625-93d6-f57df9101d43","Type":"ContainerStarted","Data":"f6200b39d157c29effffc0ca169ba0012be468937f0d128c33b2ff2c41a79af9"} Apr 16 16:39:06.629556 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:06.629535 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b2jpg" event={"ID":"61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c","Type":"ContainerStarted","Data":"bc404ff1b07e2034149a17faec36c2d2bfbfeb4d1475e43cef38df048c8d1938"} Apr 16 16:39:06.646683 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:06.646617 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/keda-admission-cf49989db-s74x8" podStartSLOduration=3.397310889 podStartE2EDuration="4.646599108s" podCreationTimestamp="2026-04-16 16:39:02 +0000 UTC" firstStartedPulling="2026-04-16 16:39:04.447486425 +0000 UTC m=+901.226333294" lastFinishedPulling="2026-04-16 16:39:05.696774645 +0000 UTC m=+902.475621513" observedRunningTime="2026-04-16 16:39:06.644939899 +0000 UTC m=+903.423786799" watchObservedRunningTime="2026-04-16 16:39:06.646599108 +0000 UTC m=+903.425445999" Apr 16 16:39:10.644724 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:10.644692 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-operator-ffbb595cb-kdnsr" event={"ID":"5f84ec99-46fd-4625-93d6-f57df9101d43","Type":"ContainerStarted","Data":"64bd79fdd19f8228a74ed336e1d46a90704340399502662de7e05bf69ded095c"} Apr 16 16:39:10.645178 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:10.644754 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-operator-ffbb595cb-kdnsr" Apr 16 16:39:10.646101 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:10.646077 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b2jpg" 
event={"ID":"61b2ea7b-e3ee-4cd3-9d27-bfd7b024f79c","Type":"ContainerStarted","Data":"eff656559068d61c31cbb5c0b1578813f649c8a5e7e5e5bdacd54c974767f2cb"} Apr 16 16:39:10.646217 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:10.646198 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b2jpg" Apr 16 16:39:10.676777 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:10.676720 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/keda-operator-ffbb595cb-kdnsr" podStartSLOduration=5.87315281 podStartE2EDuration="9.676706211s" podCreationTimestamp="2026-04-16 16:39:01 +0000 UTC" firstStartedPulling="2026-04-16 16:39:05.92697639 +0000 UTC m=+902.705823278" lastFinishedPulling="2026-04-16 16:39:09.73052981 +0000 UTC m=+906.509376679" observedRunningTime="2026-04-16 16:39:10.675669656 +0000 UTC m=+907.454516567" watchObservedRunningTime="2026-04-16 16:39:10.676706211 +0000 UTC m=+907.455553101" Apr 16 16:39:10.705830 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:10.705779 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b2jpg" podStartSLOduration=5.2572253700000005 podStartE2EDuration="8.705767183s" podCreationTimestamp="2026-04-16 16:39:02 +0000 UTC" firstStartedPulling="2026-04-16 16:39:06.282234877 +0000 UTC m=+903.061081748" lastFinishedPulling="2026-04-16 16:39:09.730776687 +0000 UTC m=+906.509623561" observedRunningTime="2026-04-16 16:39:10.703867816 +0000 UTC m=+907.482714704" watchObservedRunningTime="2026-04-16 16:39:10.705767183 +0000 UTC m=+907.484614074" Apr 16 16:39:21.654291 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:21.654239 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b2jpg" Apr 16 16:39:22.610566 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:22.610535 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-z4t2v" Apr 16 16:39:27.635715 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:27.635683 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-admission-cf49989db-s74x8" Apr 16 16:39:31.651941 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:39:31.651906 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-operator-ffbb595cb-kdnsr" Apr 16 16:40:10.912483 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:10.912451 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/kserve-controller-manager-55c74f6fbc-kkzq2"] Apr 16 16:40:10.915771 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:10.915752 2572 util.go:30] "No sandbox for pod can be found. 
Apr 16 16:40:10.918743 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:10.918723 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"kserve-webhook-server-cert\""
Apr 16 16:40:10.918829 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:10.918730 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"kube-root-ca.crt\""
Apr 16 16:40:10.919458 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:10.919439 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"openshift-service-ca.crt\""
Apr 16 16:40:10.919922 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:10.919903 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"kserve-controller-manager-dockercfg-49np2\""
Apr 16 16:40:10.929527 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:10.929507 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/llmisvc-controller-manager-68cc5db7c4-kltmk"]
Apr 16 16:40:10.932493 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:10.932474 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-55c74f6fbc-kkzq2"]
Apr 16 16:40:10.932599 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:10.932585 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/llmisvc-controller-manager-68cc5db7c4-kltmk"
Apr 16 16:40:10.938037 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:10.938015 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"llmisvc-webhook-server-cert\""
Apr 16 16:40:10.946611 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:10.946587 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/llmisvc-controller-manager-68cc5db7c4-kltmk"]
Apr 16 16:40:10.954433 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:10.954412 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"llmisvc-controller-manager-dockercfg-226r8\""
Apr 16 16:40:11.026121 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:11.026090 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd521945-1fbc-4b2c-917c-1b4c1e668517-cert\") pod \"llmisvc-controller-manager-68cc5db7c4-kltmk\" (UID: \"cd521945-1fbc-4b2c-917c-1b4c1e668517\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-kltmk"
Apr 16 16:40:11.026319 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:11.026165 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a2dfde54-bffc-4885-a54a-49aeba809a14-cert\") pod \"kserve-controller-manager-55c74f6fbc-kkzq2\" (UID: \"a2dfde54-bffc-4885-a54a-49aeba809a14\") " pod="kserve/kserve-controller-manager-55c74f6fbc-kkzq2"
Apr 16 16:40:11.026414 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:11.026387 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq4dh\" (UniqueName: \"kubernetes.io/projected/a2dfde54-bffc-4885-a54a-49aeba809a14-kube-api-access-sq4dh\") pod \"kserve-controller-manager-55c74f6fbc-kkzq2\" (UID: \"a2dfde54-bffc-4885-a54a-49aeba809a14\") " pod="kserve/kserve-controller-manager-55c74f6fbc-kkzq2"
Apr 16 16:40:11.031282 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:11.027093 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvn8q\" (UniqueName: \"kubernetes.io/projected/cd521945-1fbc-4b2c-917c-1b4c1e668517-kube-api-access-dvn8q\") pod \"llmisvc-controller-manager-68cc5db7c4-kltmk\" (UID: \"cd521945-1fbc-4b2c-917c-1b4c1e668517\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-kltmk"
Apr 16 16:40:11.128381 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:11.128341 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a2dfde54-bffc-4885-a54a-49aeba809a14-cert\") pod \"kserve-controller-manager-55c74f6fbc-kkzq2\" (UID: \"a2dfde54-bffc-4885-a54a-49aeba809a14\") " pod="kserve/kserve-controller-manager-55c74f6fbc-kkzq2"
Apr 16 16:40:11.128556 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:11.128399 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sq4dh\" (UniqueName: \"kubernetes.io/projected/a2dfde54-bffc-4885-a54a-49aeba809a14-kube-api-access-sq4dh\") pod \"kserve-controller-manager-55c74f6fbc-kkzq2\" (UID: \"a2dfde54-bffc-4885-a54a-49aeba809a14\") " pod="kserve/kserve-controller-manager-55c74f6fbc-kkzq2"
Apr 16 16:40:11.128556 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:11.128417 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dvn8q\" (UniqueName: \"kubernetes.io/projected/cd521945-1fbc-4b2c-917c-1b4c1e668517-kube-api-access-dvn8q\") pod \"llmisvc-controller-manager-68cc5db7c4-kltmk\" (UID: \"cd521945-1fbc-4b2c-917c-1b4c1e668517\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-kltmk"
Apr 16 16:40:11.128556 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:11.128467 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd521945-1fbc-4b2c-917c-1b4c1e668517-cert\") pod \"llmisvc-controller-manager-68cc5db7c4-kltmk\" (UID: \"cd521945-1fbc-4b2c-917c-1b4c1e668517\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-kltmk"
Apr 16 16:40:11.130824 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:11.130798 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a2dfde54-bffc-4885-a54a-49aeba809a14-cert\") pod \"kserve-controller-manager-55c74f6fbc-kkzq2\" (UID: \"a2dfde54-bffc-4885-a54a-49aeba809a14\") " pod="kserve/kserve-controller-manager-55c74f6fbc-kkzq2"
Apr 16 16:40:11.130942 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:11.130859 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd521945-1fbc-4b2c-917c-1b4c1e668517-cert\") pod \"llmisvc-controller-manager-68cc5db7c4-kltmk\" (UID: \"cd521945-1fbc-4b2c-917c-1b4c1e668517\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-kltmk"
Apr 16 16:40:11.139758 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:11.139725 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvn8q\" (UniqueName: \"kubernetes.io/projected/cd521945-1fbc-4b2c-917c-1b4c1e668517-kube-api-access-dvn8q\") pod \"llmisvc-controller-manager-68cc5db7c4-kltmk\" (UID: \"cd521945-1fbc-4b2c-917c-1b4c1e668517\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-kltmk"
Apr 16 16:40:11.140163 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:11.140142 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq4dh\" (UniqueName: \"kubernetes.io/projected/a2dfde54-bffc-4885-a54a-49aeba809a14-kube-api-access-sq4dh\") pod \"kserve-controller-manager-55c74f6fbc-kkzq2\" (UID: \"a2dfde54-bffc-4885-a54a-49aeba809a14\") " pod="kserve/kserve-controller-manager-55c74f6fbc-kkzq2"
Apr 16 16:40:11.225872 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:11.225844 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-55c74f6fbc-kkzq2"
Apr 16 16:40:11.241881 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:11.241842 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/llmisvc-controller-manager-68cc5db7c4-kltmk"
Apr 16 16:40:11.354700 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:11.354672 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-55c74f6fbc-kkzq2"]
Apr 16 16:40:11.357794 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:40:11.357765 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2dfde54_bffc_4885_a54a_49aeba809a14.slice/crio-de6795756c2edc99f9bb1d4e516a08a5df581ac69920967d0f2f02ad712dcd62 WatchSource:0}: Error finding container de6795756c2edc99f9bb1d4e516a08a5df581ac69920967d0f2f02ad712dcd62: Status 404 returned error can't find the container with id de6795756c2edc99f9bb1d4e516a08a5df581ac69920967d0f2f02ad712dcd62
Apr 16 16:40:11.381008 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:11.380985 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/llmisvc-controller-manager-68cc5db7c4-kltmk"]
Apr 16 16:40:11.383231 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:40:11.383207 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podcd521945_1fbc_4b2c_917c_1b4c1e668517.slice/crio-21dca8be4ce475b1d081436e5df66d9aed4ada1f252e640f386e2daff083002c WatchSource:0}: Error finding container 21dca8be4ce475b1d081436e5df66d9aed4ada1f252e640f386e2daff083002c: Status 404 returned error can't find the container with id 21dca8be4ce475b1d081436e5df66d9aed4ada1f252e640f386e2daff083002c
Apr 16 16:40:11.836600 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:11.836566 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-55c74f6fbc-kkzq2" event={"ID":"a2dfde54-bffc-4885-a54a-49aeba809a14","Type":"ContainerStarted","Data":"de6795756c2edc99f9bb1d4e516a08a5df581ac69920967d0f2f02ad712dcd62"}
Apr 16 16:40:11.837515 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:11.837481 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/llmisvc-controller-manager-68cc5db7c4-kltmk" event={"ID":"cd521945-1fbc-4b2c-917c-1b4c1e668517","Type":"ContainerStarted","Data":"21dca8be4ce475b1d081436e5df66d9aed4ada1f252e640f386e2daff083002c"}
Apr 16 16:40:14.850328 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:14.850287 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/llmisvc-controller-manager-68cc5db7c4-kltmk" event={"ID":"cd521945-1fbc-4b2c-917c-1b4c1e668517","Type":"ContainerStarted","Data":"9c0bfda40077b1931528dd10ce06cca323839ef476823a5135305788a5bed20b"}
Apr 16 16:40:14.850796 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:14.850438 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/llmisvc-controller-manager-68cc5db7c4-kltmk"
Apr 16 16:40:14.851768 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:14.851746 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-55c74f6fbc-kkzq2" event={"ID":"a2dfde54-bffc-4885-a54a-49aeba809a14","Type":"ContainerStarted","Data":"30603fe263868e41359f4ec92eedd2f8dd4ed1909cd0872ca1752e8543296b85"}
Apr 16 16:40:14.851882 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:14.851865 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/kserve-controller-manager-55c74f6fbc-kkzq2"
Apr 16 16:40:14.868017 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:14.867839 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/llmisvc-controller-manager-68cc5db7c4-kltmk" podStartSLOduration=1.5019152569999998 podStartE2EDuration="4.867820971s" podCreationTimestamp="2026-04-16 16:40:10 +0000 UTC" firstStartedPulling="2026-04-16 16:40:11.384528588 +0000 UTC m=+968.163375461" lastFinishedPulling="2026-04-16 16:40:14.750434292 +0000 UTC m=+971.529281175" observedRunningTime="2026-04-16 16:40:14.867509063 +0000 UTC m=+971.646355954" watchObservedRunningTime="2026-04-16 16:40:14.867820971 +0000 UTC m=+971.646667863"
Apr 16 16:40:14.886164 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:14.886112 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/kserve-controller-manager-55c74f6fbc-kkzq2" podStartSLOduration=1.488049965 podStartE2EDuration="4.886098049s" podCreationTimestamp="2026-04-16 16:40:10 +0000 UTC" firstStartedPulling="2026-04-16 16:40:11.35905334 +0000 UTC m=+968.137900214" lastFinishedPulling="2026-04-16 16:40:14.757101426 +0000 UTC m=+971.535948298" observedRunningTime="2026-04-16 16:40:14.884364211 +0000 UTC m=+971.663211100" watchObservedRunningTime="2026-04-16 16:40:14.886098049 +0000 UTC m=+971.664944976"
Apr 16 16:40:45.857025 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:45.856994 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/llmisvc-controller-manager-68cc5db7c4-kltmk"
Apr 16 16:40:45.860031 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:45.860009 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/kserve-controller-manager-55c74f6fbc-kkzq2"
Apr 16 16:40:47.233002 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:47.232957 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve/kserve-controller-manager-55c74f6fbc-kkzq2"]
Apr 16 16:40:47.233511 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:47.233223 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve/kserve-controller-manager-55c74f6fbc-kkzq2" podUID="a2dfde54-bffc-4885-a54a-49aeba809a14" containerName="manager" containerID="cri-o://30603fe263868e41359f4ec92eedd2f8dd4ed1909cd0872ca1752e8543296b85" gracePeriod=10
Apr 16 16:40:47.261844 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:47.261812 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/kserve-controller-manager-55c74f6fbc-k7wrx"]
Apr 16 16:40:47.265442 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:47.265425 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-55c74f6fbc-k7wrx"
Apr 16 16:40:47.277219 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:47.277192 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-55c74f6fbc-k7wrx"]
Apr 16 16:40:47.344044 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:47.344011 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f20de96-0f1b-4f35-ac71-b9339e308f30-cert\") pod \"kserve-controller-manager-55c74f6fbc-k7wrx\" (UID: \"2f20de96-0f1b-4f35-ac71-b9339e308f30\") " pod="kserve/kserve-controller-manager-55c74f6fbc-k7wrx"
Apr 16 16:40:47.344193 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:47.344066 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kx8m\" (UniqueName: \"kubernetes.io/projected/2f20de96-0f1b-4f35-ac71-b9339e308f30-kube-api-access-4kx8m\") pod \"kserve-controller-manager-55c74f6fbc-k7wrx\" (UID: \"2f20de96-0f1b-4f35-ac71-b9339e308f30\") " pod="kserve/kserve-controller-manager-55c74f6fbc-k7wrx"
Apr 16 16:40:47.445059 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:47.445019 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f20de96-0f1b-4f35-ac71-b9339e308f30-cert\") pod \"kserve-controller-manager-55c74f6fbc-k7wrx\" (UID: \"2f20de96-0f1b-4f35-ac71-b9339e308f30\") " pod="kserve/kserve-controller-manager-55c74f6fbc-k7wrx"
Apr 16 16:40:47.445232 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:47.445067 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4kx8m\" (UniqueName: \"kubernetes.io/projected/2f20de96-0f1b-4f35-ac71-b9339e308f30-kube-api-access-4kx8m\") pod \"kserve-controller-manager-55c74f6fbc-k7wrx\" (UID: \"2f20de96-0f1b-4f35-ac71-b9339e308f30\") " pod="kserve/kserve-controller-manager-55c74f6fbc-k7wrx"
Apr 16 16:40:47.447414 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:47.447381 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f20de96-0f1b-4f35-ac71-b9339e308f30-cert\") pod \"kserve-controller-manager-55c74f6fbc-k7wrx\" (UID: \"2f20de96-0f1b-4f35-ac71-b9339e308f30\") " pod="kserve/kserve-controller-manager-55c74f6fbc-k7wrx"
Apr 16 16:40:47.455344 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:47.455315 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kx8m\" (UniqueName: \"kubernetes.io/projected/2f20de96-0f1b-4f35-ac71-b9339e308f30-kube-api-access-4kx8m\") pod \"kserve-controller-manager-55c74f6fbc-k7wrx\" (UID: \"2f20de96-0f1b-4f35-ac71-b9339e308f30\") " pod="kserve/kserve-controller-manager-55c74f6fbc-k7wrx"
Apr 16 16:40:47.473204 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:47.473182 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-55c74f6fbc-kkzq2"
Apr 16 16:40:47.546212 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:47.546125 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a2dfde54-bffc-4885-a54a-49aeba809a14-cert\") pod \"a2dfde54-bffc-4885-a54a-49aeba809a14\" (UID: \"a2dfde54-bffc-4885-a54a-49aeba809a14\") "
Apr 16 16:40:47.546212 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:47.546181 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq4dh\" (UniqueName: \"kubernetes.io/projected/a2dfde54-bffc-4885-a54a-49aeba809a14-kube-api-access-sq4dh\") pod \"a2dfde54-bffc-4885-a54a-49aeba809a14\" (UID: \"a2dfde54-bffc-4885-a54a-49aeba809a14\") "
Apr 16 16:40:47.548325 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:47.548296 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2dfde54-bffc-4885-a54a-49aeba809a14-cert" (OuterVolumeSpecName: "cert") pod "a2dfde54-bffc-4885-a54a-49aeba809a14" (UID: "a2dfde54-bffc-4885-a54a-49aeba809a14"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 16 16:40:47.548325 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:47.548313 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2dfde54-bffc-4885-a54a-49aeba809a14-kube-api-access-sq4dh" (OuterVolumeSpecName: "kube-api-access-sq4dh") pod "a2dfde54-bffc-4885-a54a-49aeba809a14" (UID: "a2dfde54-bffc-4885-a54a-49aeba809a14"). InnerVolumeSpecName "kube-api-access-sq4dh". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 16 16:40:47.624298 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:47.624234 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-55c74f6fbc-k7wrx"
Apr 16 16:40:47.647221 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:47.647187 2572 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a2dfde54-bffc-4885-a54a-49aeba809a14-cert\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\""
Apr 16 16:40:47.647221 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:47.647224 2572 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sq4dh\" (UniqueName: \"kubernetes.io/projected/a2dfde54-bffc-4885-a54a-49aeba809a14-kube-api-access-sq4dh\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\""
Apr 16 16:40:47.746888 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:47.746860 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-55c74f6fbc-k7wrx"]
Apr 16 16:40:47.749043 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:40:47.749014 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f20de96_0f1b_4f35_ac71_b9339e308f30.slice/crio-ab10613477c83163ca0f3f36ffe68dd140e187e1ce6bc1223c346a022660dfcd WatchSource:0}: Error finding container ab10613477c83163ca0f3f36ffe68dd140e187e1ce6bc1223c346a022660dfcd: Status 404 returned error can't find the container with id ab10613477c83163ca0f3f36ffe68dd140e187e1ce6bc1223c346a022660dfcd
Apr 16 16:40:47.963116 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:47.963075 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-55c74f6fbc-k7wrx" event={"ID":"2f20de96-0f1b-4f35-ac71-b9339e308f30","Type":"ContainerStarted","Data":"ab10613477c83163ca0f3f36ffe68dd140e187e1ce6bc1223c346a022660dfcd"}
Apr 16 16:40:47.964125 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:47.964096 2572 generic.go:358] "Generic (PLEG): container finished" podID="a2dfde54-bffc-4885-a54a-49aeba809a14" containerID="30603fe263868e41359f4ec92eedd2f8dd4ed1909cd0872ca1752e8543296b85" exitCode=0
Apr 16 16:40:47.964233 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:47.964132 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-55c74f6fbc-kkzq2" event={"ID":"a2dfde54-bffc-4885-a54a-49aeba809a14","Type":"ContainerDied","Data":"30603fe263868e41359f4ec92eedd2f8dd4ed1909cd0872ca1752e8543296b85"}
Apr 16 16:40:47.964233 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:47.964152 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-55c74f6fbc-kkzq2" event={"ID":"a2dfde54-bffc-4885-a54a-49aeba809a14","Type":"ContainerDied","Data":"de6795756c2edc99f9bb1d4e516a08a5df581ac69920967d0f2f02ad712dcd62"}
Apr 16 16:40:47.964233 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:47.964165 2572 scope.go:117] "RemoveContainer" containerID="30603fe263868e41359f4ec92eedd2f8dd4ed1909cd0872ca1752e8543296b85"
Apr 16 16:40:47.964233 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:47.964165 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-55c74f6fbc-kkzq2"
Apr 16 16:40:47.971950 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:47.971933 2572 scope.go:117] "RemoveContainer" containerID="30603fe263868e41359f4ec92eedd2f8dd4ed1909cd0872ca1752e8543296b85"
Apr 16 16:40:47.972203 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:40:47.972184 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30603fe263868e41359f4ec92eedd2f8dd4ed1909cd0872ca1752e8543296b85\": container with ID starting with 30603fe263868e41359f4ec92eedd2f8dd4ed1909cd0872ca1752e8543296b85 not found: ID does not exist" containerID="30603fe263868e41359f4ec92eedd2f8dd4ed1909cd0872ca1752e8543296b85"
Apr 16 16:40:47.972282 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:47.972213 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30603fe263868e41359f4ec92eedd2f8dd4ed1909cd0872ca1752e8543296b85"} err="failed to get container status \"30603fe263868e41359f4ec92eedd2f8dd4ed1909cd0872ca1752e8543296b85\": rpc error: code = NotFound desc = could not find container \"30603fe263868e41359f4ec92eedd2f8dd4ed1909cd0872ca1752e8543296b85\": container with ID starting with 30603fe263868e41359f4ec92eedd2f8dd4ed1909cd0872ca1752e8543296b85 not found: ID does not exist"
Apr 16 16:40:47.989756 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:47.989714 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve/kserve-controller-manager-55c74f6fbc-kkzq2"]
Apr 16 16:40:47.989928 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:47.989778 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve/kserve-controller-manager-55c74f6fbc-kkzq2"]
Apr 16 16:40:48.969049 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:48.969011 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-55c74f6fbc-k7wrx" event={"ID":"2f20de96-0f1b-4f35-ac71-b9339e308f30","Type":"ContainerStarted","Data":"a0b45322419f82c1693437591b6c079d2d1404185db0d3a46bed9f24aea6e5a3"}
Apr 16 16:40:48.969519 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:48.969123 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/kserve-controller-manager-55c74f6fbc-k7wrx"
Apr 16 16:40:48.987055 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:48.987001 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/kserve-controller-manager-55c74f6fbc-k7wrx" podStartSLOduration=1.602298429 podStartE2EDuration="1.986985006s" podCreationTimestamp="2026-04-16 16:40:47 +0000 UTC" firstStartedPulling="2026-04-16 16:40:47.750197944 +0000 UTC m=+1004.529044813" lastFinishedPulling="2026-04-16 16:40:48.134884517 +0000 UTC m=+1004.913731390" observedRunningTime="2026-04-16 16:40:48.986463911 +0000 UTC m=+1005.765310803" watchObservedRunningTime="2026-04-16 16:40:48.986985006 +0000 UTC m=+1005.765831897"
Apr 16 16:40:49.909830 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:40:49.909795 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2dfde54-bffc-4885-a54a-49aeba809a14" path="/var/lib/kubelet/pods/a2dfde54-bffc-4885-a54a-49aeba809a14/volumes"
Apr 16 16:41:19.978789 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:41:19.978760 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/kserve-controller-manager-55c74f6fbc-k7wrx"
Apr 16 16:41:56.689358 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:41:56.689277 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-9c599-predictor-546d788cd9-7c769"]
Apr 16 16:41:56.689829 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:41:56.689645 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a2dfde54-bffc-4885-a54a-49aeba809a14" containerName="manager"
Apr 16 16:41:56.689829 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:41:56.689657 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2dfde54-bffc-4885-a54a-49aeba809a14" containerName="manager"
Apr 16 16:41:56.689829 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:41:56.689721 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="a2dfde54-bffc-4885-a54a-49aeba809a14" containerName="manager"
Apr 16 16:41:56.692520 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:41:56.692500 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-9c599-predictor-546d788cd9-7c769"
Apr 16 16:41:56.696723 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:41:56.696698 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-zzfkf\""
Apr 16 16:41:56.703274 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:41:56.703239 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-9c599-predictor-546d788cd9-7c769"
Apr 16 16:41:56.716222 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:41:56.716178 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-9c599-predictor-546d788cd9-7c769"]
Apr 16 16:41:56.847809 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:41:56.847767 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-9c599-predictor-546d788cd9-7c769"]
Apr 16 16:41:56.851681 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:41:56.851643 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d425764_a1b0_434e_a0a7_c85340787042.slice/crio-a9388302fe9030b65c669b6a8400c862722e2290be78ca460063a580506cc014 WatchSource:0}: Error finding container a9388302fe9030b65c669b6a8400c862722e2290be78ca460063a580506cc014: Status 404 returned error can't find the container with id a9388302fe9030b65c669b6a8400c862722e2290be78ca460063a580506cc014
Apr 16 16:41:57.191524 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:41:57.191487 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-9c599-predictor-546d788cd9-7c769" event={"ID":"0d425764-a1b0-434e-a0a7-c85340787042","Type":"ContainerStarted","Data":"a9388302fe9030b65c669b6a8400c862722e2290be78ca460063a580506cc014"}
Apr 16 16:42:09.241201 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:42:09.241163 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-9c599-predictor-546d788cd9-7c769" event={"ID":"0d425764-a1b0-434e-a0a7-c85340787042","Type":"ContainerStarted","Data":"8484ed50498847326bd1135e1ab0f906735daaafc8ba749dc681dbaebd415620"}
Apr 16 16:42:09.241616 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:42:09.241360 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-9c599-predictor-546d788cd9-7c769"
Apr 16 16:42:09.242550 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:42:09.242526 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-9c599-predictor-546d788cd9-7c769" podUID="0d425764-a1b0-434e-a0a7-c85340787042" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.27:8080: connect: connection refused"
Apr 16 16:42:09.262305 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:42:09.262260 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-9c599-predictor-546d788cd9-7c769" podStartSLOduration=1.696419296 podStartE2EDuration="13.262231986s" podCreationTimestamp="2026-04-16 16:41:56 +0000 UTC" firstStartedPulling="2026-04-16 16:41:56.853325288 +0000 UTC m=+1073.632172157" lastFinishedPulling="2026-04-16 16:42:08.419137965 +0000 UTC m=+1085.197984847" observedRunningTime="2026-04-16 16:42:09.260332424 +0000 UTC m=+1086.039179314" watchObservedRunningTime="2026-04-16 16:42:09.262231986 +0000 UTC m=+1086.041078876"
Apr 16 16:42:10.244629 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:42:10.244597 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-9c599-predictor-546d788cd9-7c769" podUID="0d425764-a1b0-434e-a0a7-c85340787042" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.27:8080: connect: connection refused"
Apr 16 16:42:20.245405 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:42:20.245356 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-9c599-predictor-546d788cd9-7c769" podUID="0d425764-a1b0-434e-a0a7-c85340787042" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.27:8080: connect: connection refused"
Apr 16 16:42:30.245113 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:42:30.245067 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-9c599-predictor-546d788cd9-7c769" podUID="0d425764-a1b0-434e-a0a7-c85340787042" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.27:8080: connect: connection refused"
Apr 16 16:42:40.245275 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:42:40.245205 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-9c599-predictor-546d788cd9-7c769" podUID="0d425764-a1b0-434e-a0a7-c85340787042" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.27:8080: connect: connection refused"
Apr 16 16:42:50.244768 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:42:50.244718 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-9c599-predictor-546d788cd9-7c769" podUID="0d425764-a1b0-434e-a0a7-c85340787042" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.27:8080: connect: connection refused"
Apr 16 16:43:00.246291 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:00.246234 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-9c599-predictor-546d788cd9-7c769"
Apr 16 16:43:16.485020 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:16.484982 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/switch-graph-9c599-57c9898f45-ckpsz"]
Apr 16 16:43:16.488539 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:16.488517 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-9c599-57c9898f45-ckpsz"
Apr 16 16:43:16.492787 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:16.492764 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-9c599-kube-rbac-proxy-sar-config\""
Apr 16 16:43:16.492901 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:16.492771 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"openshift-service-ca.crt\""
Apr 16 16:43:16.493486 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:16.493466 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-9c599-serving-cert\""
Apr 16 16:43:16.503063 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:16.503036 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-9c599-57c9898f45-ckpsz"]
Apr 16 16:43:16.619424 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:16.619390 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/269dbbe4-1334-42a2-a926-2b64a2566a66-openshift-service-ca-bundle\") pod \"switch-graph-9c599-57c9898f45-ckpsz\" (UID: \"269dbbe4-1334-42a2-a926-2b64a2566a66\") " pod="kserve-ci-e2e-test/switch-graph-9c599-57c9898f45-ckpsz"
Apr 16 16:43:16.619602 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:16.619522 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/269dbbe4-1334-42a2-a926-2b64a2566a66-proxy-tls\") pod \"switch-graph-9c599-57c9898f45-ckpsz\" (UID: \"269dbbe4-1334-42a2-a926-2b64a2566a66\") " pod="kserve-ci-e2e-test/switch-graph-9c599-57c9898f45-ckpsz"
Apr 16 16:43:16.719946 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:16.719894 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/269dbbe4-1334-42a2-a926-2b64a2566a66-proxy-tls\") pod \"switch-graph-9c599-57c9898f45-ckpsz\" (UID: \"269dbbe4-1334-42a2-a926-2b64a2566a66\") " pod="kserve-ci-e2e-test/switch-graph-9c599-57c9898f45-ckpsz"
Apr 16 16:43:16.720129 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:16.719965 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/269dbbe4-1334-42a2-a926-2b64a2566a66-openshift-service-ca-bundle\") pod \"switch-graph-9c599-57c9898f45-ckpsz\" (UID: \"269dbbe4-1334-42a2-a926-2b64a2566a66\") " pod="kserve-ci-e2e-test/switch-graph-9c599-57c9898f45-ckpsz"
Apr 16 16:43:16.720129 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:43:16.720061 2572 secret.go:189] Couldn't get secret kserve-ci-e2e-test/switch-graph-9c599-serving-cert: secret "switch-graph-9c599-serving-cert" not found
Apr 16 16:43:16.720205 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:43:16.720133 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/269dbbe4-1334-42a2-a926-2b64a2566a66-proxy-tls podName:269dbbe4-1334-42a2-a926-2b64a2566a66 nodeName:}" failed. No retries permitted until 2026-04-16 16:43:17.220116449 +0000 UTC m=+1153.998963317 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/269dbbe4-1334-42a2-a926-2b64a2566a66-proxy-tls") pod "switch-graph-9c599-57c9898f45-ckpsz" (UID: "269dbbe4-1334-42a2-a926-2b64a2566a66") : secret "switch-graph-9c599-serving-cert" not found
Apr 16 16:43:16.720640 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:16.720620 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/269dbbe4-1334-42a2-a926-2b64a2566a66-openshift-service-ca-bundle\") pod \"switch-graph-9c599-57c9898f45-ckpsz\" (UID: \"269dbbe4-1334-42a2-a926-2b64a2566a66\") " pod="kserve-ci-e2e-test/switch-graph-9c599-57c9898f45-ckpsz"
Apr 16 16:43:17.224230 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:17.224191 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/269dbbe4-1334-42a2-a926-2b64a2566a66-proxy-tls\") pod \"switch-graph-9c599-57c9898f45-ckpsz\" (UID: \"269dbbe4-1334-42a2-a926-2b64a2566a66\") " pod="kserve-ci-e2e-test/switch-graph-9c599-57c9898f45-ckpsz"
Apr 16 16:43:17.226625 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:17.226593 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/269dbbe4-1334-42a2-a926-2b64a2566a66-proxy-tls\") pod \"switch-graph-9c599-57c9898f45-ckpsz\" (UID: \"269dbbe4-1334-42a2-a926-2b64a2566a66\") " pod="kserve-ci-e2e-test/switch-graph-9c599-57c9898f45-ckpsz"
Apr 16 16:43:17.398965 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:17.398921 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-9c599-57c9898f45-ckpsz"
Apr 16 16:43:17.529750 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:17.529723 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-9c599-57c9898f45-ckpsz"]
Apr 16 16:43:17.532731 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:43:17.532695 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod269dbbe4_1334_42a2_a926_2b64a2566a66.slice/crio-c0dde5a12470457f249e01c81f6bc30e6b3d02cbfd6df20ed0687f700c915163 WatchSource:0}: Error finding container c0dde5a12470457f249e01c81f6bc30e6b3d02cbfd6df20ed0687f700c915163: Status 404 returned error can't find the container with id c0dde5a12470457f249e01c81f6bc30e6b3d02cbfd6df20ed0687f700c915163
Apr 16 16:43:18.467321 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:18.467280 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-9c599-57c9898f45-ckpsz" event={"ID":"269dbbe4-1334-42a2-a926-2b64a2566a66","Type":"ContainerStarted","Data":"c0dde5a12470457f249e01c81f6bc30e6b3d02cbfd6df20ed0687f700c915163"}
Apr 16 16:43:20.476905 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:20.476868 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-9c599-57c9898f45-ckpsz" event={"ID":"269dbbe4-1334-42a2-a926-2b64a2566a66","Type":"ContainerStarted","Data":"6f70ebe11915ff455232223da579aa7c21161b242545822319b4379bc539e338"}
Apr 16 16:43:20.477335 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:20.476927 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/switch-graph-9c599-57c9898f45-ckpsz"
Apr 16 16:43:20.497940 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:20.497887 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/switch-graph-9c599-57c9898f45-ckpsz" podStartSLOduration=2.2634098959999998 podStartE2EDuration="4.497871739s" podCreationTimestamp="2026-04-16 16:43:16 +0000 UTC" firstStartedPulling="2026-04-16 16:43:17.534815986 +0000 UTC m=+1154.313662855" lastFinishedPulling="2026-04-16 16:43:19.769277825 +0000 UTC m=+1156.548124698" observedRunningTime="2026-04-16 16:43:20.496341188 +0000 UTC m=+1157.275188080" watchObservedRunningTime="2026-04-16 16:43:20.497871739 +0000 UTC m=+1157.276718630"
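The proxy-tls failure above resolves on its own: the serving-cert secret is created asynchronously (on this platform, presumably by the service CA once the annotated Service exists), and the kubelet retries the mount after the logged `durationBeforeRetry 500ms`, succeeding at 16:43:17.226. A sketch of that retry shape under the assumption of a doubling backoff; the exact policy inside nestedpendingoperations.go is not shown in these logs:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// Retry a mount whose dependency (here, a serving-cert secret) may not exist
// yet, starting from the 500ms delay seen in the log and doubling (assumed).
func mountWithRetry(mount func() error, initial time.Duration, attempts int) error {
	delay := initial
	for i := 0; i < attempts; i++ {
		if err := mount(); err == nil {
			return nil
		}
		time.Sleep(delay)
		delay *= 2
	}
	return errors.New("mount still failing after retries")
}

func main() {
	calls := 0
	err := mountWithRetry(func() error {
		calls++
		if calls < 2 {
			return errors.New(`secret "switch-graph-9c599-serving-cert" not found`)
		}
		return nil // second attempt succeeds, as in the log
	}, 500*time.Millisecond, 5)
	fmt.Println(err, "after", calls, "attempts")
}
```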
Apr 16 16:43:26.486485 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:26.486402 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/switch-graph-9c599-57c9898f45-ckpsz"
Apr 16 16:43:31.655370 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:31.655340 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-9c599-57c9898f45-ckpsz"]
Apr 16 16:43:31.655784 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:31.655579 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/switch-graph-9c599-57c9898f45-ckpsz" podUID="269dbbe4-1334-42a2-a926-2b64a2566a66" containerName="switch-graph-9c599" containerID="cri-o://6f70ebe11915ff455232223da579aa7c21161b242545822319b4379bc539e338" gracePeriod=30
Apr 16 16:43:31.802544 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:31.802508 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-9c599-predictor-546d788cd9-7c769"]
Apr 16 16:43:31.802824 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:31.802797 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-9c599-predictor-546d788cd9-7c769" podUID="0d425764-a1b0-434e-a0a7-c85340787042" containerName="kserve-container" containerID="cri-o://8484ed50498847326bd1135e1ab0f906735daaafc8ba749dc681dbaebd415620" gracePeriod=30
Apr 16 16:43:31.851486 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:31.851445 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-df178-predictor-758f86b6c9-r4h2h"]
Apr 16 16:43:31.854626 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:31.854610 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-df178-predictor-758f86b6c9-r4h2h"
Apr 16 16:43:31.864396 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:31.864366 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-df178-predictor-758f86b6c9-r4h2h"]
Apr 16 16:43:31.869002 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:31.868980 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-df178-predictor-758f86b6c9-r4h2h"
Apr 16 16:43:32.006763 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:32.006735 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-df178-predictor-758f86b6c9-r4h2h"]
Apr 16 16:43:32.008943 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:43:32.008915 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf100a932_f599_4d75_b479_7e63d1d0dcbc.slice/crio-ba5e1ae64729b9f09e42408b282737c532fbef6e5994e879cb5436b9535d8413 WatchSource:0}: Error finding container ba5e1ae64729b9f09e42408b282737c532fbef6e5994e879cb5436b9535d8413: Status 404 returned error can't find the container with id ba5e1ae64729b9f09e42408b282737c532fbef6e5994e879cb5436b9535d8413
Apr 16 16:43:32.515289 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:32.515228 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-df178-predictor-758f86b6c9-r4h2h" event={"ID":"f100a932-f599-4d75-b479-7e63d1d0dcbc","Type":"ContainerStarted","Data":"214d884dea9c96536794aac46e44962b0d8f8c5dd3cda66b570cbe911fb26c20"}
Apr 16 16:43:32.515485 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:32.515293 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-df178-predictor-758f86b6c9-r4h2h" event={"ID":"f100a932-f599-4d75-b479-7e63d1d0dcbc","Type":"ContainerStarted","Data":"ba5e1ae64729b9f09e42408b282737c532fbef6e5994e879cb5436b9535d8413"}
Apr 16 16:43:32.515594 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:32.515572 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-df178-predictor-758f86b6c9-r4h2h"
Apr 16 16:43:32.516744 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:32.516719 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-df178-predictor-758f86b6c9-r4h2h" podUID="f100a932-f599-4d75-b479-7e63d1d0dcbc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.29:8080: connect: connection refused"
Apr 16 16:43:33.518231 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:33.518191 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-df178-predictor-758f86b6c9-r4h2h" podUID="f100a932-f599-4d75-b479-7e63d1d0dcbc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.29:8080: connect: connection refused"
Apr 16 16:43:35.054545 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:35.054519 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-9c599-predictor-546d788cd9-7c769"
Apr 16 16:43:35.080899 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:35.080791 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-df178-predictor-758f86b6c9-r4h2h" podStartSLOduration=4.080770679 podStartE2EDuration="4.080770679s" podCreationTimestamp="2026-04-16 16:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 16:43:32.557624716 +0000 UTC m=+1169.336471606" watchObservedRunningTime="2026-04-16 16:43:35.080770679 +0000 UTC m=+1171.859617573"
Apr 16 16:43:35.525342 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:35.525306 2572 generic.go:358] "Generic (PLEG): container finished" podID="0d425764-a1b0-434e-a0a7-c85340787042" containerID="8484ed50498847326bd1135e1ab0f906735daaafc8ba749dc681dbaebd415620" exitCode=0
Apr 16 16:43:35.525526 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:35.525371 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-9c599-predictor-546d788cd9-7c769"
Apr 16 16:43:35.525526 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:35.525395 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-9c599-predictor-546d788cd9-7c769" event={"ID":"0d425764-a1b0-434e-a0a7-c85340787042","Type":"ContainerDied","Data":"8484ed50498847326bd1135e1ab0f906735daaafc8ba749dc681dbaebd415620"}
Apr 16 16:43:35.525526 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:35.525434 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-9c599-predictor-546d788cd9-7c769" event={"ID":"0d425764-a1b0-434e-a0a7-c85340787042","Type":"ContainerDied","Data":"a9388302fe9030b65c669b6a8400c862722e2290be78ca460063a580506cc014"}
Apr 16 16:43:35.525526 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:35.525450 2572 scope.go:117] "RemoveContainer" containerID="8484ed50498847326bd1135e1ab0f906735daaafc8ba749dc681dbaebd415620"
Apr 16 16:43:35.534874 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:35.534854 2572 scope.go:117] "RemoveContainer" containerID="8484ed50498847326bd1135e1ab0f906735daaafc8ba749dc681dbaebd415620"
Apr 16 16:43:35.535138 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:43:35.535119 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8484ed50498847326bd1135e1ab0f906735daaafc8ba749dc681dbaebd415620\": container with ID starting with 8484ed50498847326bd1135e1ab0f906735daaafc8ba749dc681dbaebd415620 not found: ID does not exist" containerID="8484ed50498847326bd1135e1ab0f906735daaafc8ba749dc681dbaebd415620"
Apr 16 16:43:35.535206 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:35.535152 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8484ed50498847326bd1135e1ab0f906735daaafc8ba749dc681dbaebd415620"} err="failed to get container status \"8484ed50498847326bd1135e1ab0f906735daaafc8ba749dc681dbaebd415620\": rpc error: code = NotFound desc = could not find container \"8484ed50498847326bd1135e1ab0f906735daaafc8ba749dc681dbaebd415620\": container with ID starting with 8484ed50498847326bd1135e1ab0f906735daaafc8ba749dc681dbaebd415620 not found: ID does not exist"
Apr 16 16:43:35.551931 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:35.551902 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-9c599-predictor-546d788cd9-7c769"]
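The `RemoveContainer` / `DeleteContainer returned error ... NotFound` pair above (and its earlier twin at 16:40:47.97) looks like a benign race: the container is already gone by the time the kubelet re-queries its status. The usual pattern, sketched below with a stand-in errNotFound for the gRPC NotFound code a CRI runtime returns, is to treat NotFound as success so cleanup stays idempotent:

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the runtime's NotFound status; it is an
// assumption for illustration, not the CRI client's actual error type.
var errNotFound = errors.New("NotFound")

// removeContainer treats "already deleted" as deleted, so repeated cleanup
// attempts (as in the log above) are harmless.
func removeContainer(id string, remove func(string) error) error {
	if err := remove(id); err != nil && !errors.Is(err, errNotFound) {
		return err
	}
	return nil
}

func main() {
	err := removeContainer("8484ed50", func(string) error { return errNotFound })
	fmt.Println("cleanup error:", err) // <nil>: NotFound counted as success
}
```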
"SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-9c599-predictor-546d788cd9-7c769"] Apr 16 16:43:35.555926 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:35.555899 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-9c599-predictor-546d788cd9-7c769"] Apr 16 16:43:35.909417 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:35.909338 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d425764-a1b0-434e-a0a7-c85340787042" path="/var/lib/kubelet/pods/0d425764-a1b0-434e-a0a7-c85340787042/volumes" Apr 16 16:43:36.483796 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:36.483759 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-9c599-57c9898f45-ckpsz" podUID="269dbbe4-1334-42a2-a926-2b64a2566a66" containerName="switch-graph-9c599" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 16:43:41.484709 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:41.484663 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-9c599-57c9898f45-ckpsz" podUID="269dbbe4-1334-42a2-a926-2b64a2566a66" containerName="switch-graph-9c599" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 16:43:43.518971 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:43.518929 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-df178-predictor-758f86b6c9-r4h2h" podUID="f100a932-f599-4d75-b479-7e63d1d0dcbc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.29:8080: connect: connection refused" Apr 16 16:43:46.484042 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:46.484000 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-9c599-57c9898f45-ckpsz" podUID="269dbbe4-1334-42a2-a926-2b64a2566a66" containerName="switch-graph-9c599" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 16:43:46.484457 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:46.484116 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/switch-graph-9c599-57c9898f45-ckpsz" Apr 16 16:43:51.484086 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:51.484046 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-9c599-57c9898f45-ckpsz" podUID="269dbbe4-1334-42a2-a926-2b64a2566a66" containerName="switch-graph-9c599" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 16:43:53.518444 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:53.518402 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-df178-predictor-758f86b6c9-r4h2h" podUID="f100a932-f599-4d75-b479-7e63d1d0dcbc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.29:8080: connect: connection refused" Apr 16 16:43:56.408054 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:56.408015 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/model-chainer-cb97498c4-rrszv"] Apr 16 16:43:56.408481 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:56.408412 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0d425764-a1b0-434e-a0a7-c85340787042" containerName="kserve-container" Apr 16 16:43:56.408481 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:56.408427 2572 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0d425764-a1b0-434e-a0a7-c85340787042" containerName="kserve-container" Apr 16 16:43:56.408576 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:56.408485 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="0d425764-a1b0-434e-a0a7-c85340787042" containerName="kserve-container" Apr 16 16:43:56.411877 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:56.411860 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-cb97498c4-rrszv" Apr 16 16:43:56.414501 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:56.414475 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"model-chainer-serving-cert\"" Apr 16 16:43:56.414624 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:56.414543 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"model-chainer-kube-rbac-proxy-sar-config\"" Apr 16 16:43:56.424896 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:56.424872 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/model-chainer-cb97498c4-rrszv"] Apr 16 16:43:56.457711 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:56.457667 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e269716-9684-4956-b65c-7817600e2419-openshift-service-ca-bundle\") pod \"model-chainer-cb97498c4-rrszv\" (UID: \"3e269716-9684-4956-b65c-7817600e2419\") " pod="kserve-ci-e2e-test/model-chainer-cb97498c4-rrszv" Apr 16 16:43:56.457887 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:56.457786 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3e269716-9684-4956-b65c-7817600e2419-proxy-tls\") pod \"model-chainer-cb97498c4-rrszv\" (UID: \"3e269716-9684-4956-b65c-7817600e2419\") " pod="kserve-ci-e2e-test/model-chainer-cb97498c4-rrszv" Apr 16 16:43:56.484259 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:56.484208 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-9c599-57c9898f45-ckpsz" podUID="269dbbe4-1334-42a2-a926-2b64a2566a66" containerName="switch-graph-9c599" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 16:43:56.559203 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:56.559161 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3e269716-9684-4956-b65c-7817600e2419-proxy-tls\") pod \"model-chainer-cb97498c4-rrszv\" (UID: \"3e269716-9684-4956-b65c-7817600e2419\") " pod="kserve-ci-e2e-test/model-chainer-cb97498c4-rrszv" Apr 16 16:43:56.559392 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:56.559222 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e269716-9684-4956-b65c-7817600e2419-openshift-service-ca-bundle\") pod \"model-chainer-cb97498c4-rrszv\" (UID: \"3e269716-9684-4956-b65c-7817600e2419\") " pod="kserve-ci-e2e-test/model-chainer-cb97498c4-rrszv" Apr 16 16:43:56.559392 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:43:56.559343 2572 secret.go:189] Couldn't get secret kserve-ci-e2e-test/model-chainer-serving-cert: secret "model-chainer-serving-cert" not found Apr 16 16:43:56.559464 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:43:56.559421 2572 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e269716-9684-4956-b65c-7817600e2419-proxy-tls podName:3e269716-9684-4956-b65c-7817600e2419 nodeName:}" failed. No retries permitted until 2026-04-16 16:43:57.059405198 +0000 UTC m=+1193.838252067 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/3e269716-9684-4956-b65c-7817600e2419-proxy-tls") pod "model-chainer-cb97498c4-rrszv" (UID: "3e269716-9684-4956-b65c-7817600e2419") : secret "model-chainer-serving-cert" not found Apr 16 16:43:56.559833 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:56.559816 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e269716-9684-4956-b65c-7817600e2419-openshift-service-ca-bundle\") pod \"model-chainer-cb97498c4-rrszv\" (UID: \"3e269716-9684-4956-b65c-7817600e2419\") " pod="kserve-ci-e2e-test/model-chainer-cb97498c4-rrszv" Apr 16 16:43:57.063959 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:57.063921 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3e269716-9684-4956-b65c-7817600e2419-proxy-tls\") pod \"model-chainer-cb97498c4-rrszv\" (UID: \"3e269716-9684-4956-b65c-7817600e2419\") " pod="kserve-ci-e2e-test/model-chainer-cb97498c4-rrszv" Apr 16 16:43:57.066381 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:57.066348 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3e269716-9684-4956-b65c-7817600e2419-proxy-tls\") pod \"model-chainer-cb97498c4-rrszv\" (UID: \"3e269716-9684-4956-b65c-7817600e2419\") " pod="kserve-ci-e2e-test/model-chainer-cb97498c4-rrszv" Apr 16 16:43:57.322502 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:57.322404 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-cb97498c4-rrszv" Apr 16 16:43:57.447723 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:57.447698 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/model-chainer-cb97498c4-rrszv"] Apr 16 16:43:57.450343 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:43:57.450317 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e269716_9684_4956_b65c_7817600e2419.slice/crio-d8dd606c07ed06ad9c53bb6a8e52bc38025e21f93e1af19a5fde726deebbbf4d WatchSource:0}: Error finding container d8dd606c07ed06ad9c53bb6a8e52bc38025e21f93e1af19a5fde726deebbbf4d: Status 404 returned error can't find the container with id d8dd606c07ed06ad9c53bb6a8e52bc38025e21f93e1af19a5fde726deebbbf4d Apr 16 16:43:57.452135 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:57.452119 2572 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 16:43:57.599060 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:57.598975 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-cb97498c4-rrszv" event={"ID":"3e269716-9684-4956-b65c-7817600e2419","Type":"ContainerStarted","Data":"d2d41783451998028921fe8887fd8fabd2e5c5d7ffed7ff255e0379037f91192"} Apr 16 16:43:57.599060 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:57.599009 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-cb97498c4-rrszv" event={"ID":"3e269716-9684-4956-b65c-7817600e2419","Type":"ContainerStarted","Data":"d8dd606c07ed06ad9c53bb6a8e52bc38025e21f93e1af19a5fde726deebbbf4d"} Apr 16 16:43:57.599060 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:57.599051 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/model-chainer-cb97498c4-rrszv" Apr 16 16:43:57.616433 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:43:57.616368 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/model-chainer-cb97498c4-rrszv" podStartSLOduration=1.616353487 podStartE2EDuration="1.616353487s" podCreationTimestamp="2026-04-16 16:43:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 16:43:57.615065616 +0000 UTC m=+1194.393912507" watchObservedRunningTime="2026-04-16 16:43:57.616353487 +0000 UTC m=+1194.395200377" Apr 16 16:44:01.484259 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:01.484216 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-9c599-57c9898f45-ckpsz" podUID="269dbbe4-1334-42a2-a926-2b64a2566a66" containerName="switch-graph-9c599" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 16:44:01.799014 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:01.798992 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-9c599-57c9898f45-ckpsz" Apr 16 16:44:01.907759 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:01.907726 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/269dbbe4-1334-42a2-a926-2b64a2566a66-openshift-service-ca-bundle\") pod \"269dbbe4-1334-42a2-a926-2b64a2566a66\" (UID: \"269dbbe4-1334-42a2-a926-2b64a2566a66\") " Apr 16 16:44:01.907935 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:01.907791 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/269dbbe4-1334-42a2-a926-2b64a2566a66-proxy-tls\") pod \"269dbbe4-1334-42a2-a926-2b64a2566a66\" (UID: \"269dbbe4-1334-42a2-a926-2b64a2566a66\") " Apr 16 16:44:01.908140 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:01.908112 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/269dbbe4-1334-42a2-a926-2b64a2566a66-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "269dbbe4-1334-42a2-a926-2b64a2566a66" (UID: "269dbbe4-1334-42a2-a926-2b64a2566a66"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 16:44:01.910113 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:01.910088 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/269dbbe4-1334-42a2-a926-2b64a2566a66-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "269dbbe4-1334-42a2-a926-2b64a2566a66" (UID: "269dbbe4-1334-42a2-a926-2b64a2566a66"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 16:44:02.008767 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:02.008734 2572 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/269dbbe4-1334-42a2-a926-2b64a2566a66-openshift-service-ca-bundle\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 16:44:02.008767 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:02.008762 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/269dbbe4-1334-42a2-a926-2b64a2566a66-proxy-tls\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 16:44:02.617272 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:02.617217 2572 generic.go:358] "Generic (PLEG): container finished" podID="269dbbe4-1334-42a2-a926-2b64a2566a66" containerID="6f70ebe11915ff455232223da579aa7c21161b242545822319b4379bc539e338" exitCode=0 Apr 16 16:44:02.617708 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:02.617290 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-9c599-57c9898f45-ckpsz" Apr 16 16:44:02.617708 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:02.617282 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-9c599-57c9898f45-ckpsz" event={"ID":"269dbbe4-1334-42a2-a926-2b64a2566a66","Type":"ContainerDied","Data":"6f70ebe11915ff455232223da579aa7c21161b242545822319b4379bc539e338"} Apr 16 16:44:02.617708 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:02.617347 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-9c599-57c9898f45-ckpsz" event={"ID":"269dbbe4-1334-42a2-a926-2b64a2566a66","Type":"ContainerDied","Data":"c0dde5a12470457f249e01c81f6bc30e6b3d02cbfd6df20ed0687f700c915163"} Apr 16 16:44:02.617708 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:02.617362 2572 scope.go:117] "RemoveContainer" containerID="6f70ebe11915ff455232223da579aa7c21161b242545822319b4379bc539e338" Apr 16 16:44:02.625403 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:02.625365 2572 scope.go:117] "RemoveContainer" containerID="6f70ebe11915ff455232223da579aa7c21161b242545822319b4379bc539e338" Apr 16 16:44:02.625621 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:44:02.625603 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f70ebe11915ff455232223da579aa7c21161b242545822319b4379bc539e338\": container with ID starting with 6f70ebe11915ff455232223da579aa7c21161b242545822319b4379bc539e338 not found: ID does not exist" containerID="6f70ebe11915ff455232223da579aa7c21161b242545822319b4379bc539e338" Apr 16 16:44:02.625682 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:02.625634 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f70ebe11915ff455232223da579aa7c21161b242545822319b4379bc539e338"} err="failed to get container status \"6f70ebe11915ff455232223da579aa7c21161b242545822319b4379bc539e338\": rpc error: code = NotFound desc = could not find container \"6f70ebe11915ff455232223da579aa7c21161b242545822319b4379bc539e338\": container with ID starting with 6f70ebe11915ff455232223da579aa7c21161b242545822319b4379bc539e338 not found: ID does not exist" Apr 16 16:44:02.634586 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:02.634563 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-9c599-57c9898f45-ckpsz"] Apr 16 16:44:02.637983 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:02.637962 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/switch-graph-9c599-57c9898f45-ckpsz"] Apr 16 16:44:03.518577 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:03.518536 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-df178-predictor-758f86b6c9-r4h2h" podUID="f100a932-f599-4d75-b479-7e63d1d0dcbc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.29:8080: connect: connection refused" Apr 16 16:44:03.609347 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:03.609305 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/model-chainer-cb97498c4-rrszv" Apr 16 16:44:03.912585 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:03.912559 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="269dbbe4-1334-42a2-a926-2b64a2566a66" path="/var/lib/kubelet/pods/269dbbe4-1334-42a2-a926-2b64a2566a66/volumes" Apr 16 16:44:06.465818 
ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:06.465784 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/model-chainer-cb97498c4-rrszv"] Apr 16 16:44:06.466220 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:06.466008 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/model-chainer-cb97498c4-rrszv" podUID="3e269716-9684-4956-b65c-7817600e2419" containerName="model-chainer" containerID="cri-o://d2d41783451998028921fe8887fd8fabd2e5c5d7ffed7ff255e0379037f91192" gracePeriod=30 Apr 16 16:44:06.619754 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:06.619723 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-39f4a-predictor-6b65487685-plh84"] Apr 16 16:44:06.620228 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:06.620207 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="269dbbe4-1334-42a2-a926-2b64a2566a66" containerName="switch-graph-9c599" Apr 16 16:44:06.620228 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:06.620229 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="269dbbe4-1334-42a2-a926-2b64a2566a66" containerName="switch-graph-9c599" Apr 16 16:44:06.620419 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:06.620361 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="269dbbe4-1334-42a2-a926-2b64a2566a66" containerName="switch-graph-9c599" Apr 16 16:44:06.624781 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:06.624758 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-39f4a-predictor-6b65487685-plh84" Apr 16 16:44:06.632299 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:06.632269 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-39f4a-predictor-6b65487685-plh84"] Apr 16 16:44:06.638140 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:06.638119 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-39f4a-predictor-6b65487685-plh84" Apr 16 16:44:06.778948 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:06.778923 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-39f4a-predictor-6b65487685-plh84"] Apr 16 16:44:06.781139 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:44:06.781103 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaca8fe33_d187_45b7_b520_fcde24ee2234.slice/crio-985d3c3ff6d53d27ed6b544ca506a3c33c1a0a122449391ac53fb267c52d4844 WatchSource:0}: Error finding container 985d3c3ff6d53d27ed6b544ca506a3c33c1a0a122449391ac53fb267c52d4844: Status 404 returned error can't find the container with id 985d3c3ff6d53d27ed6b544ca506a3c33c1a0a122449391ac53fb267c52d4844 Apr 16 16:44:07.639442 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:07.639408 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-39f4a-predictor-6b65487685-plh84" event={"ID":"aca8fe33-d187-45b7-b520-fcde24ee2234","Type":"ContainerStarted","Data":"1dfa221a078ca5c65ba0b91cb0c1450a6fda7c2260aff0c9ebb57beb6876d134"} Apr 16 16:44:07.639867 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:07.639450 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-39f4a-predictor-6b65487685-plh84" event={"ID":"aca8fe33-d187-45b7-b520-fcde24ee2234","Type":"ContainerStarted","Data":"985d3c3ff6d53d27ed6b544ca506a3c33c1a0a122449391ac53fb267c52d4844"} Apr 16 16:44:07.639867 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:07.639470 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-39f4a-predictor-6b65487685-plh84" Apr 16 16:44:07.640900 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:07.640871 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-39f4a-predictor-6b65487685-plh84" podUID="aca8fe33-d187-45b7-b520-fcde24ee2234" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.31:8080: connect: connection refused" Apr 16 16:44:07.657640 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:07.657592 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-39f4a-predictor-6b65487685-plh84" podStartSLOduration=1.6575744700000001 podStartE2EDuration="1.65757447s" podCreationTimestamp="2026-04-16 16:44:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 16:44:07.655721129 +0000 UTC m=+1204.434568023" watchObservedRunningTime="2026-04-16 16:44:07.65757447 +0000 UTC m=+1204.436421360" Apr 16 16:44:08.606965 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:08.606926 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-cb97498c4-rrszv" podUID="3e269716-9684-4956-b65c-7817600e2419" containerName="model-chainer" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 16:44:08.643306 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:08.643266 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-39f4a-predictor-6b65487685-plh84" podUID="aca8fe33-d187-45b7-b520-fcde24ee2234" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.31:8080: connect: connection 
refused" Apr 16 16:44:13.518641 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:13.518592 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-df178-predictor-758f86b6c9-r4h2h" podUID="f100a932-f599-4d75-b479-7e63d1d0dcbc" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.29:8080: connect: connection refused" Apr 16 16:44:13.606601 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:13.606564 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-cb97498c4-rrszv" podUID="3e269716-9684-4956-b65c-7817600e2419" containerName="model-chainer" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 16:44:18.606354 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:18.606316 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-cb97498c4-rrszv" podUID="3e269716-9684-4956-b65c-7817600e2419" containerName="model-chainer" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 16:44:18.606746 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:18.606422 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/model-chainer-cb97498c4-rrszv" Apr 16 16:44:18.643792 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:18.643750 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-39f4a-predictor-6b65487685-plh84" podUID="aca8fe33-d187-45b7-b520-fcde24ee2234" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.31:8080: connect: connection refused" Apr 16 16:44:23.519502 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:23.519458 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-df178-predictor-758f86b6c9-r4h2h" Apr 16 16:44:23.607187 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:23.607139 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-cb97498c4-rrszv" podUID="3e269716-9684-4956-b65c-7817600e2419" containerName="model-chainer" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 16:44:28.606828 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:28.606786 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-cb97498c4-rrszv" podUID="3e269716-9684-4956-b65c-7817600e2419" containerName="model-chainer" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 16:44:28.643630 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:28.643586 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-39f4a-predictor-6b65487685-plh84" podUID="aca8fe33-d187-45b7-b520-fcde24ee2234" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.31:8080: connect: connection refused" Apr 16 16:44:33.606309 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:33.606262 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-cb97498c4-rrszv" podUID="3e269716-9684-4956-b65c-7817600e2419" containerName="model-chainer" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 16:44:36.609349 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:36.609326 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-cb97498c4-rrszv" Apr 16 16:44:36.706796 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:36.706764 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3e269716-9684-4956-b65c-7817600e2419-proxy-tls\") pod \"3e269716-9684-4956-b65c-7817600e2419\" (UID: \"3e269716-9684-4956-b65c-7817600e2419\") " Apr 16 16:44:36.706957 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:36.706856 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e269716-9684-4956-b65c-7817600e2419-openshift-service-ca-bundle\") pod \"3e269716-9684-4956-b65c-7817600e2419\" (UID: \"3e269716-9684-4956-b65c-7817600e2419\") " Apr 16 16:44:36.707221 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:36.707185 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e269716-9684-4956-b65c-7817600e2419-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "3e269716-9684-4956-b65c-7817600e2419" (UID: "3e269716-9684-4956-b65c-7817600e2419"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 16:44:36.708797 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:36.708775 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e269716-9684-4956-b65c-7817600e2419-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "3e269716-9684-4956-b65c-7817600e2419" (UID: "3e269716-9684-4956-b65c-7817600e2419"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 16:44:36.735996 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:36.735915 2572 generic.go:358] "Generic (PLEG): container finished" podID="3e269716-9684-4956-b65c-7817600e2419" containerID="d2d41783451998028921fe8887fd8fabd2e5c5d7ffed7ff255e0379037f91192" exitCode=0 Apr 16 16:44:36.735996 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:36.735977 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-cb97498c4-rrszv" Apr 16 16:44:36.736165 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:36.736018 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-cb97498c4-rrszv" event={"ID":"3e269716-9684-4956-b65c-7817600e2419","Type":"ContainerDied","Data":"d2d41783451998028921fe8887fd8fabd2e5c5d7ffed7ff255e0379037f91192"} Apr 16 16:44:36.736165 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:36.736068 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-cb97498c4-rrszv" event={"ID":"3e269716-9684-4956-b65c-7817600e2419","Type":"ContainerDied","Data":"d8dd606c07ed06ad9c53bb6a8e52bc38025e21f93e1af19a5fde726deebbbf4d"} Apr 16 16:44:36.736165 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:36.736090 2572 scope.go:117] "RemoveContainer" containerID="d2d41783451998028921fe8887fd8fabd2e5c5d7ffed7ff255e0379037f91192" Apr 16 16:44:36.744317 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:36.744294 2572 scope.go:117] "RemoveContainer" containerID="d2d41783451998028921fe8887fd8fabd2e5c5d7ffed7ff255e0379037f91192" Apr 16 16:44:36.744599 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:44:36.744579 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2d41783451998028921fe8887fd8fabd2e5c5d7ffed7ff255e0379037f91192\": container with ID starting with d2d41783451998028921fe8887fd8fabd2e5c5d7ffed7ff255e0379037f91192 not found: ID does not exist" containerID="d2d41783451998028921fe8887fd8fabd2e5c5d7ffed7ff255e0379037f91192" Apr 16 16:44:36.744661 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:36.744608 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2d41783451998028921fe8887fd8fabd2e5c5d7ffed7ff255e0379037f91192"} err="failed to get container status \"d2d41783451998028921fe8887fd8fabd2e5c5d7ffed7ff255e0379037f91192\": rpc error: code = NotFound desc = could not find container \"d2d41783451998028921fe8887fd8fabd2e5c5d7ffed7ff255e0379037f91192\": container with ID starting with d2d41783451998028921fe8887fd8fabd2e5c5d7ffed7ff255e0379037f91192 not found: ID does not exist" Apr 16 16:44:36.757951 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:36.757920 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/model-chainer-cb97498c4-rrszv"] Apr 16 16:44:36.762512 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:36.762483 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/model-chainer-cb97498c4-rrszv"] Apr 16 16:44:36.807648 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:36.807617 2572 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e269716-9684-4956-b65c-7817600e2419-openshift-service-ca-bundle\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 16:44:36.807648 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:36.807646 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3e269716-9684-4956-b65c-7817600e2419-proxy-tls\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 16:44:37.910376 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:37.910346 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e269716-9684-4956-b65c-7817600e2419" path="/var/lib/kubelet/pods/3e269716-9684-4956-b65c-7817600e2419/volumes" Apr 16 
16:44:38.644021 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:38.643975 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-39f4a-predictor-6b65487685-plh84" podUID="aca8fe33-d187-45b7-b520-fcde24ee2234" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.31:8080: connect: connection refused" Apr 16 16:44:41.941096 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:41.941066 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/switch-graph-df178-67f9fc49fb-mccss"] Apr 16 16:44:41.941492 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:41.941442 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3e269716-9684-4956-b65c-7817600e2419" containerName="model-chainer" Apr 16 16:44:41.941492 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:41.941455 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e269716-9684-4956-b65c-7817600e2419" containerName="model-chainer" Apr 16 16:44:41.941570 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:41.941506 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="3e269716-9684-4956-b65c-7817600e2419" containerName="model-chainer" Apr 16 16:44:41.945759 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:41.945737 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-df178-67f9fc49fb-mccss" Apr 16 16:44:41.948428 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:41.948404 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-df178-kube-rbac-proxy-sar-config\"" Apr 16 16:44:41.948643 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:41.948631 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"openshift-service-ca.crt\"" Apr 16 16:44:41.948893 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:41.948881 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-df178-serving-cert\"" Apr 16 16:44:41.958307 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:41.958284 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-df178-67f9fc49fb-mccss"] Apr 16 16:44:42.052159 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:42.052125 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71258a00-ac83-4668-b057-9eef513ff832-openshift-service-ca-bundle\") pod \"switch-graph-df178-67f9fc49fb-mccss\" (UID: \"71258a00-ac83-4668-b057-9eef513ff832\") " pod="kserve-ci-e2e-test/switch-graph-df178-67f9fc49fb-mccss" Apr 16 16:44:42.052342 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:42.052202 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/71258a00-ac83-4668-b057-9eef513ff832-proxy-tls\") pod \"switch-graph-df178-67f9fc49fb-mccss\" (UID: \"71258a00-ac83-4668-b057-9eef513ff832\") " pod="kserve-ci-e2e-test/switch-graph-df178-67f9fc49fb-mccss" Apr 16 16:44:42.153667 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:42.153621 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/71258a00-ac83-4668-b057-9eef513ff832-proxy-tls\") pod \"switch-graph-df178-67f9fc49fb-mccss\" (UID: 
\"71258a00-ac83-4668-b057-9eef513ff832\") " pod="kserve-ci-e2e-test/switch-graph-df178-67f9fc49fb-mccss" Apr 16 16:44:42.153855 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:42.153698 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71258a00-ac83-4668-b057-9eef513ff832-openshift-service-ca-bundle\") pod \"switch-graph-df178-67f9fc49fb-mccss\" (UID: \"71258a00-ac83-4668-b057-9eef513ff832\") " pod="kserve-ci-e2e-test/switch-graph-df178-67f9fc49fb-mccss" Apr 16 16:44:42.153855 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:44:42.153778 2572 secret.go:189] Couldn't get secret kserve-ci-e2e-test/switch-graph-df178-serving-cert: secret "switch-graph-df178-serving-cert" not found Apr 16 16:44:42.154337 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:44:42.154284 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71258a00-ac83-4668-b057-9eef513ff832-proxy-tls podName:71258a00-ac83-4668-b057-9eef513ff832 nodeName:}" failed. No retries permitted until 2026-04-16 16:44:42.654229919 +0000 UTC m=+1239.433076809 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/71258a00-ac83-4668-b057-9eef513ff832-proxy-tls") pod "switch-graph-df178-67f9fc49fb-mccss" (UID: "71258a00-ac83-4668-b057-9eef513ff832") : secret "switch-graph-df178-serving-cert" not found Apr 16 16:44:42.159380 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:42.154901 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71258a00-ac83-4668-b057-9eef513ff832-openshift-service-ca-bundle\") pod \"switch-graph-df178-67f9fc49fb-mccss\" (UID: \"71258a00-ac83-4668-b057-9eef513ff832\") " pod="kserve-ci-e2e-test/switch-graph-df178-67f9fc49fb-mccss" Apr 16 16:44:42.659879 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:42.659835 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/71258a00-ac83-4668-b057-9eef513ff832-proxy-tls\") pod \"switch-graph-df178-67f9fc49fb-mccss\" (UID: \"71258a00-ac83-4668-b057-9eef513ff832\") " pod="kserve-ci-e2e-test/switch-graph-df178-67f9fc49fb-mccss" Apr 16 16:44:42.662311 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:42.662285 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/71258a00-ac83-4668-b057-9eef513ff832-proxy-tls\") pod \"switch-graph-df178-67f9fc49fb-mccss\" (UID: \"71258a00-ac83-4668-b057-9eef513ff832\") " pod="kserve-ci-e2e-test/switch-graph-df178-67f9fc49fb-mccss" Apr 16 16:44:42.855881 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:42.855837 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-df178-67f9fc49fb-mccss" Apr 16 16:44:42.984145 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:42.984116 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-df178-67f9fc49fb-mccss"] Apr 16 16:44:42.986474 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:44:42.986444 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71258a00_ac83_4668_b057_9eef513ff832.slice/crio-4c61b7c34727aa1b4ed156e2835f48f3e9fcc97d84cebe4bfd5894d15b1176b4 WatchSource:0}: Error finding container 4c61b7c34727aa1b4ed156e2835f48f3e9fcc97d84cebe4bfd5894d15b1176b4: Status 404 returned error can't find the container with id 4c61b7c34727aa1b4ed156e2835f48f3e9fcc97d84cebe4bfd5894d15b1176b4 Apr 16 16:44:43.762056 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:43.762021 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-df178-67f9fc49fb-mccss" event={"ID":"71258a00-ac83-4668-b057-9eef513ff832","Type":"ContainerStarted","Data":"ac70685209741b8c38663b240a25cbc65a4e5dd930c1b4682026fe58d8f1fad5"} Apr 16 16:44:43.762056 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:43.762056 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-df178-67f9fc49fb-mccss" event={"ID":"71258a00-ac83-4668-b057-9eef513ff832","Type":"ContainerStarted","Data":"4c61b7c34727aa1b4ed156e2835f48f3e9fcc97d84cebe4bfd5894d15b1176b4"} Apr 16 16:44:43.762312 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:43.762160 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/switch-graph-df178-67f9fc49fb-mccss" Apr 16 16:44:43.793506 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:43.793450 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/switch-graph-df178-67f9fc49fb-mccss" podStartSLOduration=2.793436701 podStartE2EDuration="2.793436701s" podCreationTimestamp="2026-04-16 16:44:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 16:44:43.791706994 +0000 UTC m=+1240.570553897" watchObservedRunningTime="2026-04-16 16:44:43.793436701 +0000 UTC m=+1240.572283592" Apr 16 16:44:48.643784 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:48.643743 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-39f4a-predictor-6b65487685-plh84" podUID="aca8fe33-d187-45b7-b520-fcde24ee2234" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.31:8080: connect: connection refused" Apr 16 16:44:49.771126 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:49.771093 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/switch-graph-df178-67f9fc49fb-mccss" Apr 16 16:44:58.645104 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:44:58.645018 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-39f4a-predictor-6b65487685-plh84" Apr 16 16:45:16.667939 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:45:16.667897 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/sequence-graph-39f4a-89d49ccf6-5mbjd"] Apr 16 16:45:16.671977 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:45:16.671957 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-39f4a-89d49ccf6-5mbjd" Apr 16 16:45:16.674653 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:45:16.674630 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-39f4a-kube-rbac-proxy-sar-config\"" Apr 16 16:45:16.674766 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:45:16.674668 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-39f4a-serving-cert\"" Apr 16 16:45:16.679093 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:45:16.678936 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-39f4a-89d49ccf6-5mbjd"] Apr 16 16:45:16.863877 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:45:16.863838 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc5b1b60-9892-47e3-b0b6-e0f3f8918f52-openshift-service-ca-bundle\") pod \"sequence-graph-39f4a-89d49ccf6-5mbjd\" (UID: \"cc5b1b60-9892-47e3-b0b6-e0f3f8918f52\") " pod="kserve-ci-e2e-test/sequence-graph-39f4a-89d49ccf6-5mbjd" Apr 16 16:45:16.864052 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:45:16.863895 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cc5b1b60-9892-47e3-b0b6-e0f3f8918f52-proxy-tls\") pod \"sequence-graph-39f4a-89d49ccf6-5mbjd\" (UID: \"cc5b1b60-9892-47e3-b0b6-e0f3f8918f52\") " pod="kserve-ci-e2e-test/sequence-graph-39f4a-89d49ccf6-5mbjd" Apr 16 16:45:16.964690 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:45:16.964657 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc5b1b60-9892-47e3-b0b6-e0f3f8918f52-openshift-service-ca-bundle\") pod \"sequence-graph-39f4a-89d49ccf6-5mbjd\" (UID: \"cc5b1b60-9892-47e3-b0b6-e0f3f8918f52\") " pod="kserve-ci-e2e-test/sequence-graph-39f4a-89d49ccf6-5mbjd" Apr 16 16:45:16.964858 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:45:16.964715 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cc5b1b60-9892-47e3-b0b6-e0f3f8918f52-proxy-tls\") pod \"sequence-graph-39f4a-89d49ccf6-5mbjd\" (UID: \"cc5b1b60-9892-47e3-b0b6-e0f3f8918f52\") " pod="kserve-ci-e2e-test/sequence-graph-39f4a-89d49ccf6-5mbjd" Apr 16 16:45:16.964858 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:45:16.964851 2572 secret.go:189] Couldn't get secret kserve-ci-e2e-test/sequence-graph-39f4a-serving-cert: secret "sequence-graph-39f4a-serving-cert" not found Apr 16 16:45:16.964934 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:45:16.964904 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc5b1b60-9892-47e3-b0b6-e0f3f8918f52-proxy-tls podName:cc5b1b60-9892-47e3-b0b6-e0f3f8918f52 nodeName:}" failed. No retries permitted until 2026-04-16 16:45:17.464889116 +0000 UTC m=+1274.243735985 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/cc5b1b60-9892-47e3-b0b6-e0f3f8918f52-proxy-tls") pod "sequence-graph-39f4a-89d49ccf6-5mbjd" (UID: "cc5b1b60-9892-47e3-b0b6-e0f3f8918f52") : secret "sequence-graph-39f4a-serving-cert" not found Apr 16 16:45:16.965326 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:45:16.965306 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc5b1b60-9892-47e3-b0b6-e0f3f8918f52-openshift-service-ca-bundle\") pod \"sequence-graph-39f4a-89d49ccf6-5mbjd\" (UID: \"cc5b1b60-9892-47e3-b0b6-e0f3f8918f52\") " pod="kserve-ci-e2e-test/sequence-graph-39f4a-89d49ccf6-5mbjd" Apr 16 16:45:17.470619 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:45:17.470580 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cc5b1b60-9892-47e3-b0b6-e0f3f8918f52-proxy-tls\") pod \"sequence-graph-39f4a-89d49ccf6-5mbjd\" (UID: \"cc5b1b60-9892-47e3-b0b6-e0f3f8918f52\") " pod="kserve-ci-e2e-test/sequence-graph-39f4a-89d49ccf6-5mbjd" Apr 16 16:45:17.473014 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:45:17.472987 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cc5b1b60-9892-47e3-b0b6-e0f3f8918f52-proxy-tls\") pod \"sequence-graph-39f4a-89d49ccf6-5mbjd\" (UID: \"cc5b1b60-9892-47e3-b0b6-e0f3f8918f52\") " pod="kserve-ci-e2e-test/sequence-graph-39f4a-89d49ccf6-5mbjd" Apr 16 16:45:17.583788 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:45:17.583750 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-39f4a-89d49ccf6-5mbjd" Apr 16 16:45:17.710314 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:45:17.710263 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-39f4a-89d49ccf6-5mbjd"] Apr 16 16:45:17.713062 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:45:17.713033 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc5b1b60_9892_47e3_b0b6_e0f3f8918f52.slice/crio-7a840c8bc3ddf534ea2627e3a1132e623dd316ad1a714d7b1166d3c799adf495 WatchSource:0}: Error finding container 7a840c8bc3ddf534ea2627e3a1132e623dd316ad1a714d7b1166d3c799adf495: Status 404 returned error can't find the container with id 7a840c8bc3ddf534ea2627e3a1132e623dd316ad1a714d7b1166d3c799adf495 Apr 16 16:45:17.883220 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:45:17.883184 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-39f4a-89d49ccf6-5mbjd" event={"ID":"cc5b1b60-9892-47e3-b0b6-e0f3f8918f52","Type":"ContainerStarted","Data":"c1f9d5ffe4a980deb257ea50569ef912dfd4e0caa6a4e3b0f1da613e82722865"} Apr 16 16:45:17.883403 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:45:17.883227 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-39f4a-89d49ccf6-5mbjd" event={"ID":"cc5b1b60-9892-47e3-b0b6-e0f3f8918f52","Type":"ContainerStarted","Data":"7a840c8bc3ddf534ea2627e3a1132e623dd316ad1a714d7b1166d3c799adf495"} Apr 16 16:45:17.883403 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:45:17.883279 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-39f4a-89d49ccf6-5mbjd" Apr 16 16:45:17.903238 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:45:17.903181 2572 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/sequence-graph-39f4a-89d49ccf6-5mbjd" podStartSLOduration=1.903165934 podStartE2EDuration="1.903165934s" podCreationTimestamp="2026-04-16 16:45:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 16:45:17.900842217 +0000 UTC m=+1274.679689105" watchObservedRunningTime="2026-04-16 16:45:17.903165934 +0000 UTC m=+1274.682012880" Apr 16 16:45:23.891914 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:45:23.891885 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/sequence-graph-39f4a-89d49ccf6-5mbjd" Apr 16 16:52:56.606873 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:52:56.606838 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-df178-67f9fc49fb-mccss"] Apr 16 16:52:56.609348 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:52:56.607144 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/switch-graph-df178-67f9fc49fb-mccss" podUID="71258a00-ac83-4668-b057-9eef513ff832" containerName="switch-graph-df178" containerID="cri-o://ac70685209741b8c38663b240a25cbc65a4e5dd930c1b4682026fe58d8f1fad5" gracePeriod=30 Apr 16 16:52:56.760686 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:52:56.760645 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-df178-predictor-758f86b6c9-r4h2h"] Apr 16 16:52:56.760943 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:52:56.760895 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-df178-predictor-758f86b6c9-r4h2h" podUID="f100a932-f599-4d75-b479-7e63d1d0dcbc" containerName="kserve-container" containerID="cri-o://214d884dea9c96536794aac46e44962b0d8f8c5dd3cda66b570cbe911fb26c20" gracePeriod=30 Apr 16 16:52:56.993216 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:52:56.993183 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-bab33-predictor-556678b97c-f2kql"] Apr 16 16:52:56.996315 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:52:56.996297 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-bab33-predictor-556678b97c-f2kql" Apr 16 16:52:57.005987 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:52:57.005964 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-bab33-predictor-556678b97c-f2kql" Apr 16 16:52:57.022787 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:52:57.022751 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-bab33-predictor-556678b97c-f2kql"] Apr 16 16:52:57.145178 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:52:57.145137 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-bab33-predictor-556678b97c-f2kql"] Apr 16 16:52:57.148439 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:52:57.148399 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bf148ef_251a_42b6_ad49_1165aab7abd9.slice/crio-d17ddfaa7e7c6cc814b26f9452f4917081d243d96644a469051a21204d9e8913 WatchSource:0}: Error finding container d17ddfaa7e7c6cc814b26f9452f4917081d243d96644a469051a21204d9e8913: Status 404 returned error can't find the container with id d17ddfaa7e7c6cc814b26f9452f4917081d243d96644a469051a21204d9e8913 Apr 16 16:52:57.150201 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:52:57.150177 2572 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 16:52:57.416359 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:52:57.416278 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-bab33-predictor-556678b97c-f2kql" event={"ID":"7bf148ef-251a-42b6-ad49-1165aab7abd9","Type":"ContainerStarted","Data":"3fb4d451f3c73ca90294532271f30d1073a39ac8a500d50b7180f1b296bf0609"} Apr 16 16:52:57.416359 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:52:57.416316 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-bab33-predictor-556678b97c-f2kql" event={"ID":"7bf148ef-251a-42b6-ad49-1165aab7abd9","Type":"ContainerStarted","Data":"d17ddfaa7e7c6cc814b26f9452f4917081d243d96644a469051a21204d9e8913"} Apr 16 16:52:57.416566 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:52:57.416429 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-bab33-predictor-556678b97c-f2kql" Apr 16 16:52:57.417828 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:52:57.417805 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-bab33-predictor-556678b97c-f2kql" podUID="7bf148ef-251a-42b6-ad49-1165aab7abd9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.34:8080: connect: connection refused" Apr 16 16:52:58.420028 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:52:58.419990 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-bab33-predictor-556678b97c-f2kql" podUID="7bf148ef-251a-42b6-ad49-1165aab7abd9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.34:8080: connect: connection refused" Apr 16 16:52:59.770463 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:52:59.770397 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-df178-67f9fc49fb-mccss" podUID="71258a00-ac83-4668-b057-9eef513ff832" containerName="switch-graph-df178" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 16:53:00.010008 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:00.009985 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-df178-predictor-758f86b6c9-r4h2h" Apr 16 16:53:00.029033 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:00.028930 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-bab33-predictor-556678b97c-f2kql" podStartSLOduration=4.028911701 podStartE2EDuration="4.028911701s" podCreationTimestamp="2026-04-16 16:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 16:52:57.438621381 +0000 UTC m=+1734.217468272" watchObservedRunningTime="2026-04-16 16:53:00.028911701 +0000 UTC m=+1736.807758593" Apr 16 16:53:00.427173 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:00.427077 2572 generic.go:358] "Generic (PLEG): container finished" podID="f100a932-f599-4d75-b479-7e63d1d0dcbc" containerID="214d884dea9c96536794aac46e44962b0d8f8c5dd3cda66b570cbe911fb26c20" exitCode=0 Apr 16 16:53:00.427173 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:00.427144 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-df178-predictor-758f86b6c9-r4h2h" Apr 16 16:53:00.427426 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:00.427148 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-df178-predictor-758f86b6c9-r4h2h" event={"ID":"f100a932-f599-4d75-b479-7e63d1d0dcbc","Type":"ContainerDied","Data":"214d884dea9c96536794aac46e44962b0d8f8c5dd3cda66b570cbe911fb26c20"} Apr 16 16:53:00.427426 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:00.427278 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-df178-predictor-758f86b6c9-r4h2h" event={"ID":"f100a932-f599-4d75-b479-7e63d1d0dcbc","Type":"ContainerDied","Data":"ba5e1ae64729b9f09e42408b282737c532fbef6e5994e879cb5436b9535d8413"} Apr 16 16:53:00.427426 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:00.427301 2572 scope.go:117] "RemoveContainer" containerID="214d884dea9c96536794aac46e44962b0d8f8c5dd3cda66b570cbe911fb26c20" Apr 16 16:53:00.435771 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:00.435752 2572 scope.go:117] "RemoveContainer" containerID="214d884dea9c96536794aac46e44962b0d8f8c5dd3cda66b570cbe911fb26c20" Apr 16 16:53:00.436053 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:53:00.436026 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"214d884dea9c96536794aac46e44962b0d8f8c5dd3cda66b570cbe911fb26c20\": container with ID starting with 214d884dea9c96536794aac46e44962b0d8f8c5dd3cda66b570cbe911fb26c20 not found: ID does not exist" containerID="214d884dea9c96536794aac46e44962b0d8f8c5dd3cda66b570cbe911fb26c20" Apr 16 16:53:00.436136 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:00.436061 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"214d884dea9c96536794aac46e44962b0d8f8c5dd3cda66b570cbe911fb26c20"} err="failed to get container status \"214d884dea9c96536794aac46e44962b0d8f8c5dd3cda66b570cbe911fb26c20\": rpc error: code = NotFound desc = could not find container \"214d884dea9c96536794aac46e44962b0d8f8c5dd3cda66b570cbe911fb26c20\": container with ID starting with 214d884dea9c96536794aac46e44962b0d8f8c5dd3cda66b570cbe911fb26c20 not found: ID does not exist" Apr 16 16:53:00.449023 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:00.448995 2572 kubelet.go:2553] 
"SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-df178-predictor-758f86b6c9-r4h2h"] Apr 16 16:53:00.453148 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:00.453120 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-df178-predictor-758f86b6c9-r4h2h"] Apr 16 16:53:01.909700 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:01.909667 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f100a932-f599-4d75-b479-7e63d1d0dcbc" path="/var/lib/kubelet/pods/f100a932-f599-4d75-b479-7e63d1d0dcbc/volumes" Apr 16 16:53:04.769726 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:04.769683 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-df178-67f9fc49fb-mccss" podUID="71258a00-ac83-4668-b057-9eef513ff832" containerName="switch-graph-df178" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 16:53:08.420607 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:08.420570 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-bab33-predictor-556678b97c-f2kql" podUID="7bf148ef-251a-42b6-ad49-1165aab7abd9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.34:8080: connect: connection refused" Apr 16 16:53:09.769525 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:09.769482 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-df178-67f9fc49fb-mccss" podUID="71258a00-ac83-4668-b057-9eef513ff832" containerName="switch-graph-df178" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 16:53:09.769957 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:09.769609 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/switch-graph-df178-67f9fc49fb-mccss" Apr 16 16:53:14.770805 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:14.770753 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-df178-67f9fc49fb-mccss" podUID="71258a00-ac83-4668-b057-9eef513ff832" containerName="switch-graph-df178" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 16:53:18.420548 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:18.420506 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-bab33-predictor-556678b97c-f2kql" podUID="7bf148ef-251a-42b6-ad49-1165aab7abd9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.34:8080: connect: connection refused" Apr 16 16:53:19.769957 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:19.769917 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-df178-67f9fc49fb-mccss" podUID="71258a00-ac83-4668-b057-9eef513ff832" containerName="switch-graph-df178" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 16:53:24.770131 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:24.770091 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-df178-67f9fc49fb-mccss" podUID="71258a00-ac83-4668-b057-9eef513ff832" containerName="switch-graph-df178" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 16:53:26.757416 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:26.757386 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-df178-67f9fc49fb-mccss" Apr 16 16:53:26.928342 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:26.928231 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71258a00-ac83-4668-b057-9eef513ff832-openshift-service-ca-bundle\") pod \"71258a00-ac83-4668-b057-9eef513ff832\" (UID: \"71258a00-ac83-4668-b057-9eef513ff832\") " Apr 16 16:53:26.928526 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:26.928352 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/71258a00-ac83-4668-b057-9eef513ff832-proxy-tls\") pod \"71258a00-ac83-4668-b057-9eef513ff832\" (UID: \"71258a00-ac83-4668-b057-9eef513ff832\") " Apr 16 16:53:26.928597 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:26.928575 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71258a00-ac83-4668-b057-9eef513ff832-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "71258a00-ac83-4668-b057-9eef513ff832" (UID: "71258a00-ac83-4668-b057-9eef513ff832"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 16:53:26.930436 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:26.930412 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71258a00-ac83-4668-b057-9eef513ff832-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "71258a00-ac83-4668-b057-9eef513ff832" (UID: "71258a00-ac83-4668-b057-9eef513ff832"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 16:53:27.029190 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:27.029153 2572 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71258a00-ac83-4668-b057-9eef513ff832-openshift-service-ca-bundle\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 16:53:27.029190 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:27.029183 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/71258a00-ac83-4668-b057-9eef513ff832-proxy-tls\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 16:53:27.522012 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:27.521980 2572 generic.go:358] "Generic (PLEG): container finished" podID="71258a00-ac83-4668-b057-9eef513ff832" containerID="ac70685209741b8c38663b240a25cbc65a4e5dd930c1b4682026fe58d8f1fad5" exitCode=0 Apr 16 16:53:27.522298 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:27.522042 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-df178-67f9fc49fb-mccss" Apr 16 16:53:27.522298 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:27.522069 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-df178-67f9fc49fb-mccss" event={"ID":"71258a00-ac83-4668-b057-9eef513ff832","Type":"ContainerDied","Data":"ac70685209741b8c38663b240a25cbc65a4e5dd930c1b4682026fe58d8f1fad5"} Apr 16 16:53:27.522298 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:27.522109 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-df178-67f9fc49fb-mccss" event={"ID":"71258a00-ac83-4668-b057-9eef513ff832","Type":"ContainerDied","Data":"4c61b7c34727aa1b4ed156e2835f48f3e9fcc97d84cebe4bfd5894d15b1176b4"} Apr 16 16:53:27.522298 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:27.522125 2572 scope.go:117] "RemoveContainer" containerID="ac70685209741b8c38663b240a25cbc65a4e5dd930c1b4682026fe58d8f1fad5" Apr 16 16:53:27.530605 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:27.530584 2572 scope.go:117] "RemoveContainer" containerID="ac70685209741b8c38663b240a25cbc65a4e5dd930c1b4682026fe58d8f1fad5" Apr 16 16:53:27.530855 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:53:27.530834 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac70685209741b8c38663b240a25cbc65a4e5dd930c1b4682026fe58d8f1fad5\": container with ID starting with ac70685209741b8c38663b240a25cbc65a4e5dd930c1b4682026fe58d8f1fad5 not found: ID does not exist" containerID="ac70685209741b8c38663b240a25cbc65a4e5dd930c1b4682026fe58d8f1fad5" Apr 16 16:53:27.530906 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:27.530865 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac70685209741b8c38663b240a25cbc65a4e5dd930c1b4682026fe58d8f1fad5"} err="failed to get container status \"ac70685209741b8c38663b240a25cbc65a4e5dd930c1b4682026fe58d8f1fad5\": rpc error: code = NotFound desc = could not find container \"ac70685209741b8c38663b240a25cbc65a4e5dd930c1b4682026fe58d8f1fad5\": container with ID starting with ac70685209741b8c38663b240a25cbc65a4e5dd930c1b4682026fe58d8f1fad5 not found: ID does not exist" Apr 16 16:53:27.545499 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:27.545469 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-df178-67f9fc49fb-mccss"] Apr 16 16:53:27.549418 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:27.549394 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/switch-graph-df178-67f9fc49fb-mccss"] Apr 16 16:53:27.909909 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:27.909833 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71258a00-ac83-4668-b057-9eef513ff832" path="/var/lib/kubelet/pods/71258a00-ac83-4668-b057-9eef513ff832/volumes" Apr 16 16:53:28.421112 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:28.421072 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-bab33-predictor-556678b97c-f2kql" podUID="7bf148ef-251a-42b6-ad49-1165aab7abd9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.34:8080: connect: connection refused" Apr 16 16:53:31.538410 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:31.538380 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-39f4a-89d49ccf6-5mbjd"] Apr 16 16:53:31.538784 
ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:31.538633 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/sequence-graph-39f4a-89d49ccf6-5mbjd" podUID="cc5b1b60-9892-47e3-b0b6-e0f3f8918f52" containerName="sequence-graph-39f4a" containerID="cri-o://c1f9d5ffe4a980deb257ea50569ef912dfd4e0caa6a4e3b0f1da613e82722865" gracePeriod=30 Apr 16 16:53:31.660609 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:31.660580 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-04ea1-predictor-6fcb9b4f57-4p49k"] Apr 16 16:53:31.660944 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:31.660932 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f100a932-f599-4d75-b479-7e63d1d0dcbc" containerName="kserve-container" Apr 16 16:53:31.660996 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:31.660946 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="f100a932-f599-4d75-b479-7e63d1d0dcbc" containerName="kserve-container" Apr 16 16:53:31.660996 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:31.660955 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="71258a00-ac83-4668-b057-9eef513ff832" containerName="switch-graph-df178" Apr 16 16:53:31.660996 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:31.660962 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="71258a00-ac83-4668-b057-9eef513ff832" containerName="switch-graph-df178" Apr 16 16:53:31.661089 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:31.661023 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="71258a00-ac83-4668-b057-9eef513ff832" containerName="switch-graph-df178" Apr 16 16:53:31.661089 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:31.661032 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="f100a932-f599-4d75-b479-7e63d1d0dcbc" containerName="kserve-container" Apr 16 16:53:31.663954 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:31.663931 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-04ea1-predictor-6fcb9b4f57-4p49k" Apr 16 16:53:31.673992 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:31.673623 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-04ea1-predictor-6fcb9b4f57-4p49k"] Apr 16 16:53:31.677537 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:31.677514 2572 util.go:30] "No sandbox for pod can be found. 
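
The "Killing container with a grace period" entry above carries gracePeriod=30: the runtime asks the container to stop (SIGTERM delivered via CRI-O) and only escalates to a hard kill if it has not exited within 30 seconds — the exitCode=0 ContainerDied events later in this log show these containers going quietly. A rough plain-process analogue in Go, with a local child process standing in for the CRI call (an assumption for illustration):

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    // Send SIGTERM, wait up to the grace period, then fall back to SIGKILL.
    func killWithGrace(cmd *exec.Cmd, grace time.Duration) error {
        if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
            return err
        }
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()
        select {
        case err := <-done:
            return err // exited (or was signaled) within the grace period
        case <-time.After(grace):
            return cmd.Process.Kill() // grace expired: SIGKILL
        }
    }

    func main() {
        cmd := exec.Command("sleep", "300")
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        fmt.Println(killWithGrace(cmd, 30*time.Second))
    }
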
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-04ea1-predictor-6fcb9b4f57-4p49k" Apr 16 16:53:31.698046 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:31.698017 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-39f4a-predictor-6b65487685-plh84"] Apr 16 16:53:31.698369 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:31.698318 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-39f4a-predictor-6b65487685-plh84" podUID="aca8fe33-d187-45b7-b520-fcde24ee2234" containerName="kserve-container" containerID="cri-o://1dfa221a078ca5c65ba0b91cb0c1450a6fda7c2260aff0c9ebb57beb6876d134" gracePeriod=30 Apr 16 16:53:31.821217 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:31.821189 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-04ea1-predictor-6fcb9b4f57-4p49k"] Apr 16 16:53:31.823899 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:53:31.823853 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c8e9f31_7fa7_4a4b_a3b5_01a3fe1d19b9.slice/crio-fbc8ca4b06b665231e170570cca1b98c9f2779dc282ba72c7152b5e42d03852c WatchSource:0}: Error finding container fbc8ca4b06b665231e170570cca1b98c9f2779dc282ba72c7152b5e42d03852c: Status 404 returned error can't find the container with id fbc8ca4b06b665231e170570cca1b98c9f2779dc282ba72c7152b5e42d03852c Apr 16 16:53:32.541689 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:32.541648 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-04ea1-predictor-6fcb9b4f57-4p49k" event={"ID":"9c8e9f31-7fa7-4a4b-a3b5-01a3fe1d19b9","Type":"ContainerStarted","Data":"c33120e4c17f31dd293f85852942bbabccebd46671b70b957c5aaf911dc20155"} Apr 16 16:53:32.541689 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:32.541690 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-04ea1-predictor-6fcb9b4f57-4p49k" event={"ID":"9c8e9f31-7fa7-4a4b-a3b5-01a3fe1d19b9","Type":"ContainerStarted","Data":"fbc8ca4b06b665231e170570cca1b98c9f2779dc282ba72c7152b5e42d03852c"} Apr 16 16:53:32.542107 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:32.541856 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-04ea1-predictor-6fcb9b4f57-4p49k" Apr 16 16:53:32.542932 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:32.542910 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-04ea1-predictor-6fcb9b4f57-4p49k" podUID="9c8e9f31-7fa7-4a4b-a3b5-01a3fe1d19b9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.35:8080: connect: connection refused" Apr 16 16:53:32.577165 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:32.577119 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-04ea1-predictor-6fcb9b4f57-4p49k" podStartSLOduration=1.577105141 podStartE2EDuration="1.577105141s" podCreationTimestamp="2026-04-16 16:53:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 16:53:32.575708491 +0000 UTC m=+1769.354555393" watchObservedRunningTime="2026-04-16 16:53:32.577105141 +0000 UTC m=+1769.355952031" Apr 16 16:53:33.545300 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:33.545235 2572 
prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-04ea1-predictor-6fcb9b4f57-4p49k" podUID="9c8e9f31-7fa7-4a4b-a3b5-01a3fe1d19b9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.35:8080: connect: connection refused" Apr 16 16:53:33.889945 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:33.889858 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-39f4a-89d49ccf6-5mbjd" podUID="cc5b1b60-9892-47e3-b0b6-e0f3f8918f52" containerName="sequence-graph-39f4a" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 16:53:35.045347 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:35.045322 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-39f4a-predictor-6b65487685-plh84" Apr 16 16:53:35.552268 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:35.552218 2572 generic.go:358] "Generic (PLEG): container finished" podID="aca8fe33-d187-45b7-b520-fcde24ee2234" containerID="1dfa221a078ca5c65ba0b91cb0c1450a6fda7c2260aff0c9ebb57beb6876d134" exitCode=0 Apr 16 16:53:35.552441 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:35.552288 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-39f4a-predictor-6b65487685-plh84" event={"ID":"aca8fe33-d187-45b7-b520-fcde24ee2234","Type":"ContainerDied","Data":"1dfa221a078ca5c65ba0b91cb0c1450a6fda7c2260aff0c9ebb57beb6876d134"} Apr 16 16:53:35.552441 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:35.552336 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-39f4a-predictor-6b65487685-plh84" event={"ID":"aca8fe33-d187-45b7-b520-fcde24ee2234","Type":"ContainerDied","Data":"985d3c3ff6d53d27ed6b544ca506a3c33c1a0a122449391ac53fb267c52d4844"} Apr 16 16:53:35.552441 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:35.552351 2572 scope.go:117] "RemoveContainer" containerID="1dfa221a078ca5c65ba0b91cb0c1450a6fda7c2260aff0c9ebb57beb6876d134" Apr 16 16:53:35.552441 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:35.552299 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-39f4a-predictor-6b65487685-plh84" Apr 16 16:53:35.560379 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:35.560362 2572 scope.go:117] "RemoveContainer" containerID="1dfa221a078ca5c65ba0b91cb0c1450a6fda7c2260aff0c9ebb57beb6876d134" Apr 16 16:53:35.560595 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:53:35.560579 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dfa221a078ca5c65ba0b91cb0c1450a6fda7c2260aff0c9ebb57beb6876d134\": container with ID starting with 1dfa221a078ca5c65ba0b91cb0c1450a6fda7c2260aff0c9ebb57beb6876d134 not found: ID does not exist" containerID="1dfa221a078ca5c65ba0b91cb0c1450a6fda7c2260aff0c9ebb57beb6876d134" Apr 16 16:53:35.560638 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:35.560605 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dfa221a078ca5c65ba0b91cb0c1450a6fda7c2260aff0c9ebb57beb6876d134"} err="failed to get container status \"1dfa221a078ca5c65ba0b91cb0c1450a6fda7c2260aff0c9ebb57beb6876d134\": rpc error: code = NotFound desc = could not find container \"1dfa221a078ca5c65ba0b91cb0c1450a6fda7c2260aff0c9ebb57beb6876d134\": container with ID starting with 1dfa221a078ca5c65ba0b91cb0c1450a6fda7c2260aff0c9ebb57beb6876d134 not found: ID does not exist" Apr 16 16:53:35.576335 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:35.576302 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-39f4a-predictor-6b65487685-plh84"] Apr 16 16:53:35.577847 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:35.577825 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-39f4a-predictor-6b65487685-plh84"] Apr 16 16:53:35.908979 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:35.908903 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aca8fe33-d187-45b7-b520-fcde24ee2234" path="/var/lib/kubelet/pods/aca8fe33-d187-45b7-b520-fcde24ee2234/volumes" Apr 16 16:53:38.420331 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:38.420293 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-bab33-predictor-556678b97c-f2kql" podUID="7bf148ef-251a-42b6-ad49-1165aab7abd9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.34:8080: connect: connection refused" Apr 16 16:53:38.890369 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:38.890326 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-39f4a-89d49ccf6-5mbjd" podUID="cc5b1b60-9892-47e3-b0b6-e0f3f8918f52" containerName="sequence-graph-39f4a" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 16:53:43.545966 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:43.545919 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-04ea1-predictor-6fcb9b4f57-4p49k" podUID="9c8e9f31-7fa7-4a4b-a3b5-01a3fe1d19b9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.35:8080: connect: connection refused" Apr 16 16:53:43.890537 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:43.890437 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-39f4a-89d49ccf6-5mbjd" podUID="cc5b1b60-9892-47e3-b0b6-e0f3f8918f52" containerName="sequence-graph-39f4a" probeResult="failure" output="HTTP probe failed with statuscode: 503" 
Apr 16 16:53:43.890705 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:43.890557 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-39f4a-89d49ccf6-5mbjd" Apr 16 16:53:48.422131 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:48.422099 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-bab33-predictor-556678b97c-f2kql" Apr 16 16:53:48.890124 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:48.890087 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-39f4a-89d49ccf6-5mbjd" podUID="cc5b1b60-9892-47e3-b0b6-e0f3f8918f52" containerName="sequence-graph-39f4a" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 16:53:53.546360 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:53.546269 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-04ea1-predictor-6fcb9b4f57-4p49k" podUID="9c8e9f31-7fa7-4a4b-a3b5-01a3fe1d19b9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.35:8080: connect: connection refused" Apr 16 16:53:53.890890 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:53.890807 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-39f4a-89d49ccf6-5mbjd" podUID="cc5b1b60-9892-47e3-b0b6-e0f3f8918f52" containerName="sequence-graph-39f4a" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 16:53:58.889829 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:53:58.889784 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-39f4a-89d49ccf6-5mbjd" podUID="cc5b1b60-9892-47e3-b0b6-e0f3f8918f52" containerName="sequence-graph-39f4a" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 16:54:01.643739 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:01.643706 2572 generic.go:358] "Generic (PLEG): container finished" podID="cc5b1b60-9892-47e3-b0b6-e0f3f8918f52" containerID="c1f9d5ffe4a980deb257ea50569ef912dfd4e0caa6a4e3b0f1da613e82722865" exitCode=0 Apr 16 16:54:01.644188 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:01.643754 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-39f4a-89d49ccf6-5mbjd" event={"ID":"cc5b1b60-9892-47e3-b0b6-e0f3f8918f52","Type":"ContainerDied","Data":"c1f9d5ffe4a980deb257ea50569ef912dfd4e0caa6a4e3b0f1da613e82722865"} Apr 16 16:54:01.688363 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:01.688340 2572 util.go:48] "No ready sandbox for pod can be found. 
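
Two distinct readiness-failure shapes alternate through this stretch: "dial tcp ...:8080: connect: connection refused" (a tcpSocket-style check against a server that is not listening yet) and "HTTP probe failed with statuscode: 503" (an httpGet-style check reaching a server that answers but reports itself unready). A simplified sketch of both checks — not the kubelet's prober; the 10.133.0.35:8080 target is just the pod IP from the entries above:

    package main

    import (
        "fmt"
        "net"
        "net/http"
        "time"
    )

    // tcpProbe fails while nothing is listening ("connection refused").
    func tcpProbe(addr string, timeout time.Duration) error {
        conn, err := net.DialTimeout("tcp", addr, timeout)
        if err != nil {
            return err
        }
        return conn.Close()
    }

    // httpProbe fails on any status outside 2xx/3xx, e.g. a 503.
    func httpProbe(url string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        fmt.Println(tcpProbe("10.133.0.35:8080", time.Second))
        fmt.Println(httpProbe("http://10.133.0.35:8080/"))
    }

Kubernetes counts any 2xx/3xx answer as ready, which is why the 503s above keep these pods NotReady even once their sockets accept connections.
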
Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-39f4a-89d49ccf6-5mbjd" Apr 16 16:54:01.797634 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:01.797552 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cc5b1b60-9892-47e3-b0b6-e0f3f8918f52-proxy-tls\") pod \"cc5b1b60-9892-47e3-b0b6-e0f3f8918f52\" (UID: \"cc5b1b60-9892-47e3-b0b6-e0f3f8918f52\") " Apr 16 16:54:01.797634 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:01.797594 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc5b1b60-9892-47e3-b0b6-e0f3f8918f52-openshift-service-ca-bundle\") pod \"cc5b1b60-9892-47e3-b0b6-e0f3f8918f52\" (UID: \"cc5b1b60-9892-47e3-b0b6-e0f3f8918f52\") " Apr 16 16:54:01.797967 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:01.797945 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc5b1b60-9892-47e3-b0b6-e0f3f8918f52-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "cc5b1b60-9892-47e3-b0b6-e0f3f8918f52" (UID: "cc5b1b60-9892-47e3-b0b6-e0f3f8918f52"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 16:54:01.799669 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:01.799648 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc5b1b60-9892-47e3-b0b6-e0f3f8918f52-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "cc5b1b60-9892-47e3-b0b6-e0f3f8918f52" (UID: "cc5b1b60-9892-47e3-b0b6-e0f3f8918f52"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 16:54:01.898777 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:01.898740 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cc5b1b60-9892-47e3-b0b6-e0f3f8918f52-proxy-tls\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 16:54:01.898777 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:01.898770 2572 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc5b1b60-9892-47e3-b0b6-e0f3f8918f52-openshift-service-ca-bundle\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 16:54:02.649795 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:02.649757 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-39f4a-89d49ccf6-5mbjd" event={"ID":"cc5b1b60-9892-47e3-b0b6-e0f3f8918f52","Type":"ContainerDied","Data":"7a840c8bc3ddf534ea2627e3a1132e623dd316ad1a714d7b1166d3c799adf495"} Apr 16 16:54:02.649795 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:02.649794 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-39f4a-89d49ccf6-5mbjd" Apr 16 16:54:02.650303 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:02.649800 2572 scope.go:117] "RemoveContainer" containerID="c1f9d5ffe4a980deb257ea50569ef912dfd4e0caa6a4e3b0f1da613e82722865" Apr 16 16:54:02.666729 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:02.666703 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-39f4a-89d49ccf6-5mbjd"] Apr 16 16:54:02.670128 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:02.670099 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-39f4a-89d49ccf6-5mbjd"] Apr 16 16:54:03.546330 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:03.546285 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-04ea1-predictor-6fcb9b4f57-4p49k" podUID="9c8e9f31-7fa7-4a4b-a3b5-01a3fe1d19b9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.35:8080: connect: connection refused" Apr 16 16:54:03.911482 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:03.911403 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc5b1b60-9892-47e3-b0b6-e0f3f8918f52" path="/var/lib/kubelet/pods/cc5b1b60-9892-47e3-b0b6-e0f3f8918f52/volumes" Apr 16 16:54:06.843046 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:06.842961 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-bab33-85475f65fb-t945p"] Apr 16 16:54:06.843412 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:06.843338 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aca8fe33-d187-45b7-b520-fcde24ee2234" containerName="kserve-container" Apr 16 16:54:06.843412 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:06.843350 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca8fe33-d187-45b7-b520-fcde24ee2234" containerName="kserve-container" Apr 16 16:54:06.843412 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:06.843361 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cc5b1b60-9892-47e3-b0b6-e0f3f8918f52" containerName="sequence-graph-39f4a" Apr 16 16:54:06.843412 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:06.843367 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc5b1b60-9892-47e3-b0b6-e0f3f8918f52" containerName="sequence-graph-39f4a" Apr 16 16:54:06.843550 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:06.843425 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="aca8fe33-d187-45b7-b520-fcde24ee2234" containerName="kserve-container" Apr 16 16:54:06.843550 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:06.843438 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="cc5b1b60-9892-47e3-b0b6-e0f3f8918f52" containerName="sequence-graph-39f4a" Apr 16 16:54:06.847865 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:06.847842 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-bab33-85475f65fb-t945p" Apr 16 16:54:06.850484 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:06.850454 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"ensemble-graph-bab33-serving-cert\"" Apr 16 16:54:06.850608 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:06.850454 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"ensemble-graph-bab33-kube-rbac-proxy-sar-config\"" Apr 16 16:54:06.850608 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:06.850466 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"openshift-service-ca.crt\"" Apr 16 16:54:06.856215 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:06.856193 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-bab33-85475f65fb-t945p"] Apr 16 16:54:06.947631 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:06.947593 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a7d8f44-59d2-49fb-bdcf-81d13e311d36-openshift-service-ca-bundle\") pod \"ensemble-graph-bab33-85475f65fb-t945p\" (UID: \"7a7d8f44-59d2-49fb-bdcf-81d13e311d36\") " pod="kserve-ci-e2e-test/ensemble-graph-bab33-85475f65fb-t945p" Apr 16 16:54:06.947832 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:06.947660 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7a7d8f44-59d2-49fb-bdcf-81d13e311d36-proxy-tls\") pod \"ensemble-graph-bab33-85475f65fb-t945p\" (UID: \"7a7d8f44-59d2-49fb-bdcf-81d13e311d36\") " pod="kserve-ci-e2e-test/ensemble-graph-bab33-85475f65fb-t945p" Apr 16 16:54:07.048688 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:07.048649 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a7d8f44-59d2-49fb-bdcf-81d13e311d36-openshift-service-ca-bundle\") pod \"ensemble-graph-bab33-85475f65fb-t945p\" (UID: \"7a7d8f44-59d2-49fb-bdcf-81d13e311d36\") " pod="kserve-ci-e2e-test/ensemble-graph-bab33-85475f65fb-t945p" Apr 16 16:54:07.048890 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:07.048705 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7a7d8f44-59d2-49fb-bdcf-81d13e311d36-proxy-tls\") pod \"ensemble-graph-bab33-85475f65fb-t945p\" (UID: \"7a7d8f44-59d2-49fb-bdcf-81d13e311d36\") " pod="kserve-ci-e2e-test/ensemble-graph-bab33-85475f65fb-t945p" Apr 16 16:54:07.049332 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:07.049309 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a7d8f44-59d2-49fb-bdcf-81d13e311d36-openshift-service-ca-bundle\") pod \"ensemble-graph-bab33-85475f65fb-t945p\" (UID: \"7a7d8f44-59d2-49fb-bdcf-81d13e311d36\") " pod="kserve-ci-e2e-test/ensemble-graph-bab33-85475f65fb-t945p" Apr 16 16:54:07.051127 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:07.051105 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7a7d8f44-59d2-49fb-bdcf-81d13e311d36-proxy-tls\") pod \"ensemble-graph-bab33-85475f65fb-t945p\" (UID: 
\"7a7d8f44-59d2-49fb-bdcf-81d13e311d36\") " pod="kserve-ci-e2e-test/ensemble-graph-bab33-85475f65fb-t945p" Apr 16 16:54:07.158936 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:07.158832 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-bab33-85475f65fb-t945p" Apr 16 16:54:07.283590 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:07.283561 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-bab33-85475f65fb-t945p"] Apr 16 16:54:07.286153 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:54:07.286125 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a7d8f44_59d2_49fb_bdcf_81d13e311d36.slice/crio-8c074a2b76f6f1bd30633c15515a172cc11c3ae287a51169fc96595fab744c02 WatchSource:0}: Error finding container 8c074a2b76f6f1bd30633c15515a172cc11c3ae287a51169fc96595fab744c02: Status 404 returned error can't find the container with id 8c074a2b76f6f1bd30633c15515a172cc11c3ae287a51169fc96595fab744c02 Apr 16 16:54:07.670111 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:07.670072 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-bab33-85475f65fb-t945p" event={"ID":"7a7d8f44-59d2-49fb-bdcf-81d13e311d36","Type":"ContainerStarted","Data":"919002a08632871f4c246047232ef75924a45beef3e15e54fdad963baef3d1f8"} Apr 16 16:54:07.670111 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:07.670112 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-bab33-85475f65fb-t945p" event={"ID":"7a7d8f44-59d2-49fb-bdcf-81d13e311d36","Type":"ContainerStarted","Data":"8c074a2b76f6f1bd30633c15515a172cc11c3ae287a51169fc96595fab744c02"} Apr 16 16:54:07.670375 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:07.670200 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/ensemble-graph-bab33-85475f65fb-t945p" Apr 16 16:54:07.688296 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:07.688234 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/ensemble-graph-bab33-85475f65fb-t945p" podStartSLOduration=1.688220878 podStartE2EDuration="1.688220878s" podCreationTimestamp="2026-04-16 16:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 16:54:07.68664751 +0000 UTC m=+1804.465494402" watchObservedRunningTime="2026-04-16 16:54:07.688220878 +0000 UTC m=+1804.467067768" Apr 16 16:54:13.545760 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:13.545713 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-04ea1-predictor-6fcb9b4f57-4p49k" podUID="9c8e9f31-7fa7-4a4b-a3b5-01a3fe1d19b9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.35:8080: connect: connection refused" Apr 16 16:54:13.680285 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:13.680231 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/ensemble-graph-bab33-85475f65fb-t945p" Apr 16 16:54:16.965397 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:16.965360 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-bab33-85475f65fb-t945p"] Apr 16 16:54:16.965872 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:16.965561 2572 kuberuntime_container.go:864] "Killing container 
with a grace period" pod="kserve-ci-e2e-test/ensemble-graph-bab33-85475f65fb-t945p" podUID="7a7d8f44-59d2-49fb-bdcf-81d13e311d36" containerName="ensemble-graph-bab33" containerID="cri-o://919002a08632871f4c246047232ef75924a45beef3e15e54fdad963baef3d1f8" gracePeriod=30 Apr 16 16:54:17.163046 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:17.163013 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-bab33-predictor-556678b97c-f2kql"] Apr 16 16:54:17.163313 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:17.163291 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-bab33-predictor-556678b97c-f2kql" podUID="7bf148ef-251a-42b6-ad49-1165aab7abd9" containerName="kserve-container" containerID="cri-o://3fb4d451f3c73ca90294532271f30d1073a39ac8a500d50b7180f1b296bf0609" gracePeriod=30 Apr 16 16:54:17.211051 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:17.211016 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-da571-predictor-d95c9c8b7-2qvf7"] Apr 16 16:54:17.214376 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:17.214353 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-da571-predictor-d95c9c8b7-2qvf7" Apr 16 16:54:17.223844 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:17.223823 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-da571-predictor-d95c9c8b7-2qvf7" Apr 16 16:54:17.231674 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:17.231635 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-da571-predictor-d95c9c8b7-2qvf7"] Apr 16 16:54:17.373870 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:17.373827 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-da571-predictor-d95c9c8b7-2qvf7"] Apr 16 16:54:17.377753 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:54:17.377723 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6ad7b17_f0cb_4a25_a648_a2c17c4b1205.slice/crio-0dbe7de85e0b1d39fc998c95a5b4261b8d30cf6d5bbac0bcb123c7606148bf99 WatchSource:0}: Error finding container 0dbe7de85e0b1d39fc998c95a5b4261b8d30cf6d5bbac0bcb123c7606148bf99: Status 404 returned error can't find the container with id 0dbe7de85e0b1d39fc998c95a5b4261b8d30cf6d5bbac0bcb123c7606148bf99 Apr 16 16:54:17.704568 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:17.704534 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-da571-predictor-d95c9c8b7-2qvf7" event={"ID":"c6ad7b17-f0cb-4a25-a648-a2c17c4b1205","Type":"ContainerStarted","Data":"e98c8273459181596c471d88a5ad3876697b50fe84f96457f8c58905052b1792"} Apr 16 16:54:17.704568 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:17.704570 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-da571-predictor-d95c9c8b7-2qvf7" event={"ID":"c6ad7b17-f0cb-4a25-a648-a2c17c4b1205","Type":"ContainerStarted","Data":"0dbe7de85e0b1d39fc998c95a5b4261b8d30cf6d5bbac0bcb123c7606148bf99"} Apr 16 16:54:17.704805 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:17.704742 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-da571-predictor-d95c9c8b7-2qvf7" Apr 16 16:54:17.706165 ip-10-0-128-173 
kubenswrapper[2572]: I0416 16:54:17.706137 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-da571-predictor-d95c9c8b7-2qvf7" podUID="c6ad7b17-f0cb-4a25-a648-a2c17c4b1205" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.37:8080: connect: connection refused" Apr 16 16:54:17.721663 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:17.721566 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-da571-predictor-d95c9c8b7-2qvf7" podStartSLOduration=0.72154366 podStartE2EDuration="721.54366ms" podCreationTimestamp="2026-04-16 16:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 16:54:17.720733578 +0000 UTC m=+1814.499580469" watchObservedRunningTime="2026-04-16 16:54:17.72154366 +0000 UTC m=+1814.500390552" Apr 16 16:54:18.420286 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:18.420225 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-bab33-predictor-556678b97c-f2kql" podUID="7bf148ef-251a-42b6-ad49-1165aab7abd9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.34:8080: connect: connection refused" Apr 16 16:54:18.678552 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:18.678461 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-bab33-85475f65fb-t945p" podUID="7a7d8f44-59d2-49fb-bdcf-81d13e311d36" containerName="ensemble-graph-bab33" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 16:54:18.708680 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:18.708641 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-da571-predictor-d95c9c8b7-2qvf7" podUID="c6ad7b17-f0cb-4a25-a648-a2c17c4b1205" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.37:8080: connect: connection refused" Apr 16 16:54:20.518384 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:20.518359 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-bab33-predictor-556678b97c-f2kql" Apr 16 16:54:20.715893 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:20.715856 2572 generic.go:358] "Generic (PLEG): container finished" podID="7bf148ef-251a-42b6-ad49-1165aab7abd9" containerID="3fb4d451f3c73ca90294532271f30d1073a39ac8a500d50b7180f1b296bf0609" exitCode=0 Apr 16 16:54:20.716079 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:20.715926 2572 util.go:48] "No ready sandbox for pod can be found. 
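
The pod_startup_latency_tracker entry above records podStartSLOduration equal to podStartE2EDuration (721.54366ms), with both pull timestamps at the zero value ("0001-01-01 00:00:00"): the image was already on the node, so no pull time gets excluded. A hedged reconstruction of that arithmetic:

    package main

    import (
        "fmt"
        "time"
    )

    // With no image pull recorded, the SLO duration is essentially
    // observedRunningTime - podCreationTimestamp; otherwise the pull window
    // is subtracted out.
    func startupSLO(created, firstPull, lastPull, running time.Time) time.Duration {
        d := running.Sub(created)
        if !firstPull.IsZero() && !lastPull.IsZero() {
            d -= lastPull.Sub(firstPull) // exclude image-pull time from the SLO
        }
        return d
    }

    func main() {
        created := time.Date(2026, 4, 16, 16, 54, 17, 0, time.UTC)
        running := created.Add(721543660 * time.Nanosecond)
        fmt.Println(startupSLO(created, time.Time{}, time.Time{}, running)) // 721.54366ms
    }
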
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-bab33-predictor-556678b97c-f2kql" Apr 16 16:54:20.716079 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:20.715942 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-bab33-predictor-556678b97c-f2kql" event={"ID":"7bf148ef-251a-42b6-ad49-1165aab7abd9","Type":"ContainerDied","Data":"3fb4d451f3c73ca90294532271f30d1073a39ac8a500d50b7180f1b296bf0609"} Apr 16 16:54:20.716079 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:20.715986 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-bab33-predictor-556678b97c-f2kql" event={"ID":"7bf148ef-251a-42b6-ad49-1165aab7abd9","Type":"ContainerDied","Data":"d17ddfaa7e7c6cc814b26f9452f4917081d243d96644a469051a21204d9e8913"} Apr 16 16:54:20.716079 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:20.716008 2572 scope.go:117] "RemoveContainer" containerID="3fb4d451f3c73ca90294532271f30d1073a39ac8a500d50b7180f1b296bf0609" Apr 16 16:54:20.724591 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:20.724571 2572 scope.go:117] "RemoveContainer" containerID="3fb4d451f3c73ca90294532271f30d1073a39ac8a500d50b7180f1b296bf0609" Apr 16 16:54:20.724855 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:54:20.724836 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fb4d451f3c73ca90294532271f30d1073a39ac8a500d50b7180f1b296bf0609\": container with ID starting with 3fb4d451f3c73ca90294532271f30d1073a39ac8a500d50b7180f1b296bf0609 not found: ID does not exist" containerID="3fb4d451f3c73ca90294532271f30d1073a39ac8a500d50b7180f1b296bf0609" Apr 16 16:54:20.724908 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:20.724864 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fb4d451f3c73ca90294532271f30d1073a39ac8a500d50b7180f1b296bf0609"} err="failed to get container status \"3fb4d451f3c73ca90294532271f30d1073a39ac8a500d50b7180f1b296bf0609\": rpc error: code = NotFound desc = could not find container \"3fb4d451f3c73ca90294532271f30d1073a39ac8a500d50b7180f1b296bf0609\": container with ID starting with 3fb4d451f3c73ca90294532271f30d1073a39ac8a500d50b7180f1b296bf0609 not found: ID does not exist" Apr 16 16:54:20.742016 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:20.741974 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-bab33-predictor-556678b97c-f2kql"] Apr 16 16:54:20.745924 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:20.745895 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-bab33-predictor-556678b97c-f2kql"] Apr 16 16:54:21.909198 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:21.909170 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bf148ef-251a-42b6-ad49-1165aab7abd9" path="/var/lib/kubelet/pods/7bf148ef-251a-42b6-ad49-1165aab7abd9/volumes" Apr 16 16:54:23.546442 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:23.546408 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-04ea1-predictor-6fcb9b4f57-4p49k" Apr 16 16:54:23.678644 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:23.678606 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-bab33-85475f65fb-t945p" podUID="7a7d8f44-59d2-49fb-bdcf-81d13e311d36" containerName="ensemble-graph-bab33" probeResult="failure" 
output="HTTP probe failed with statuscode: 503" Apr 16 16:54:28.678534 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:28.678488 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-bab33-85475f65fb-t945p" podUID="7a7d8f44-59d2-49fb-bdcf-81d13e311d36" containerName="ensemble-graph-bab33" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 16:54:28.678956 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:28.678597 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/ensemble-graph-bab33-85475f65fb-t945p" Apr 16 16:54:28.709551 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:28.709507 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-da571-predictor-d95c9c8b7-2qvf7" podUID="c6ad7b17-f0cb-4a25-a648-a2c17c4b1205" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.37:8080: connect: connection refused" Apr 16 16:54:33.678127 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:33.678079 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-bab33-85475f65fb-t945p" podUID="7a7d8f44-59d2-49fb-bdcf-81d13e311d36" containerName="ensemble-graph-bab33" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 16:54:38.678317 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:38.678279 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-bab33-85475f65fb-t945p" podUID="7a7d8f44-59d2-49fb-bdcf-81d13e311d36" containerName="ensemble-graph-bab33" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 16:54:38.709515 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:38.709470 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-da571-predictor-d95c9c8b7-2qvf7" podUID="c6ad7b17-f0cb-4a25-a648-a2c17c4b1205" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.37:8080: connect: connection refused" Apr 16 16:54:41.699132 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:41.699093 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/sequence-graph-04ea1-5fbb7b7bc6-nwwbf"] Apr 16 16:54:41.699567 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:41.699499 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7bf148ef-251a-42b6-ad49-1165aab7abd9" containerName="kserve-container" Apr 16 16:54:41.699567 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:41.699513 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bf148ef-251a-42b6-ad49-1165aab7abd9" containerName="kserve-container" Apr 16 16:54:41.699646 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:41.699595 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="7bf148ef-251a-42b6-ad49-1165aab7abd9" containerName="kserve-container" Apr 16 16:54:41.702537 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:41.702517 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-04ea1-5fbb7b7bc6-nwwbf" Apr 16 16:54:41.705067 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:41.705044 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-04ea1-serving-cert\"" Apr 16 16:54:41.705198 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:41.705149 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-04ea1-kube-rbac-proxy-sar-config\"" Apr 16 16:54:41.711961 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:41.711935 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-04ea1-5fbb7b7bc6-nwwbf"] Apr 16 16:54:41.764368 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:41.764324 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37d34243-13cb-4b01-ae05-97a2c9e61d31-proxy-tls\") pod \"sequence-graph-04ea1-5fbb7b7bc6-nwwbf\" (UID: \"37d34243-13cb-4b01-ae05-97a2c9e61d31\") " pod="kserve-ci-e2e-test/sequence-graph-04ea1-5fbb7b7bc6-nwwbf" Apr 16 16:54:41.764559 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:41.764376 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37d34243-13cb-4b01-ae05-97a2c9e61d31-openshift-service-ca-bundle\") pod \"sequence-graph-04ea1-5fbb7b7bc6-nwwbf\" (UID: \"37d34243-13cb-4b01-ae05-97a2c9e61d31\") " pod="kserve-ci-e2e-test/sequence-graph-04ea1-5fbb7b7bc6-nwwbf" Apr 16 16:54:41.865724 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:41.865690 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37d34243-13cb-4b01-ae05-97a2c9e61d31-proxy-tls\") pod \"sequence-graph-04ea1-5fbb7b7bc6-nwwbf\" (UID: \"37d34243-13cb-4b01-ae05-97a2c9e61d31\") " pod="kserve-ci-e2e-test/sequence-graph-04ea1-5fbb7b7bc6-nwwbf" Apr 16 16:54:41.865906 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:41.865752 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37d34243-13cb-4b01-ae05-97a2c9e61d31-openshift-service-ca-bundle\") pod \"sequence-graph-04ea1-5fbb7b7bc6-nwwbf\" (UID: \"37d34243-13cb-4b01-ae05-97a2c9e61d31\") " pod="kserve-ci-e2e-test/sequence-graph-04ea1-5fbb7b7bc6-nwwbf" Apr 16 16:54:41.866366 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:41.866348 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37d34243-13cb-4b01-ae05-97a2c9e61d31-openshift-service-ca-bundle\") pod \"sequence-graph-04ea1-5fbb7b7bc6-nwwbf\" (UID: \"37d34243-13cb-4b01-ae05-97a2c9e61d31\") " pod="kserve-ci-e2e-test/sequence-graph-04ea1-5fbb7b7bc6-nwwbf" Apr 16 16:54:41.868104 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:41.868084 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37d34243-13cb-4b01-ae05-97a2c9e61d31-proxy-tls\") pod \"sequence-graph-04ea1-5fbb7b7bc6-nwwbf\" (UID: \"37d34243-13cb-4b01-ae05-97a2c9e61d31\") " pod="kserve-ci-e2e-test/sequence-graph-04ea1-5fbb7b7bc6-nwwbf" Apr 16 16:54:42.013473 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:42.013376 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-04ea1-5fbb7b7bc6-nwwbf" Apr 16 16:54:42.144032 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:42.143873 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-04ea1-5fbb7b7bc6-nwwbf"] Apr 16 16:54:42.146773 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:54:42.146744 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37d34243_13cb_4b01_ae05_97a2c9e61d31.slice/crio-b4730583082319a1229a453dcd52e787c7f35c356f9a31b0e4c46a82cf053a1b WatchSource:0}: Error finding container b4730583082319a1229a453dcd52e787c7f35c356f9a31b0e4c46a82cf053a1b: Status 404 returned error can't find the container with id b4730583082319a1229a453dcd52e787c7f35c356f9a31b0e4c46a82cf053a1b Apr 16 16:54:42.790886 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:42.790849 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-04ea1-5fbb7b7bc6-nwwbf" event={"ID":"37d34243-13cb-4b01-ae05-97a2c9e61d31","Type":"ContainerStarted","Data":"f7e09468ece5cd9a99b29b62ce71df6cab9b9ccd8993148701abc74f0b92df1e"} Apr 16 16:54:42.790886 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:42.790889 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-04ea1-5fbb7b7bc6-nwwbf" event={"ID":"37d34243-13cb-4b01-ae05-97a2c9e61d31","Type":"ContainerStarted","Data":"b4730583082319a1229a453dcd52e787c7f35c356f9a31b0e4c46a82cf053a1b"} Apr 16 16:54:42.791304 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:42.790915 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-04ea1-5fbb7b7bc6-nwwbf" Apr 16 16:54:42.809715 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:42.809654 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/sequence-graph-04ea1-5fbb7b7bc6-nwwbf" podStartSLOduration=1.809636973 podStartE2EDuration="1.809636973s" podCreationTimestamp="2026-04-16 16:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 16:54:42.807875706 +0000 UTC m=+1839.586722588" watchObservedRunningTime="2026-04-16 16:54:42.809636973 +0000 UTC m=+1839.588483863" Apr 16 16:54:43.678206 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:43.678164 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-bab33-85475f65fb-t945p" podUID="7a7d8f44-59d2-49fb-bdcf-81d13e311d36" containerName="ensemble-graph-bab33" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 16:54:47.105439 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:47.105408 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-bab33-85475f65fb-t945p" Apr 16 16:54:47.212635 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:47.212602 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a7d8f44-59d2-49fb-bdcf-81d13e311d36-openshift-service-ca-bundle\") pod \"7a7d8f44-59d2-49fb-bdcf-81d13e311d36\" (UID: \"7a7d8f44-59d2-49fb-bdcf-81d13e311d36\") " Apr 16 16:54:47.212844 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:47.212695 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7a7d8f44-59d2-49fb-bdcf-81d13e311d36-proxy-tls\") pod \"7a7d8f44-59d2-49fb-bdcf-81d13e311d36\" (UID: \"7a7d8f44-59d2-49fb-bdcf-81d13e311d36\") " Apr 16 16:54:47.212976 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:47.212951 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a7d8f44-59d2-49fb-bdcf-81d13e311d36-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "7a7d8f44-59d2-49fb-bdcf-81d13e311d36" (UID: "7a7d8f44-59d2-49fb-bdcf-81d13e311d36"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 16:54:47.214812 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:47.214785 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a7d8f44-59d2-49fb-bdcf-81d13e311d36-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "7a7d8f44-59d2-49fb-bdcf-81d13e311d36" (UID: "7a7d8f44-59d2-49fb-bdcf-81d13e311d36"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 16:54:47.313554 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:47.313518 2572 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a7d8f44-59d2-49fb-bdcf-81d13e311d36-openshift-service-ca-bundle\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 16:54:47.313554 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:47.313548 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7a7d8f44-59d2-49fb-bdcf-81d13e311d36-proxy-tls\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 16:54:47.807584 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:47.807548 2572 generic.go:358] "Generic (PLEG): container finished" podID="7a7d8f44-59d2-49fb-bdcf-81d13e311d36" containerID="919002a08632871f4c246047232ef75924a45beef3e15e54fdad963baef3d1f8" exitCode=0 Apr 16 16:54:47.807864 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:47.807610 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-bab33-85475f65fb-t945p" Apr 16 16:54:47.807864 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:47.807631 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-bab33-85475f65fb-t945p" event={"ID":"7a7d8f44-59d2-49fb-bdcf-81d13e311d36","Type":"ContainerDied","Data":"919002a08632871f4c246047232ef75924a45beef3e15e54fdad963baef3d1f8"} Apr 16 16:54:47.807864 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:47.807670 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-bab33-85475f65fb-t945p" event={"ID":"7a7d8f44-59d2-49fb-bdcf-81d13e311d36","Type":"ContainerDied","Data":"8c074a2b76f6f1bd30633c15515a172cc11c3ae287a51169fc96595fab744c02"} Apr 16 16:54:47.807864 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:47.807688 2572 scope.go:117] "RemoveContainer" containerID="919002a08632871f4c246047232ef75924a45beef3e15e54fdad963baef3d1f8" Apr 16 16:54:47.815864 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:47.815845 2572 scope.go:117] "RemoveContainer" containerID="919002a08632871f4c246047232ef75924a45beef3e15e54fdad963baef3d1f8" Apr 16 16:54:47.816169 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:54:47.816148 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"919002a08632871f4c246047232ef75924a45beef3e15e54fdad963baef3d1f8\": container with ID starting with 919002a08632871f4c246047232ef75924a45beef3e15e54fdad963baef3d1f8 not found: ID does not exist" containerID="919002a08632871f4c246047232ef75924a45beef3e15e54fdad963baef3d1f8" Apr 16 16:54:47.816280 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:47.816178 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"919002a08632871f4c246047232ef75924a45beef3e15e54fdad963baef3d1f8"} err="failed to get container status \"919002a08632871f4c246047232ef75924a45beef3e15e54fdad963baef3d1f8\": rpc error: code = NotFound desc = could not find container \"919002a08632871f4c246047232ef75924a45beef3e15e54fdad963baef3d1f8\": container with ID starting with 919002a08632871f4c246047232ef75924a45beef3e15e54fdad963baef3d1f8 not found: ID does not exist" Apr 16 16:54:47.830824 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:47.830793 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-bab33-85475f65fb-t945p"] Apr 16 16:54:47.837038 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:47.837007 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-bab33-85475f65fb-t945p"] Apr 16 16:54:47.910038 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:47.909996 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a7d8f44-59d2-49fb-bdcf-81d13e311d36" path="/var/lib/kubelet/pods/7a7d8f44-59d2-49fb-bdcf-81d13e311d36/volumes" Apr 16 16:54:48.709511 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:48.709454 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-da571-predictor-d95c9c8b7-2qvf7" podUID="c6ad7b17-f0cb-4a25-a648-a2c17c4b1205" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.37:8080: connect: connection refused" Apr 16 16:54:48.799915 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:48.799881 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/sequence-graph-04ea1-5fbb7b7bc6-nwwbf" Apr 16 
16:54:51.777977 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:51.777942 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-04ea1-5fbb7b7bc6-nwwbf"]
Apr 16 16:54:51.778459 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:51.778199 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/sequence-graph-04ea1-5fbb7b7bc6-nwwbf" podUID="37d34243-13cb-4b01-ae05-97a2c9e61d31" containerName="sequence-graph-04ea1" containerID="cri-o://f7e09468ece5cd9a99b29b62ce71df6cab9b9ccd8993148701abc74f0b92df1e" gracePeriod=30
Apr 16 16:54:51.888355 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:51.888315 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-04ea1-predictor-6fcb9b4f57-4p49k"]
Apr 16 16:54:51.888645 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:51.888619 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-04ea1-predictor-6fcb9b4f57-4p49k" podUID="9c8e9f31-7fa7-4a4b-a3b5-01a3fe1d19b9" containerName="kserve-container" containerID="cri-o://c33120e4c17f31dd293f85852942bbabccebd46671b70b957c5aaf911dc20155" gracePeriod=30
Apr 16 16:54:51.916305 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:51.916264 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-64f8d-predictor-6bdf696c95-hz4cp"]
Apr 16 16:54:51.916666 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:51.916650 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7a7d8f44-59d2-49fb-bdcf-81d13e311d36" containerName="ensemble-graph-bab33"
Apr 16 16:54:51.916721 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:51.916670 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a7d8f44-59d2-49fb-bdcf-81d13e311d36" containerName="ensemble-graph-bab33"
Apr 16 16:54:51.916757 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:51.916739 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="7a7d8f44-59d2-49fb-bdcf-81d13e311d36" containerName="ensemble-graph-bab33"
Apr 16 16:54:51.921164 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:51.921141 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-64f8d-predictor-6bdf696c95-hz4cp"
Apr 16 16:54:51.933533 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:51.933504 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-64f8d-predictor-6bdf696c95-hz4cp"
Apr 16 16:54:51.939984 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:51.939953 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-64f8d-predictor-6bdf696c95-hz4cp"]
Apr 16 16:54:52.082962 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:52.082929 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-64f8d-predictor-6bdf696c95-hz4cp"]
Apr 16 16:54:52.084448 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:54:52.084411 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7366816_b46c_4799_89c3_aecd3144302d.slice/crio-2088604dbee720ec40e98bf91ea25859fdb22eb8ab48922ac1fec9365ac8fb9c WatchSource:0}: Error finding container 2088604dbee720ec40e98bf91ea25859fdb22eb8ab48922ac1fec9365ac8fb9c: Status 404 returned error can't find the container with id 2088604dbee720ec40e98bf91ea25859fdb22eb8ab48922ac1fec9365ac8fb9c
Apr 16 16:54:52.826392 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:52.826351 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-64f8d-predictor-6bdf696c95-hz4cp" event={"ID":"b7366816-b46c-4799-89c3-aecd3144302d","Type":"ContainerStarted","Data":"b866da079a3cf05548fd366a76a6b141962164d7398cc0a263630076c032c6c7"}
Apr 16 16:54:52.826392 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:52.826394 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-64f8d-predictor-6bdf696c95-hz4cp" event={"ID":"b7366816-b46c-4799-89c3-aecd3144302d","Type":"ContainerStarted","Data":"2088604dbee720ec40e98bf91ea25859fdb22eb8ab48922ac1fec9365ac8fb9c"}
Apr 16 16:54:52.826949 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:52.826583 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-64f8d-predictor-6bdf696c95-hz4cp"
Apr 16 16:54:52.828228 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:52.828177 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-64f8d-predictor-6bdf696c95-hz4cp" podUID="b7366816-b46c-4799-89c3-aecd3144302d" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.39:8080: connect: connection refused"
Apr 16 16:54:52.859585 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:52.859536 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-64f8d-predictor-6bdf696c95-hz4cp" podStartSLOduration=1.8595201719999999 podStartE2EDuration="1.859520172s" podCreationTimestamp="2026-04-16 16:54:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 16:54:52.856559827 +0000 UTC m=+1849.635406732" watchObservedRunningTime="2026-04-16 16:54:52.859520172 +0000 UTC m=+1849.638367089"
Apr 16 16:54:53.546354 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:53.546305 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-04ea1-predictor-6fcb9b4f57-4p49k" podUID="9c8e9f31-7fa7-4a4b-a3b5-01a3fe1d19b9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.35:8080: connect: connection refused"
Apr 16 16:54:53.799484 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:53.799380 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-04ea1-5fbb7b7bc6-nwwbf" podUID="37d34243-13cb-4b01-ae05-97a2c9e61d31" containerName="sequence-graph-04ea1" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 16:54:53.831271 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:53.831211 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-64f8d-predictor-6bdf696c95-hz4cp" podUID="b7366816-b46c-4799-89c3-aecd3144302d" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.39:8080: connect: connection refused"
Apr 16 16:54:55.234105 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:55.234079 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-04ea1-predictor-6fcb9b4f57-4p49k"
Apr 16 16:54:55.839492 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:55.839457 2572 generic.go:358] "Generic (PLEG): container finished" podID="9c8e9f31-7fa7-4a4b-a3b5-01a3fe1d19b9" containerID="c33120e4c17f31dd293f85852942bbabccebd46671b70b957c5aaf911dc20155" exitCode=0
Apr 16 16:54:55.839697 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:55.839502 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-04ea1-predictor-6fcb9b4f57-4p49k" event={"ID":"9c8e9f31-7fa7-4a4b-a3b5-01a3fe1d19b9","Type":"ContainerDied","Data":"c33120e4c17f31dd293f85852942bbabccebd46671b70b957c5aaf911dc20155"}
Apr 16 16:54:55.839697 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:55.839545 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-04ea1-predictor-6fcb9b4f57-4p49k" event={"ID":"9c8e9f31-7fa7-4a4b-a3b5-01a3fe1d19b9","Type":"ContainerDied","Data":"fbc8ca4b06b665231e170570cca1b98c9f2779dc282ba72c7152b5e42d03852c"}
Apr 16 16:54:55.839697 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:55.839561 2572 scope.go:117] "RemoveContainer" containerID="c33120e4c17f31dd293f85852942bbabccebd46671b70b957c5aaf911dc20155"
Apr 16 16:54:55.839697 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:55.839523 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-04ea1-predictor-6fcb9b4f57-4p49k"
Apr 16 16:54:55.847775 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:55.847754 2572 scope.go:117] "RemoveContainer" containerID="c33120e4c17f31dd293f85852942bbabccebd46671b70b957c5aaf911dc20155"
Apr 16 16:54:55.848070 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:54:55.848049 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c33120e4c17f31dd293f85852942bbabccebd46671b70b957c5aaf911dc20155\": container with ID starting with c33120e4c17f31dd293f85852942bbabccebd46671b70b957c5aaf911dc20155 not found: ID does not exist" containerID="c33120e4c17f31dd293f85852942bbabccebd46671b70b957c5aaf911dc20155"
Apr 16 16:54:55.848127 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:55.848079 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c33120e4c17f31dd293f85852942bbabccebd46671b70b957c5aaf911dc20155"} err="failed to get container status \"c33120e4c17f31dd293f85852942bbabccebd46671b70b957c5aaf911dc20155\": rpc error: code = NotFound desc = could not find container \"c33120e4c17f31dd293f85852942bbabccebd46671b70b957c5aaf911dc20155\": container with ID starting with c33120e4c17f31dd293f85852942bbabccebd46671b70b957c5aaf911dc20155 not found: ID does not exist"
Apr 16 16:54:55.861385 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:55.861364 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-04ea1-predictor-6fcb9b4f57-4p49k"]
Apr 16 16:54:55.865517 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:55.865496 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-04ea1-predictor-6fcb9b4f57-4p49k"]
Apr 16 16:54:55.909714 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:55.909689 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c8e9f31-7fa7-4a4b-a3b5-01a3fe1d19b9" path="/var/lib/kubelet/pods/9c8e9f31-7fa7-4a4b-a3b5-01a3fe1d19b9/volumes"
Apr 16 16:54:58.709174 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:58.709125 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-da571-predictor-d95c9c8b7-2qvf7" podUID="c6ad7b17-f0cb-4a25-a648-a2c17c4b1205" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.37:8080: connect: connection refused"
Apr 16 16:54:58.798030 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:54:58.797994 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-04ea1-5fbb7b7bc6-nwwbf" podUID="37d34243-13cb-4b01-ae05-97a2c9e61d31" containerName="sequence-graph-04ea1" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 16:55:03.798553 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:03.798512 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-04ea1-5fbb7b7bc6-nwwbf" podUID="37d34243-13cb-4b01-ae05-97a2c9e61d31" containerName="sequence-graph-04ea1" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 16:55:03.798949 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:03.798611 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-04ea1-5fbb7b7bc6-nwwbf"
Apr 16 16:55:03.831910 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:03.831867 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-64f8d-predictor-6bdf696c95-hz4cp" podUID="b7366816-b46c-4799-89c3-aecd3144302d" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.39:8080: connect: connection refused"
Apr 16 16:55:08.710676 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:08.710637 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-da571-predictor-d95c9c8b7-2qvf7"
Apr 16 16:55:08.798207 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:08.798167 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-04ea1-5fbb7b7bc6-nwwbf" podUID="37d34243-13cb-4b01-ae05-97a2c9e61d31" containerName="sequence-graph-04ea1" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 16:55:13.798238 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:13.798195 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-04ea1-5fbb7b7bc6-nwwbf" podUID="37d34243-13cb-4b01-ae05-97a2c9e61d31" containerName="sequence-graph-04ea1" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 16:55:13.831510 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:13.831461 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-64f8d-predictor-6bdf696c95-hz4cp" podUID="b7366816-b46c-4799-89c3-aecd3144302d" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.39:8080: connect: connection refused"
Apr 16 16:55:18.797980 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:18.797936 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-04ea1-5fbb7b7bc6-nwwbf" podUID="37d34243-13cb-4b01-ae05-97a2c9e61d31" containerName="sequence-graph-04ea1" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 16:55:21.813452 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:55:21.813414 2572 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37d34243_13cb_4b01_ae05_97a2c9e61d31.slice/crio-conmon-f7e09468ece5cd9a99b29b62ce71df6cab9b9ccd8993148701abc74f0b92df1e.scope\": RecentStats: unable to find data in memory cache]"
Apr 16 16:55:21.813824 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:55:21.813557 2572 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37d34243_13cb_4b01_ae05_97a2c9e61d31.slice/crio-conmon-f7e09468ece5cd9a99b29b62ce71df6cab9b9ccd8993148701abc74f0b92df1e.scope\": RecentStats: unable to find data in memory cache]"
Apr 16 16:55:21.928720 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:21.928688 2572 generic.go:358] "Generic (PLEG): container finished" podID="37d34243-13cb-4b01-ae05-97a2c9e61d31" containerID="f7e09468ece5cd9a99b29b62ce71df6cab9b9ccd8993148701abc74f0b92df1e" exitCode=0
Apr 16 16:55:21.928873 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:21.928765 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-04ea1-5fbb7b7bc6-nwwbf" event={"ID":"37d34243-13cb-4b01-ae05-97a2c9e61d31","Type":"ContainerDied","Data":"f7e09468ece5cd9a99b29b62ce71df6cab9b9ccd8993148701abc74f0b92df1e"}
Apr 16 16:55:21.928873 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:21.928810 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-04ea1-5fbb7b7bc6-nwwbf" event={"ID":"37d34243-13cb-4b01-ae05-97a2c9e61d31","Type":"ContainerDied","Data":"b4730583082319a1229a453dcd52e787c7f35c356f9a31b0e4c46a82cf053a1b"}
Apr 16 16:55:21.928873 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:21.928824 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4730583082319a1229a453dcd52e787c7f35c356f9a31b0e4c46a82cf053a1b"
Apr 16 16:55:21.936874 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:21.936852 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-04ea1-5fbb7b7bc6-nwwbf"
Apr 16 16:55:22.022274 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:22.022222 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37d34243-13cb-4b01-ae05-97a2c9e61d31-proxy-tls\") pod \"37d34243-13cb-4b01-ae05-97a2c9e61d31\" (UID: \"37d34243-13cb-4b01-ae05-97a2c9e61d31\") "
Apr 16 16:55:22.022475 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:22.022299 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37d34243-13cb-4b01-ae05-97a2c9e61d31-openshift-service-ca-bundle\") pod \"37d34243-13cb-4b01-ae05-97a2c9e61d31\" (UID: \"37d34243-13cb-4b01-ae05-97a2c9e61d31\") "
Apr 16 16:55:22.022646 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:22.022623 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37d34243-13cb-4b01-ae05-97a2c9e61d31-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "37d34243-13cb-4b01-ae05-97a2c9e61d31" (UID: "37d34243-13cb-4b01-ae05-97a2c9e61d31"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 16 16:55:22.024303 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:22.024280 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37d34243-13cb-4b01-ae05-97a2c9e61d31-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "37d34243-13cb-4b01-ae05-97a2c9e61d31" (UID: "37d34243-13cb-4b01-ae05-97a2c9e61d31"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 16 16:55:22.123091 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:22.123010 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37d34243-13cb-4b01-ae05-97a2c9e61d31-proxy-tls\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\""
Apr 16 16:55:22.123091 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:22.123040 2572 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37d34243-13cb-4b01-ae05-97a2c9e61d31-openshift-service-ca-bundle\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\""
Apr 16 16:55:22.932163 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:22.932123 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-04ea1-5fbb7b7bc6-nwwbf"
Apr 16 16:55:22.953887 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:22.953851 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-04ea1-5fbb7b7bc6-nwwbf"]
Apr 16 16:55:22.957548 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:22.957519 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-04ea1-5fbb7b7bc6-nwwbf"]
Apr 16 16:55:23.831881 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:23.831839 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-64f8d-predictor-6bdf696c95-hz4cp" podUID="b7366816-b46c-4799-89c3-aecd3144302d" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.39:8080: connect: connection refused"
Apr 16 16:55:23.909496 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:23.909469 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37d34243-13cb-4b01-ae05-97a2c9e61d31" path="/var/lib/kubelet/pods/37d34243-13cb-4b01-ae05-97a2c9e61d31/volumes"
Apr 16 16:55:27.192427 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:27.192383 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-da571-57db577d67-bk692"]
Apr 16 16:55:27.192928 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:27.192907 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="37d34243-13cb-4b01-ae05-97a2c9e61d31" containerName="sequence-graph-04ea1"
Apr 16 16:55:27.193008 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:27.192933 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="37d34243-13cb-4b01-ae05-97a2c9e61d31" containerName="sequence-graph-04ea1"
Apr 16 16:55:27.193008 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:27.192948 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9c8e9f31-7fa7-4a4b-a3b5-01a3fe1d19b9" containerName="kserve-container"
Apr 16 16:55:27.193008 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:27.192957 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c8e9f31-7fa7-4a4b-a3b5-01a3fe1d19b9" containerName="kserve-container"
Apr 16 16:55:27.193150 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:27.193062 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="9c8e9f31-7fa7-4a4b-a3b5-01a3fe1d19b9" containerName="kserve-container"
Apr 16 16:55:27.193150 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:27.193078 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="37d34243-13cb-4b01-ae05-97a2c9e61d31" containerName="sequence-graph-04ea1"
Apr 16 16:55:27.197213 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:27.197192 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-da571-57db577d67-bk692"
Apr 16 16:55:27.199955 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:27.199938 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"ensemble-graph-da571-kube-rbac-proxy-sar-config\""
Apr 16 16:55:27.200297 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:27.200273 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"ensemble-graph-da571-serving-cert\""
Apr 16 16:55:27.200407 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:27.200299 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"openshift-service-ca.crt\""
Apr 16 16:55:27.203127 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:27.203106 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-da571-57db577d67-bk692"]
Apr 16 16:55:27.270486 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:27.270446 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4ec0da9-f3f0-435c-8c18-5f09f47b3b70-openshift-service-ca-bundle\") pod \"ensemble-graph-da571-57db577d67-bk692\" (UID: \"b4ec0da9-f3f0-435c-8c18-5f09f47b3b70\") " pod="kserve-ci-e2e-test/ensemble-graph-da571-57db577d67-bk692"
Apr 16 16:55:27.270651 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:27.270501 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4ec0da9-f3f0-435c-8c18-5f09f47b3b70-proxy-tls\") pod \"ensemble-graph-da571-57db577d67-bk692\" (UID: \"b4ec0da9-f3f0-435c-8c18-5f09f47b3b70\") " pod="kserve-ci-e2e-test/ensemble-graph-da571-57db577d67-bk692"
Apr 16 16:55:27.371468 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:27.371426 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4ec0da9-f3f0-435c-8c18-5f09f47b3b70-openshift-service-ca-bundle\") pod \"ensemble-graph-da571-57db577d67-bk692\" (UID: \"b4ec0da9-f3f0-435c-8c18-5f09f47b3b70\") " pod="kserve-ci-e2e-test/ensemble-graph-da571-57db577d67-bk692"
Apr 16 16:55:27.371468 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:27.371472 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4ec0da9-f3f0-435c-8c18-5f09f47b3b70-proxy-tls\") pod \"ensemble-graph-da571-57db577d67-bk692\" (UID: \"b4ec0da9-f3f0-435c-8c18-5f09f47b3b70\") " pod="kserve-ci-e2e-test/ensemble-graph-da571-57db577d67-bk692"
Apr 16 16:55:27.372057 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:27.372035 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4ec0da9-f3f0-435c-8c18-5f09f47b3b70-openshift-service-ca-bundle\") pod \"ensemble-graph-da571-57db577d67-bk692\" (UID: \"b4ec0da9-f3f0-435c-8c18-5f09f47b3b70\") " pod="kserve-ci-e2e-test/ensemble-graph-da571-57db577d67-bk692"
Apr 16 16:55:27.373967 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:27.373948 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4ec0da9-f3f0-435c-8c18-5f09f47b3b70-proxy-tls\") pod \"ensemble-graph-da571-57db577d67-bk692\" (UID: \"b4ec0da9-f3f0-435c-8c18-5f09f47b3b70\") " pod="kserve-ci-e2e-test/ensemble-graph-da571-57db577d67-bk692"
Apr 16 16:55:27.508176 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:27.508087 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-da571-57db577d67-bk692"
Apr 16 16:55:27.631219 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:27.631193 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-da571-57db577d67-bk692"]
Apr 16 16:55:27.634020 ip-10-0-128-173 kubenswrapper[2572]: W0416 16:55:27.633790 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4ec0da9_f3f0_435c_8c18_5f09f47b3b70.slice/crio-6d35aa4256c233a7048cd2ac7026a6f49be91cde7b86a0e895e17b5132fe6816 WatchSource:0}: Error finding container 6d35aa4256c233a7048cd2ac7026a6f49be91cde7b86a0e895e17b5132fe6816: Status 404 returned error can't find the container with id 6d35aa4256c233a7048cd2ac7026a6f49be91cde7b86a0e895e17b5132fe6816
Apr 16 16:55:27.949775 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:27.949735 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-da571-57db577d67-bk692" event={"ID":"b4ec0da9-f3f0-435c-8c18-5f09f47b3b70","Type":"ContainerStarted","Data":"11f77e68f602a9c9ea64454b9a25c1c5369c5f97cb309967ce381810e6d2d416"}
Apr 16 16:55:27.949775 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:27.949776 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-da571-57db577d67-bk692" event={"ID":"b4ec0da9-f3f0-435c-8c18-5f09f47b3b70","Type":"ContainerStarted","Data":"6d35aa4256c233a7048cd2ac7026a6f49be91cde7b86a0e895e17b5132fe6816"}
Apr 16 16:55:27.949986 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:27.949857 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/ensemble-graph-da571-57db577d67-bk692"
Apr 16 16:55:27.968553 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:27.968505 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/ensemble-graph-da571-57db577d67-bk692" podStartSLOduration=0.968491471 podStartE2EDuration="968.491471ms" podCreationTimestamp="2026-04-16 16:55:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 16:55:27.967161757 +0000 UTC m=+1884.746008648" watchObservedRunningTime="2026-04-16 16:55:27.968491471 +0000 UTC m=+1884.747338378"
Apr 16 16:55:33.831415 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:33.831372 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-64f8d-predictor-6bdf696c95-hz4cp" podUID="b7366816-b46c-4799-89c3-aecd3144302d" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.39:8080: connect: connection refused"
Apr 16 16:55:33.959174 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:33.959141 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/ensemble-graph-da571-57db577d67-bk692"
Apr 16 16:55:43.832109 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:55:43.832071 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-64f8d-predictor-6bdf696c95-hz4cp"
Apr 16 16:56:02.080366 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:56:02.080329 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/sequence-graph-64f8d-7d567c889c-8np26"]
Apr 16 16:56:02.083791 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:56:02.083776 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-64f8d-7d567c889c-8np26"
Apr 16 16:56:02.090394 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:56:02.090373 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-64f8d-serving-cert\""
Apr 16 16:56:02.090495 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:56:02.090375 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-64f8d-kube-rbac-proxy-sar-config\""
Apr 16 16:56:02.098788 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:56:02.098736 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-64f8d-7d567c889c-8np26"]
Apr 16 16:56:02.166213 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:56:02.166177 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b8b9877e-9a55-4f2c-92c8-af577bba4326-proxy-tls\") pod \"sequence-graph-64f8d-7d567c889c-8np26\" (UID: \"b8b9877e-9a55-4f2c-92c8-af577bba4326\") " pod="kserve-ci-e2e-test/sequence-graph-64f8d-7d567c889c-8np26"
Apr 16 16:56:02.166413 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:56:02.166266 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8b9877e-9a55-4f2c-92c8-af577bba4326-openshift-service-ca-bundle\") pod \"sequence-graph-64f8d-7d567c889c-8np26\" (UID: \"b8b9877e-9a55-4f2c-92c8-af577bba4326\") " pod="kserve-ci-e2e-test/sequence-graph-64f8d-7d567c889c-8np26"
Apr 16 16:56:02.267713 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:56:02.267671 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b8b9877e-9a55-4f2c-92c8-af577bba4326-proxy-tls\") pod \"sequence-graph-64f8d-7d567c889c-8np26\" (UID: \"b8b9877e-9a55-4f2c-92c8-af577bba4326\") " pod="kserve-ci-e2e-test/sequence-graph-64f8d-7d567c889c-8np26"
Apr 16 16:56:02.267908 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:56:02.267775 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8b9877e-9a55-4f2c-92c8-af577bba4326-openshift-service-ca-bundle\") pod \"sequence-graph-64f8d-7d567c889c-8np26\" (UID: \"b8b9877e-9a55-4f2c-92c8-af577bba4326\") " pod="kserve-ci-e2e-test/sequence-graph-64f8d-7d567c889c-8np26"
Apr 16 16:56:02.267908 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:56:02.267826 2572 secret.go:189] Couldn't get secret kserve-ci-e2e-test/sequence-graph-64f8d-serving-cert: secret "sequence-graph-64f8d-serving-cert" not found
Apr 16 16:56:02.267908 ip-10-0-128-173 kubenswrapper[2572]: E0416 16:56:02.267903 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8b9877e-9a55-4f2c-92c8-af577bba4326-proxy-tls podName:b8b9877e-9a55-4f2c-92c8-af577bba4326 nodeName:}" failed. No retries permitted until 2026-04-16 16:56:02.76788582 +0000 UTC m=+1919.546732693 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/b8b9877e-9a55-4f2c-92c8-af577bba4326-proxy-tls") pod "sequence-graph-64f8d-7d567c889c-8np26" (UID: "b8b9877e-9a55-4f2c-92c8-af577bba4326") : secret "sequence-graph-64f8d-serving-cert" not found
Apr 16 16:56:02.268461 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:56:02.268442 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8b9877e-9a55-4f2c-92c8-af577bba4326-openshift-service-ca-bundle\") pod \"sequence-graph-64f8d-7d567c889c-8np26\" (UID: \"b8b9877e-9a55-4f2c-92c8-af577bba4326\") " pod="kserve-ci-e2e-test/sequence-graph-64f8d-7d567c889c-8np26"
Apr 16 16:56:02.772948 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:56:02.772909 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b8b9877e-9a55-4f2c-92c8-af577bba4326-proxy-tls\") pod \"sequence-graph-64f8d-7d567c889c-8np26\" (UID: \"b8b9877e-9a55-4f2c-92c8-af577bba4326\") " pod="kserve-ci-e2e-test/sequence-graph-64f8d-7d567c889c-8np26"
Apr 16 16:56:02.775373 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:56:02.775339 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b8b9877e-9a55-4f2c-92c8-af577bba4326-proxy-tls\") pod \"sequence-graph-64f8d-7d567c889c-8np26\" (UID: \"b8b9877e-9a55-4f2c-92c8-af577bba4326\") " pod="kserve-ci-e2e-test/sequence-graph-64f8d-7d567c889c-8np26"
Apr 16 16:56:02.993462 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:56:02.993410 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-64f8d-7d567c889c-8np26"
Apr 16 16:56:03.123399 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:56:03.120793 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-64f8d-7d567c889c-8np26"]
Apr 16 16:56:04.073030 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:56:04.072992 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-64f8d-7d567c889c-8np26" event={"ID":"b8b9877e-9a55-4f2c-92c8-af577bba4326","Type":"ContainerStarted","Data":"b016e4a84b480d8ae37c19595119877c49ce6d6e9f113ca435b543ad24a8a38c"}
Apr 16 16:56:04.073030 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:56:04.073032 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-64f8d-7d567c889c-8np26" event={"ID":"b8b9877e-9a55-4f2c-92c8-af577bba4326","Type":"ContainerStarted","Data":"5cc4ff4d4980c7c42ac2bdf25f71a063bf7acb0f84b5af2478620385dab4a875"}
Apr 16 16:56:04.073301 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:56:04.073070 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-64f8d-7d567c889c-8np26"
Apr 16 16:56:04.091469 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:56:04.091402 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/sequence-graph-64f8d-7d567c889c-8np26" podStartSLOduration=2.091387062 podStartE2EDuration="2.091387062s" podCreationTimestamp="2026-04-16 16:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 16:56:04.089794103 +0000 UTC m=+1920.868640993" watchObservedRunningTime="2026-04-16 16:56:04.091387062 +0000 UTC m=+1920.870233953"
Apr 16 16:56:10.081286 ip-10-0-128-173 kubenswrapper[2572]: I0416 16:56:10.081238 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/sequence-graph-64f8d-7d567c889c-8np26"
Apr 16 17:01:04.033648 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:01:04.033616 2572 scope.go:117] "RemoveContainer" containerID="f7e09468ece5cd9a99b29b62ce71df6cab9b9ccd8993148701abc74f0b92df1e"
Apr 16 17:03:41.833287 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:03:41.833235 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-da571-57db577d67-bk692"]
Apr 16 17:03:41.835697 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:03:41.833497 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/ensemble-graph-da571-57db577d67-bk692" podUID="b4ec0da9-f3f0-435c-8c18-5f09f47b3b70" containerName="ensemble-graph-da571" containerID="cri-o://11f77e68f602a9c9ea64454b9a25c1c5369c5f97cb309967ce381810e6d2d416" gracePeriod=30
Apr 16 17:03:41.955266 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:03:41.955206 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-da571-predictor-d95c9c8b7-2qvf7"]
Apr 16 17:03:41.955540 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:03:41.955508 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-da571-predictor-d95c9c8b7-2qvf7" podUID="c6ad7b17-f0cb-4a25-a648-a2c17c4b1205" containerName="kserve-container" containerID="cri-o://e98c8273459181596c471d88a5ad3876697b50fe84f96457f8c58905052b1792" gracePeriod=30
Apr 16 17:03:42.058431 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:03:42.058393 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-df533-predictor-6bc8658849-wmfj8"]
Apr 16 17:03:42.061812 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:03:42.061792 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-df533-predictor-6bc8658849-wmfj8"
Apr 16 17:03:42.075289 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:03:42.075268 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-df533-predictor-6bc8658849-wmfj8"
Apr 16 17:03:42.094505 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:03:42.094465 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-df533-predictor-6bc8658849-wmfj8"]
Apr 16 17:03:42.219034 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:03:42.218896 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-df533-predictor-6bc8658849-wmfj8"]
Apr 16 17:03:42.222080 ip-10-0-128-173 kubenswrapper[2572]: W0416 17:03:42.222051 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podace6f6ff_6b2e_4aec_bde2_9007ab3736d0.slice/crio-c0444a2fd5820e6fdb74da80f2f14be75dc029943adff46228e86783433fe52e WatchSource:0}: Error finding container c0444a2fd5820e6fdb74da80f2f14be75dc029943adff46228e86783433fe52e: Status 404 returned error can't find the container with id c0444a2fd5820e6fdb74da80f2f14be75dc029943adff46228e86783433fe52e
Apr 16 17:03:42.224236 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:03:42.224220 2572 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 16 17:03:42.603348 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:03:42.603237 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-df533-predictor-6bc8658849-wmfj8" event={"ID":"ace6f6ff-6b2e-4aec-bde2-9007ab3736d0","Type":"ContainerStarted","Data":"043826b4ef7c4e04279389a2f70c824173000f5ecb2368d01187d3674db6d169"}
Apr 16 17:03:42.603348 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:03:42.603297 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-df533-predictor-6bc8658849-wmfj8" event={"ID":"ace6f6ff-6b2e-4aec-bde2-9007ab3736d0","Type":"ContainerStarted","Data":"c0444a2fd5820e6fdb74da80f2f14be75dc029943adff46228e86783433fe52e"}
Apr 16 17:03:42.603595 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:03:42.603491 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-df533-predictor-6bc8658849-wmfj8"
Apr 16 17:03:42.604897 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:03:42.604875 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-df533-predictor-6bc8658849-wmfj8" podUID="ace6f6ff-6b2e-4aec-bde2-9007ab3736d0" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.42:8080: connect: connection refused"
Apr 16 17:03:42.623125 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:03:42.623068 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-df533-predictor-6bc8658849-wmfj8" podStartSLOduration=0.623051662 podStartE2EDuration="623.051662ms" podCreationTimestamp="2026-04-16 17:03:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:03:42.621263552 +0000 UTC m=+2379.400110439" watchObservedRunningTime="2026-04-16 17:03:42.623051662 +0000 UTC m=+2379.401898550"
Apr 16 17:03:43.607994 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:03:43.607958 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-df533-predictor-6bc8658849-wmfj8" podUID="ace6f6ff-6b2e-4aec-bde2-9007ab3736d0" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.42:8080: connect: connection refused"
Apr 16 17:03:43.957404 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:03:43.957368 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-da571-57db577d67-bk692" podUID="b4ec0da9-f3f0-435c-8c18-5f09f47b3b70" containerName="ensemble-graph-da571" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 17:03:45.208438 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:03:45.208411 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-da571-predictor-d95c9c8b7-2qvf7"
Apr 16 17:03:45.615962 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:03:45.615927 2572 generic.go:358] "Generic (PLEG): container finished" podID="c6ad7b17-f0cb-4a25-a648-a2c17c4b1205" containerID="e98c8273459181596c471d88a5ad3876697b50fe84f96457f8c58905052b1792" exitCode=0
Apr 16 17:03:45.616138 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:03:45.615996 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-da571-predictor-d95c9c8b7-2qvf7"
Apr 16 17:03:45.616138 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:03:45.616019 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-da571-predictor-d95c9c8b7-2qvf7" event={"ID":"c6ad7b17-f0cb-4a25-a648-a2c17c4b1205","Type":"ContainerDied","Data":"e98c8273459181596c471d88a5ad3876697b50fe84f96457f8c58905052b1792"}
Apr 16 17:03:45.616138 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:03:45.616057 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-da571-predictor-d95c9c8b7-2qvf7" event={"ID":"c6ad7b17-f0cb-4a25-a648-a2c17c4b1205","Type":"ContainerDied","Data":"0dbe7de85e0b1d39fc998c95a5b4261b8d30cf6d5bbac0bcb123c7606148bf99"}
Apr 16 17:03:45.616138 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:03:45.616074 2572 scope.go:117] "RemoveContainer" containerID="e98c8273459181596c471d88a5ad3876697b50fe84f96457f8c58905052b1792"
Apr 16 17:03:45.624901 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:03:45.624884 2572 scope.go:117] "RemoveContainer" containerID="e98c8273459181596c471d88a5ad3876697b50fe84f96457f8c58905052b1792"
Apr 16 17:03:45.625164 ip-10-0-128-173 kubenswrapper[2572]: E0416 17:03:45.625141 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e98c8273459181596c471d88a5ad3876697b50fe84f96457f8c58905052b1792\": container with ID starting with e98c8273459181596c471d88a5ad3876697b50fe84f96457f8c58905052b1792 not found: ID does not exist" containerID="e98c8273459181596c471d88a5ad3876697b50fe84f96457f8c58905052b1792"
Apr 16 17:03:45.625284 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:03:45.625170 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e98c8273459181596c471d88a5ad3876697b50fe84f96457f8c58905052b1792"} err="failed to get container status \"e98c8273459181596c471d88a5ad3876697b50fe84f96457f8c58905052b1792\": rpc error: code = NotFound desc = could not find container \"e98c8273459181596c471d88a5ad3876697b50fe84f96457f8c58905052b1792\": container with ID starting with e98c8273459181596c471d88a5ad3876697b50fe84f96457f8c58905052b1792 not found: ID does not exist"
Apr 16 17:03:45.637729 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:03:45.637698 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-da571-predictor-d95c9c8b7-2qvf7"]
Apr 16 17:03:45.645024 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:03:45.645001 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-da571-predictor-d95c9c8b7-2qvf7"]
Apr 16 17:03:45.909988 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:03:45.909911 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6ad7b17-f0cb-4a25-a648-a2c17c4b1205" path="/var/lib/kubelet/pods/c6ad7b17-f0cb-4a25-a648-a2c17c4b1205/volumes"
Apr 16 17:03:48.957535 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:03:48.957492 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-da571-57db577d67-bk692" podUID="b4ec0da9-f3f0-435c-8c18-5f09f47b3b70" containerName="ensemble-graph-da571" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 17:03:53.608733 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:03:53.608688 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-df533-predictor-6bc8658849-wmfj8" podUID="ace6f6ff-6b2e-4aec-bde2-9007ab3736d0" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.42:8080: connect: connection refused"
Apr 16 17:03:53.956956 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:03:53.956917 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-da571-57db577d67-bk692" podUID="b4ec0da9-f3f0-435c-8c18-5f09f47b3b70" containerName="ensemble-graph-da571" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 17:03:53.957127 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:03:53.957021 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/ensemble-graph-da571-57db577d67-bk692"
Apr 16 17:03:58.956775 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:03:58.956730 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-da571-57db577d67-bk692" podUID="b4ec0da9-f3f0-435c-8c18-5f09f47b3b70" containerName="ensemble-graph-da571" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 17:04:03.608101 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:03.608043 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-df533-predictor-6bc8658849-wmfj8" podUID="ace6f6ff-6b2e-4aec-bde2-9007ab3736d0" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.42:8080: connect: connection refused"
Apr 16 17:04:03.957181 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:03.957142 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-da571-57db577d67-bk692" podUID="b4ec0da9-f3f0-435c-8c18-5f09f47b3b70" containerName="ensemble-graph-da571" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 17:04:08.957112 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:08.957071 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-da571-57db577d67-bk692" podUID="b4ec0da9-f3f0-435c-8c18-5f09f47b3b70" containerName="ensemble-graph-da571" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 17:04:11.977881 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:11.977858 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-da571-57db577d67-bk692"
Apr 16 17:04:11.984346 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:11.984321 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4ec0da9-f3f0-435c-8c18-5f09f47b3b70-proxy-tls\") pod \"b4ec0da9-f3f0-435c-8c18-5f09f47b3b70\" (UID: \"b4ec0da9-f3f0-435c-8c18-5f09f47b3b70\") "
Apr 16 17:04:11.984491 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:11.984364 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4ec0da9-f3f0-435c-8c18-5f09f47b3b70-openshift-service-ca-bundle\") pod \"b4ec0da9-f3f0-435c-8c18-5f09f47b3b70\" (UID: \"b4ec0da9-f3f0-435c-8c18-5f09f47b3b70\") "
Apr 16 17:04:11.984754 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:11.984729 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4ec0da9-f3f0-435c-8c18-5f09f47b3b70-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "b4ec0da9-f3f0-435c-8c18-5f09f47b3b70" (UID: "b4ec0da9-f3f0-435c-8c18-5f09f47b3b70"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 16 17:04:11.986404 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:11.986382 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4ec0da9-f3f0-435c-8c18-5f09f47b3b70-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4ec0da9-f3f0-435c-8c18-5f09f47b3b70" (UID: "b4ec0da9-f3f0-435c-8c18-5f09f47b3b70"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 16 17:04:12.084991 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:12.084940 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4ec0da9-f3f0-435c-8c18-5f09f47b3b70-proxy-tls\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\""
Apr 16 17:04:12.084991 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:12.084980 2572 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4ec0da9-f3f0-435c-8c18-5f09f47b3b70-openshift-service-ca-bundle\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\""
Apr 16 17:04:12.705281 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:12.705226 2572 generic.go:358] "Generic (PLEG): container finished" podID="b4ec0da9-f3f0-435c-8c18-5f09f47b3b70" containerID="11f77e68f602a9c9ea64454b9a25c1c5369c5f97cb309967ce381810e6d2d416" exitCode=0
Apr 16 17:04:12.705461 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:12.705312 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-da571-57db577d67-bk692"
Apr 16 17:04:12.705461 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:12.705314 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-da571-57db577d67-bk692" event={"ID":"b4ec0da9-f3f0-435c-8c18-5f09f47b3b70","Type":"ContainerDied","Data":"11f77e68f602a9c9ea64454b9a25c1c5369c5f97cb309967ce381810e6d2d416"}
Apr 16 17:04:12.705461 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:12.705351 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-da571-57db577d67-bk692" event={"ID":"b4ec0da9-f3f0-435c-8c18-5f09f47b3b70","Type":"ContainerDied","Data":"6d35aa4256c233a7048cd2ac7026a6f49be91cde7b86a0e895e17b5132fe6816"}
Apr 16 17:04:12.705461 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:12.705368 2572 scope.go:117] "RemoveContainer" containerID="11f77e68f602a9c9ea64454b9a25c1c5369c5f97cb309967ce381810e6d2d416"
Apr 16 17:04:12.716185 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:12.716157 2572 scope.go:117] "RemoveContainer" containerID="11f77e68f602a9c9ea64454b9a25c1c5369c5f97cb309967ce381810e6d2d416"
Apr 16 17:04:12.716766 ip-10-0-128-173 kubenswrapper[2572]: E0416 17:04:12.716742 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11f77e68f602a9c9ea64454b9a25c1c5369c5f97cb309967ce381810e6d2d416\": container with ID starting with 11f77e68f602a9c9ea64454b9a25c1c5369c5f97cb309967ce381810e6d2d416 not found: ID does not exist" containerID="11f77e68f602a9c9ea64454b9a25c1c5369c5f97cb309967ce381810e6d2d416"
Apr 16 17:04:12.716843 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:12.716777 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11f77e68f602a9c9ea64454b9a25c1c5369c5f97cb309967ce381810e6d2d416"} err="failed to get container status \"11f77e68f602a9c9ea64454b9a25c1c5369c5f97cb309967ce381810e6d2d416\": rpc error: code = NotFound desc = could not find container \"11f77e68f602a9c9ea64454b9a25c1c5369c5f97cb309967ce381810e6d2d416\": container with ID starting with 11f77e68f602a9c9ea64454b9a25c1c5369c5f97cb309967ce381810e6d2d416 not found: ID does not exist"
Apr 16 17:04:12.731548 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:12.731515 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-da571-57db577d67-bk692"]
Apr 16 17:04:12.733963 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:12.733926 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-da571-57db577d67-bk692"]
Apr 16 17:04:13.608765 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:13.608718 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-df533-predictor-6bc8658849-wmfj8" podUID="ace6f6ff-6b2e-4aec-bde2-9007ab3736d0" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.42:8080: connect: connection refused"
Apr 16 17:04:13.910031 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:13.909952 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4ec0da9-f3f0-435c-8c18-5f09f47b3b70" path="/var/lib/kubelet/pods/b4ec0da9-f3f0-435c-8c18-5f09f47b3b70/volumes"
Apr 16 17:04:16.782858 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:16.782823 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-64f8d-7d567c889c-8np26"]
Apr 16 17:04:16.783294 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:16.783126 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/sequence-graph-64f8d-7d567c889c-8np26" podUID="b8b9877e-9a55-4f2c-92c8-af577bba4326" containerName="sequence-graph-64f8d" containerID="cri-o://b016e4a84b480d8ae37c19595119877c49ce6d6e9f113ca435b543ad24a8a38c" gracePeriod=30
Apr 16 17:04:16.886147 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:16.886114 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-64f8d-predictor-6bdf696c95-hz4cp"]
Apr 16 17:04:16.886425 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:16.886385 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-64f8d-predictor-6bdf696c95-hz4cp" podUID="b7366816-b46c-4799-89c3-aecd3144302d" containerName="kserve-container" containerID="cri-o://b866da079a3cf05548fd366a76a6b141962164d7398cc0a263630076c032c6c7" gracePeriod=30
Apr 16 17:04:16.913763 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:16.913729 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-b9f91-predictor-bf89fb6c7-rjfsd"]
Apr 16 17:04:16.914111 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:16.914100 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c6ad7b17-f0cb-4a25-a648-a2c17c4b1205" containerName="kserve-container"
Apr 16 17:04:16.914155 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:16.914114 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6ad7b17-f0cb-4a25-a648-a2c17c4b1205" containerName="kserve-container"
Apr 16 17:04:16.914155 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:16.914123 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b4ec0da9-f3f0-435c-8c18-5f09f47b3b70" containerName="ensemble-graph-da571"
Apr 16 17:04:16.914155 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:16.914129 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4ec0da9-f3f0-435c-8c18-5f09f47b3b70" containerName="ensemble-graph-da571"
Apr 16 17:04:16.914264 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:16.914190 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="c6ad7b17-f0cb-4a25-a648-a2c17c4b1205" containerName="kserve-container"
Apr 16 17:04:16.914264 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:16.914202 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="b4ec0da9-f3f0-435c-8c18-5f09f47b3b70" containerName="ensemble-graph-da571"
Apr 16 17:04:16.918425 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:16.918403 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-b9f91-predictor-bf89fb6c7-rjfsd"
Apr 16 17:04:16.924638 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:16.924610 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-b9f91-predictor-bf89fb6c7-rjfsd"]
Apr 16 17:04:16.929517 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:16.929488 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-b9f91-predictor-bf89fb6c7-rjfsd"
Apr 16 17:04:17.068653 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:17.068626 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-b9f91-predictor-bf89fb6c7-rjfsd"]
Apr 16 17:04:17.070543 ip-10-0-128-173 kubenswrapper[2572]: W0416 17:04:17.070518 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92833bc1_eea4_4514_a009_9753726ecfd4.slice/crio-726534dfe21fc1f1f5fb9c596f3bbcddbdf414559e495292815776ff037dd04e WatchSource:0}: Error finding container 726534dfe21fc1f1f5fb9c596f3bbcddbdf414559e495292815776ff037dd04e: Status 404 returned error can't find the container with id 726534dfe21fc1f1f5fb9c596f3bbcddbdf414559e495292815776ff037dd04e
Apr 16 17:04:17.725615 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:17.725581 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-b9f91-predictor-bf89fb6c7-rjfsd" event={"ID":"92833bc1-eea4-4514-a009-9753726ecfd4","Type":"ContainerStarted","Data":"721d0d33508ba5e5554b9c82f753f714e10a283e3c7c7b3409410a659d5a4638"}
Apr 16 17:04:17.725615 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:17.725618 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-b9f91-predictor-bf89fb6c7-rjfsd" event={"ID":"92833bc1-eea4-4514-a009-9753726ecfd4","Type":"ContainerStarted","Data":"726534dfe21fc1f1f5fb9c596f3bbcddbdf414559e495292815776ff037dd04e"}
Apr 16 17:04:17.725838 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:17.725753 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-b9f91-predictor-bf89fb6c7-rjfsd"
Apr 16 17:04:17.727463 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:17.727423 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-b9f91-predictor-bf89fb6c7-rjfsd" podUID="92833bc1-eea4-4514-a009-9753726ecfd4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.43:8080: connect: connection refused"
Apr 16 17:04:17.741517 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:17.741456 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-b9f91-predictor-bf89fb6c7-rjfsd" podStartSLOduration=1.741439058 podStartE2EDuration="1.741439058s" podCreationTimestamp="2026-04-16 17:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:04:17.740655997 +0000 UTC m=+2414.519502890" watchObservedRunningTime="2026-04-16 17:04:17.741439058 +0000 UTC m=+2414.520285950"
Apr 16 17:04:18.730834 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:18.730792 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-b9f91-predictor-bf89fb6c7-rjfsd" podUID="92833bc1-eea4-4514-a009-9753726ecfd4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.43:8080: connect: connection refused"
Apr 16 17:04:20.080623 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:20.080574 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-64f8d-7d567c889c-8np26" podUID="b8b9877e-9a55-4f2c-92c8-af577bba4326" containerName="sequence-graph-64f8d" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 17:04:20.239605 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:20.239577 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-64f8d-predictor-6bdf696c95-hz4cp"
Apr 16 17:04:20.738322 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:20.738287 2572 generic.go:358] "Generic (PLEG): container finished" podID="b7366816-b46c-4799-89c3-aecd3144302d" containerID="b866da079a3cf05548fd366a76a6b141962164d7398cc0a263630076c032c6c7" exitCode=0
Apr 16 17:04:20.738488 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:20.738349 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-64f8d-predictor-6bdf696c95-hz4cp"
Apr 16 17:04:20.738488 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:20.738369 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-64f8d-predictor-6bdf696c95-hz4cp" event={"ID":"b7366816-b46c-4799-89c3-aecd3144302d","Type":"ContainerDied","Data":"b866da079a3cf05548fd366a76a6b141962164d7398cc0a263630076c032c6c7"}
Apr 16 17:04:20.738488 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:20.738407 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-64f8d-predictor-6bdf696c95-hz4cp" event={"ID":"b7366816-b46c-4799-89c3-aecd3144302d","Type":"ContainerDied","Data":"2088604dbee720ec40e98bf91ea25859fdb22eb8ab48922ac1fec9365ac8fb9c"}
Apr 16 17:04:20.738488 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:20.738425 2572 scope.go:117] "RemoveContainer" containerID="b866da079a3cf05548fd366a76a6b141962164d7398cc0a263630076c032c6c7"
Apr 16 17:04:20.746859 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:20.746842 2572 scope.go:117] "RemoveContainer" containerID="b866da079a3cf05548fd366a76a6b141962164d7398cc0a263630076c032c6c7"
Apr 16 17:04:20.747105 ip-10-0-128-173 kubenswrapper[2572]: E0416 17:04:20.747086 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b866da079a3cf05548fd366a76a6b141962164d7398cc0a263630076c032c6c7\": container with ID starting with b866da079a3cf05548fd366a76a6b141962164d7398cc0a263630076c032c6c7 not found: ID does not exist" containerID="b866da079a3cf05548fd366a76a6b141962164d7398cc0a263630076c032c6c7"
Apr 16 17:04:20.747151 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:20.747115 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b866da079a3cf05548fd366a76a6b141962164d7398cc0a263630076c032c6c7"} err="failed to get container status \"b866da079a3cf05548fd366a76a6b141962164d7398cc0a263630076c032c6c7\": rpc error: code = NotFound desc = could not find container \"b866da079a3cf05548fd366a76a6b141962164d7398cc0a263630076c032c6c7\": container with ID starting with b866da079a3cf05548fd366a76a6b141962164d7398cc0a263630076c032c6c7 not found: ID does not exist"
Apr 16 17:04:20.758919 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:20.758894 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-64f8d-predictor-6bdf696c95-hz4cp"]
Apr 16 17:04:20.766938 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:20.766911 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-64f8d-predictor-6bdf696c95-hz4cp"]
Apr 16 17:04:21.909497 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:21.909422 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7366816-b46c-4799-89c3-aecd3144302d" path="/var/lib/kubelet/pods/b7366816-b46c-4799-89c3-aecd3144302d/volumes"
Apr 16 17:04:23.609024 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:23.608980 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-df533-predictor-6bc8658849-wmfj8" podUID="ace6f6ff-6b2e-4aec-bde2-9007ab3736d0" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.42:8080: connect: connection refused"
Apr 16 17:04:25.079684 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:25.079641 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-64f8d-7d567c889c-8np26" podUID="b8b9877e-9a55-4f2c-92c8-af577bba4326" containerName="sequence-graph-64f8d" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 17:04:28.731140 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:28.731087 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-b9f91-predictor-bf89fb6c7-rjfsd" podUID="92833bc1-eea4-4514-a009-9753726ecfd4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.43:8080: connect: connection refused"
Apr 16 17:04:30.079983 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:30.079936 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-64f8d-7d567c889c-8np26" podUID="b8b9877e-9a55-4f2c-92c8-af577bba4326" containerName="sequence-graph-64f8d" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 17:04:30.080457 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:30.080066 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-64f8d-7d567c889c-8np26"
Apr 16 17:04:33.609544 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:33.609509 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-df533-predictor-6bc8658849-wmfj8"
Apr 16 17:04:35.080111 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:35.080068 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-64f8d-7d567c889c-8np26" podUID="b8b9877e-9a55-4f2c-92c8-af577bba4326" containerName="sequence-graph-64f8d" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 17:04:38.730946 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:38.730898 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-b9f91-predictor-bf89fb6c7-rjfsd" podUID="92833bc1-eea4-4514-a009-9753726ecfd4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.43:8080: connect: connection refused"
Apr 16 17:04:40.079965 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:40.079927 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-64f8d-7d567c889c-8np26" podUID="b8b9877e-9a55-4f2c-92c8-af577bba4326" containerName="sequence-graph-64f8d" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 17:04:45.079512 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:45.079472 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-64f8d-7d567c889c-8np26" podUID="b8b9877e-9a55-4f2c-92c8-af577bba4326" containerName="sequence-graph-64f8d" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 17:04:46.827686 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:46.827647 2572 generic.go:358] "Generic (PLEG): container finished" podID="b8b9877e-9a55-4f2c-92c8-af577bba4326" containerID="b016e4a84b480d8ae37c19595119877c49ce6d6e9f113ca435b543ad24a8a38c" exitCode=0
Apr 16 17:04:46.828067 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:46.827691 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-64f8d-7d567c889c-8np26" event={"ID":"b8b9877e-9a55-4f2c-92c8-af577bba4326","Type":"ContainerDied","Data":"b016e4a84b480d8ae37c19595119877c49ce6d6e9f113ca435b543ad24a8a38c"}
Apr 16 17:04:46.940759 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:46.940732 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-64f8d-7d567c889c-8np26"
Apr 16 17:04:47.091560 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:47.091463 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b8b9877e-9a55-4f2c-92c8-af577bba4326-proxy-tls\") pod \"b8b9877e-9a55-4f2c-92c8-af577bba4326\" (UID: \"b8b9877e-9a55-4f2c-92c8-af577bba4326\") "
Apr 16 17:04:47.091719 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:47.091565 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8b9877e-9a55-4f2c-92c8-af577bba4326-openshift-service-ca-bundle\") pod \"b8b9877e-9a55-4f2c-92c8-af577bba4326\" (UID: \"b8b9877e-9a55-4f2c-92c8-af577bba4326\") "
Apr 16 17:04:47.091922 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:47.091898 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8b9877e-9a55-4f2c-92c8-af577bba4326-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "b8b9877e-9a55-4f2c-92c8-af577bba4326" (UID: "b8b9877e-9a55-4f2c-92c8-af577bba4326"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 16 17:04:47.093646 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:47.093620 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8b9877e-9a55-4f2c-92c8-af577bba4326-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b8b9877e-9a55-4f2c-92c8-af577bba4326" (UID: "b8b9877e-9a55-4f2c-92c8-af577bba4326"). InnerVolumeSpecName "proxy-tls".
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 17:04:47.192193 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:47.192153 2572 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8b9877e-9a55-4f2c-92c8-af577bba4326-openshift-service-ca-bundle\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 17:04:47.192193 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:47.192185 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b8b9877e-9a55-4f2c-92c8-af577bba4326-proxy-tls\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 17:04:47.832465 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:47.832431 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-64f8d-7d567c889c-8np26" event={"ID":"b8b9877e-9a55-4f2c-92c8-af577bba4326","Type":"ContainerDied","Data":"5cc4ff4d4980c7c42ac2bdf25f71a063bf7acb0f84b5af2478620385dab4a875"} Apr 16 17:04:47.832920 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:47.832478 2572 scope.go:117] "RemoveContainer" containerID="b016e4a84b480d8ae37c19595119877c49ce6d6e9f113ca435b543ad24a8a38c" Apr 16 17:04:47.832920 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:47.832448 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-64f8d-7d567c889c-8np26" Apr 16 17:04:47.855170 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:47.855143 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-64f8d-7d567c889c-8np26"] Apr 16 17:04:47.858402 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:47.858378 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-64f8d-7d567c889c-8np26"] Apr 16 17:04:47.909616 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:47.909591 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8b9877e-9a55-4f2c-92c8-af577bba4326" path="/var/lib/kubelet/pods/b8b9877e-9a55-4f2c-92c8-af577bba4326/volumes" Apr 16 17:04:48.731231 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:48.731182 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-b9f91-predictor-bf89fb6c7-rjfsd" podUID="92833bc1-eea4-4514-a009-9753726ecfd4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.43:8080: connect: connection refused" Apr 16 17:04:52.148312 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:52.148217 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/splitter-graph-df533-65fbb7d4f-xpzqr"] Apr 16 17:04:52.148672 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:52.148593 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b8b9877e-9a55-4f2c-92c8-af577bba4326" containerName="sequence-graph-64f8d" Apr 16 17:04:52.148672 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:52.148605 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8b9877e-9a55-4f2c-92c8-af577bba4326" containerName="sequence-graph-64f8d" Apr 16 17:04:52.148672 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:52.148616 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b7366816-b46c-4799-89c3-aecd3144302d" containerName="kserve-container" Apr 16 17:04:52.148672 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:52.148622 2572 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b7366816-b46c-4799-89c3-aecd3144302d" containerName="kserve-container" Apr 16 17:04:52.148798 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:52.148678 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="b7366816-b46c-4799-89c3-aecd3144302d" containerName="kserve-container" Apr 16 17:04:52.148798 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:52.148689 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="b8b9877e-9a55-4f2c-92c8-af577bba4326" containerName="sequence-graph-64f8d" Apr 16 17:04:52.152957 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:52.152938 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-df533-65fbb7d4f-xpzqr" Apr 16 17:04:52.155809 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:52.155767 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"splitter-graph-df533-kube-rbac-proxy-sar-config\"" Apr 16 17:04:52.155975 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:52.155868 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"openshift-service-ca.crt\"" Apr 16 17:04:52.156241 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:52.156220 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"splitter-graph-df533-serving-cert\"" Apr 16 17:04:52.162762 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:52.162714 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-df533-65fbb7d4f-xpzqr"] Apr 16 17:04:52.236093 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:52.236058 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d62cd1a9-7469-4121-aee9-c23889606c61-proxy-tls\") pod \"splitter-graph-df533-65fbb7d4f-xpzqr\" (UID: \"d62cd1a9-7469-4121-aee9-c23889606c61\") " pod="kserve-ci-e2e-test/splitter-graph-df533-65fbb7d4f-xpzqr" Apr 16 17:04:52.236093 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:52.236092 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d62cd1a9-7469-4121-aee9-c23889606c61-openshift-service-ca-bundle\") pod \"splitter-graph-df533-65fbb7d4f-xpzqr\" (UID: \"d62cd1a9-7469-4121-aee9-c23889606c61\") " pod="kserve-ci-e2e-test/splitter-graph-df533-65fbb7d4f-xpzqr" Apr 16 17:04:52.336633 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:52.336595 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d62cd1a9-7469-4121-aee9-c23889606c61-proxy-tls\") pod \"splitter-graph-df533-65fbb7d4f-xpzqr\" (UID: \"d62cd1a9-7469-4121-aee9-c23889606c61\") " pod="kserve-ci-e2e-test/splitter-graph-df533-65fbb7d4f-xpzqr" Apr 16 17:04:52.336814 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:52.336643 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d62cd1a9-7469-4121-aee9-c23889606c61-openshift-service-ca-bundle\") pod \"splitter-graph-df533-65fbb7d4f-xpzqr\" (UID: \"d62cd1a9-7469-4121-aee9-c23889606c61\") " pod="kserve-ci-e2e-test/splitter-graph-df533-65fbb7d4f-xpzqr" Apr 16 17:04:52.336814 ip-10-0-128-173 kubenswrapper[2572]: E0416 17:04:52.336753 2572 secret.go:189] Couldn't get secret 
kserve-ci-e2e-test/splitter-graph-df533-serving-cert: secret "splitter-graph-df533-serving-cert" not found Apr 16 17:04:52.336908 ip-10-0-128-173 kubenswrapper[2572]: E0416 17:04:52.336831 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d62cd1a9-7469-4121-aee9-c23889606c61-proxy-tls podName:d62cd1a9-7469-4121-aee9-c23889606c61 nodeName:}" failed. No retries permitted until 2026-04-16 17:04:52.836815455 +0000 UTC m=+2449.615662323 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/d62cd1a9-7469-4121-aee9-c23889606c61-proxy-tls") pod "splitter-graph-df533-65fbb7d4f-xpzqr" (UID: "d62cd1a9-7469-4121-aee9-c23889606c61") : secret "splitter-graph-df533-serving-cert" not found Apr 16 17:04:52.337303 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:52.337283 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d62cd1a9-7469-4121-aee9-c23889606c61-openshift-service-ca-bundle\") pod \"splitter-graph-df533-65fbb7d4f-xpzqr\" (UID: \"d62cd1a9-7469-4121-aee9-c23889606c61\") " pod="kserve-ci-e2e-test/splitter-graph-df533-65fbb7d4f-xpzqr" Apr 16 17:04:52.842443 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:52.842403 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d62cd1a9-7469-4121-aee9-c23889606c61-proxy-tls\") pod \"splitter-graph-df533-65fbb7d4f-xpzqr\" (UID: \"d62cd1a9-7469-4121-aee9-c23889606c61\") " pod="kserve-ci-e2e-test/splitter-graph-df533-65fbb7d4f-xpzqr" Apr 16 17:04:52.844903 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:52.844882 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d62cd1a9-7469-4121-aee9-c23889606c61-proxy-tls\") pod \"splitter-graph-df533-65fbb7d4f-xpzqr\" (UID: \"d62cd1a9-7469-4121-aee9-c23889606c61\") " pod="kserve-ci-e2e-test/splitter-graph-df533-65fbb7d4f-xpzqr" Apr 16 17:04:53.063830 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:53.063788 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-df533-65fbb7d4f-xpzqr" Apr 16 17:04:53.192311 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:53.192279 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-df533-65fbb7d4f-xpzqr"] Apr 16 17:04:53.194643 ip-10-0-128-173 kubenswrapper[2572]: W0416 17:04:53.194614 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd62cd1a9_7469_4121_aee9_c23889606c61.slice/crio-45b47be100a581d69e1c91ff001d8e227568d0d567fa2da8354b848b4739f591 WatchSource:0}: Error finding container 45b47be100a581d69e1c91ff001d8e227568d0d567fa2da8354b848b4739f591: Status 404 returned error can't find the container with id 45b47be100a581d69e1c91ff001d8e227568d0d567fa2da8354b848b4739f591 Apr 16 17:04:53.856863 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:53.856828 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-df533-65fbb7d4f-xpzqr" event={"ID":"d62cd1a9-7469-4121-aee9-c23889606c61","Type":"ContainerStarted","Data":"bf53207a5cffb54f0fd0f2e61a71de7c26fb2624071e753402564cd886f7cf75"} Apr 16 17:04:53.856863 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:53.856866 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-df533-65fbb7d4f-xpzqr" event={"ID":"d62cd1a9-7469-4121-aee9-c23889606c61","Type":"ContainerStarted","Data":"45b47be100a581d69e1c91ff001d8e227568d0d567fa2da8354b848b4739f591"} Apr 16 17:04:53.857089 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:53.856942 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/splitter-graph-df533-65fbb7d4f-xpzqr" Apr 16 17:04:53.876730 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:53.876677 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/splitter-graph-df533-65fbb7d4f-xpzqr" podStartSLOduration=1.876661572 podStartE2EDuration="1.876661572s" podCreationTimestamp="2026-04-16 17:04:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:04:53.875315572 +0000 UTC m=+2450.654162464" watchObservedRunningTime="2026-04-16 17:04:53.876661572 +0000 UTC m=+2450.655508476" Apr 16 17:04:58.731102 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:58.731043 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-b9f91-predictor-bf89fb6c7-rjfsd" podUID="92833bc1-eea4-4514-a009-9753726ecfd4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.43:8080: connect: connection refused" Apr 16 17:04:59.868389 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:04:59.868353 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/splitter-graph-df533-65fbb7d4f-xpzqr" Apr 16 17:05:02.196454 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:02.196420 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-df533-65fbb7d4f-xpzqr"] Apr 16 17:05:02.196816 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:02.196631 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/splitter-graph-df533-65fbb7d4f-xpzqr" podUID="d62cd1a9-7469-4121-aee9-c23889606c61" containerName="splitter-graph-df533" 
containerID="cri-o://bf53207a5cffb54f0fd0f2e61a71de7c26fb2624071e753402564cd886f7cf75" gracePeriod=30 Apr 16 17:05:02.318427 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:02.318393 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-df533-predictor-6bc8658849-wmfj8"] Apr 16 17:05:02.318671 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:02.318648 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-df533-predictor-6bc8658849-wmfj8" podUID="ace6f6ff-6b2e-4aec-bde2-9007ab3736d0" containerName="kserve-container" containerID="cri-o://043826b4ef7c4e04279389a2f70c824173000f5ecb2368d01187d3674db6d169" gracePeriod=30 Apr 16 17:05:02.325327 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:02.325301 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-45e11-predictor-686d55cc79-j4qw4"] Apr 16 17:05:02.328935 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:02.328916 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-45e11-predictor-686d55cc79-j4qw4" Apr 16 17:05:02.339384 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:02.339359 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-45e11-predictor-686d55cc79-j4qw4"] Apr 16 17:05:02.339503 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:02.339390 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-45e11-predictor-686d55cc79-j4qw4" Apr 16 17:05:02.474679 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:02.474649 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-45e11-predictor-686d55cc79-j4qw4"] Apr 16 17:05:02.476912 ip-10-0-128-173 kubenswrapper[2572]: W0416 17:05:02.476879 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe0b3717_18f2_4c58_aeb9_b15a1e676563.slice/crio-934744dc0c120538a1fe6ea2d4c35f713fbac902b241c95d68cb33304c305d79 WatchSource:0}: Error finding container 934744dc0c120538a1fe6ea2d4c35f713fbac902b241c95d68cb33304c305d79: Status 404 returned error can't find the container with id 934744dc0c120538a1fe6ea2d4c35f713fbac902b241c95d68cb33304c305d79 Apr 16 17:05:02.888505 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:02.888411 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-45e11-predictor-686d55cc79-j4qw4" event={"ID":"be0b3717-18f2-4c58-aeb9-b15a1e676563","Type":"ContainerStarted","Data":"d55ea6aeb4269640f5b32535333670ba6000181639b21edf1b42c13daa3f541d"} Apr 16 17:05:02.888505 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:02.888451 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-45e11-predictor-686d55cc79-j4qw4" event={"ID":"be0b3717-18f2-4c58-aeb9-b15a1e676563","Type":"ContainerStarted","Data":"934744dc0c120538a1fe6ea2d4c35f713fbac902b241c95d68cb33304c305d79"} Apr 16 17:05:02.888707 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:02.888631 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-45e11-predictor-686d55cc79-j4qw4" Apr 16 17:05:02.890131 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:02.890099 2572 prober.go:120] "Probe failed" probeType="Readiness" 
pod="kserve-ci-e2e-test/success-200-isvc-45e11-predictor-686d55cc79-j4qw4" podUID="be0b3717-18f2-4c58-aeb9-b15a1e676563" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.45:8080: connect: connection refused" Apr 16 17:05:02.905356 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:02.905307 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-45e11-predictor-686d55cc79-j4qw4" podStartSLOduration=0.905293769 podStartE2EDuration="905.293769ms" podCreationTimestamp="2026-04-16 17:05:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:05:02.904043702 +0000 UTC m=+2459.682890584" watchObservedRunningTime="2026-04-16 17:05:02.905293769 +0000 UTC m=+2459.684140638" Apr 16 17:05:03.608211 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:03.608168 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-df533-predictor-6bc8658849-wmfj8" podUID="ace6f6ff-6b2e-4aec-bde2-9007ab3736d0" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.42:8080: connect: connection refused" Apr 16 17:05:03.892177 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:03.892088 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-45e11-predictor-686d55cc79-j4qw4" podUID="be0b3717-18f2-4c58-aeb9-b15a1e676563" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.45:8080: connect: connection refused" Apr 16 17:05:04.865891 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:04.865853 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-df533-65fbb7d4f-xpzqr" podUID="d62cd1a9-7469-4121-aee9-c23889606c61" containerName="splitter-graph-df533" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:05:05.656562 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:05.656537 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-df533-predictor-6bc8658849-wmfj8" Apr 16 17:05:05.899866 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:05.899835 2572 generic.go:358] "Generic (PLEG): container finished" podID="ace6f6ff-6b2e-4aec-bde2-9007ab3736d0" containerID="043826b4ef7c4e04279389a2f70c824173000f5ecb2368d01187d3674db6d169" exitCode=0 Apr 16 17:05:05.900346 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:05.899895 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-df533-predictor-6bc8658849-wmfj8" Apr 16 17:05:05.900346 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:05.899920 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-df533-predictor-6bc8658849-wmfj8" event={"ID":"ace6f6ff-6b2e-4aec-bde2-9007ab3736d0","Type":"ContainerDied","Data":"043826b4ef7c4e04279389a2f70c824173000f5ecb2368d01187d3674db6d169"} Apr 16 17:05:05.900346 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:05.899970 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-df533-predictor-6bc8658849-wmfj8" event={"ID":"ace6f6ff-6b2e-4aec-bde2-9007ab3736d0","Type":"ContainerDied","Data":"c0444a2fd5820e6fdb74da80f2f14be75dc029943adff46228e86783433fe52e"} Apr 16 17:05:05.900346 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:05.899995 2572 scope.go:117] "RemoveContainer" containerID="043826b4ef7c4e04279389a2f70c824173000f5ecb2368d01187d3674db6d169" Apr 16 17:05:05.908928 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:05.908908 2572 scope.go:117] "RemoveContainer" containerID="043826b4ef7c4e04279389a2f70c824173000f5ecb2368d01187d3674db6d169" Apr 16 17:05:05.909213 ip-10-0-128-173 kubenswrapper[2572]: E0416 17:05:05.909191 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"043826b4ef7c4e04279389a2f70c824173000f5ecb2368d01187d3674db6d169\": container with ID starting with 043826b4ef7c4e04279389a2f70c824173000f5ecb2368d01187d3674db6d169 not found: ID does not exist" containerID="043826b4ef7c4e04279389a2f70c824173000f5ecb2368d01187d3674db6d169" Apr 16 17:05:05.909304 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:05.909225 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"043826b4ef7c4e04279389a2f70c824173000f5ecb2368d01187d3674db6d169"} err="failed to get container status \"043826b4ef7c4e04279389a2f70c824173000f5ecb2368d01187d3674db6d169\": rpc error: code = NotFound desc = could not find container \"043826b4ef7c4e04279389a2f70c824173000f5ecb2368d01187d3674db6d169\": container with ID starting with 043826b4ef7c4e04279389a2f70c824173000f5ecb2368d01187d3674db6d169 not found: ID does not exist" Apr 16 17:05:05.922741 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:05.922705 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-df533-predictor-6bc8658849-wmfj8"] Apr 16 17:05:05.925487 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:05.925464 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-df533-predictor-6bc8658849-wmfj8"] Apr 16 17:05:07.910552 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:07.910511 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ace6f6ff-6b2e-4aec-bde2-9007ab3736d0" path="/var/lib/kubelet/pods/ace6f6ff-6b2e-4aec-bde2-9007ab3736d0/volumes" Apr 16 17:05:08.732685 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:08.732648 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-b9f91-predictor-bf89fb6c7-rjfsd" Apr 16 17:05:09.866039 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:09.866003 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-df533-65fbb7d4f-xpzqr" podUID="d62cd1a9-7469-4121-aee9-c23889606c61" containerName="splitter-graph-df533" probeResult="failure" 
output="HTTP probe failed with statuscode: 503" Apr 16 17:05:13.892873 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:13.892820 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-45e11-predictor-686d55cc79-j4qw4" podUID="be0b3717-18f2-4c58-aeb9-b15a1e676563" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.45:8080: connect: connection refused" Apr 16 17:05:14.865658 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:14.865610 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-df533-65fbb7d4f-xpzqr" podUID="d62cd1a9-7469-4121-aee9-c23889606c61" containerName="splitter-graph-df533" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:05:14.865867 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:14.865770 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/splitter-graph-df533-65fbb7d4f-xpzqr" Apr 16 17:05:19.865471 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:19.865432 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-df533-65fbb7d4f-xpzqr" podUID="d62cd1a9-7469-4121-aee9-c23889606c61" containerName="splitter-graph-df533" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:05:23.892549 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:23.892506 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-45e11-predictor-686d55cc79-j4qw4" podUID="be0b3717-18f2-4c58-aeb9-b15a1e676563" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.45:8080: connect: connection refused" Apr 16 17:05:24.865121 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:24.865079 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-df533-65fbb7d4f-xpzqr" podUID="d62cd1a9-7469-4121-aee9-c23889606c61" containerName="splitter-graph-df533" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:05:26.995053 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:26.995023 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/switch-graph-b9f91-76696f6fcb-nw6hn"] Apr 16 17:05:26.995455 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:26.995405 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ace6f6ff-6b2e-4aec-bde2-9007ab3736d0" containerName="kserve-container" Apr 16 17:05:26.995455 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:26.995417 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace6f6ff-6b2e-4aec-bde2-9007ab3736d0" containerName="kserve-container" Apr 16 17:05:26.995542 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:26.995516 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="ace6f6ff-6b2e-4aec-bde2-9007ab3736d0" containerName="kserve-container" Apr 16 17:05:26.999057 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:26.999037 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-b9f91-76696f6fcb-nw6hn" Apr 16 17:05:27.001600 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:27.001580 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-b9f91-kube-rbac-proxy-sar-config\"" Apr 16 17:05:27.001717 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:27.001585 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-b9f91-serving-cert\"" Apr 16 17:05:27.005502 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:27.005483 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-b9f91-76696f6fcb-nw6hn"] Apr 16 17:05:27.054988 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:27.054947 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d695fc0a-51a9-4ef8-91f6-db4665e6a315-proxy-tls\") pod \"switch-graph-b9f91-76696f6fcb-nw6hn\" (UID: \"d695fc0a-51a9-4ef8-91f6-db4665e6a315\") " pod="kserve-ci-e2e-test/switch-graph-b9f91-76696f6fcb-nw6hn" Apr 16 17:05:27.055156 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:27.055046 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d695fc0a-51a9-4ef8-91f6-db4665e6a315-openshift-service-ca-bundle\") pod \"switch-graph-b9f91-76696f6fcb-nw6hn\" (UID: \"d695fc0a-51a9-4ef8-91f6-db4665e6a315\") " pod="kserve-ci-e2e-test/switch-graph-b9f91-76696f6fcb-nw6hn" Apr 16 17:05:27.155837 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:27.155798 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d695fc0a-51a9-4ef8-91f6-db4665e6a315-openshift-service-ca-bundle\") pod \"switch-graph-b9f91-76696f6fcb-nw6hn\" (UID: \"d695fc0a-51a9-4ef8-91f6-db4665e6a315\") " pod="kserve-ci-e2e-test/switch-graph-b9f91-76696f6fcb-nw6hn" Apr 16 17:05:27.156034 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:27.155874 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d695fc0a-51a9-4ef8-91f6-db4665e6a315-proxy-tls\") pod \"switch-graph-b9f91-76696f6fcb-nw6hn\" (UID: \"d695fc0a-51a9-4ef8-91f6-db4665e6a315\") " pod="kserve-ci-e2e-test/switch-graph-b9f91-76696f6fcb-nw6hn" Apr 16 17:05:27.156034 ip-10-0-128-173 kubenswrapper[2572]: E0416 17:05:27.155965 2572 secret.go:189] Couldn't get secret kserve-ci-e2e-test/switch-graph-b9f91-serving-cert: secret "switch-graph-b9f91-serving-cert" not found Apr 16 17:05:27.156034 ip-10-0-128-173 kubenswrapper[2572]: E0416 17:05:27.156028 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d695fc0a-51a9-4ef8-91f6-db4665e6a315-proxy-tls podName:d695fc0a-51a9-4ef8-91f6-db4665e6a315 nodeName:}" failed. No retries permitted until 2026-04-16 17:05:27.656009048 +0000 UTC m=+2484.434855916 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/d695fc0a-51a9-4ef8-91f6-db4665e6a315-proxy-tls") pod "switch-graph-b9f91-76696f6fcb-nw6hn" (UID: "d695fc0a-51a9-4ef8-91f6-db4665e6a315") : secret "switch-graph-b9f91-serving-cert" not found Apr 16 17:05:27.156558 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:27.156530 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d695fc0a-51a9-4ef8-91f6-db4665e6a315-openshift-service-ca-bundle\") pod \"switch-graph-b9f91-76696f6fcb-nw6hn\" (UID: \"d695fc0a-51a9-4ef8-91f6-db4665e6a315\") " pod="kserve-ci-e2e-test/switch-graph-b9f91-76696f6fcb-nw6hn" Apr 16 17:05:27.660910 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:27.660866 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d695fc0a-51a9-4ef8-91f6-db4665e6a315-proxy-tls\") pod \"switch-graph-b9f91-76696f6fcb-nw6hn\" (UID: \"d695fc0a-51a9-4ef8-91f6-db4665e6a315\") " pod="kserve-ci-e2e-test/switch-graph-b9f91-76696f6fcb-nw6hn" Apr 16 17:05:27.663522 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:27.663487 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d695fc0a-51a9-4ef8-91f6-db4665e6a315-proxy-tls\") pod \"switch-graph-b9f91-76696f6fcb-nw6hn\" (UID: \"d695fc0a-51a9-4ef8-91f6-db4665e6a315\") " pod="kserve-ci-e2e-test/switch-graph-b9f91-76696f6fcb-nw6hn" Apr 16 17:05:27.909597 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:27.909567 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-b9f91-76696f6fcb-nw6hn" Apr 16 17:05:28.030150 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:28.030125 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-b9f91-76696f6fcb-nw6hn"] Apr 16 17:05:28.977793 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:28.977761 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-b9f91-76696f6fcb-nw6hn" event={"ID":"d695fc0a-51a9-4ef8-91f6-db4665e6a315","Type":"ContainerStarted","Data":"d4467e470e5af3add24f81ecb30ee711e7a428c93aa624c3022574e876be2398"} Apr 16 17:05:28.977994 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:28.977801 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-b9f91-76696f6fcb-nw6hn" event={"ID":"d695fc0a-51a9-4ef8-91f6-db4665e6a315","Type":"ContainerStarted","Data":"2f4e56d06c78cdb20c8593bef3afd5fcc3bbc6d21a7c6cd9603966704edd2fb6"} Apr 16 17:05:28.977994 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:28.977888 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/switch-graph-b9f91-76696f6fcb-nw6hn" Apr 16 17:05:28.994621 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:28.994564 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/switch-graph-b9f91-76696f6fcb-nw6hn" podStartSLOduration=2.9945445939999997 podStartE2EDuration="2.994544594s" podCreationTimestamp="2026-04-16 17:05:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:05:28.992448587 +0000 UTC m=+2485.771295478" watchObservedRunningTime="2026-04-16 17:05:28.994544594 +0000 UTC m=+2485.773391486" Apr 16 17:05:29.865450 ip-10-0-128-173 
kubenswrapper[2572]: I0416 17:05:29.865411 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-df533-65fbb7d4f-xpzqr" podUID="d62cd1a9-7469-4121-aee9-c23889606c61" containerName="splitter-graph-df533" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:05:32.217910 ip-10-0-128-173 kubenswrapper[2572]: E0416 17:05:32.217865 2572 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd62cd1a9_7469_4121_aee9_c23889606c61.slice/crio-bf53207a5cffb54f0fd0f2e61a71de7c26fb2624071e753402564cd886f7cf75.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd62cd1a9_7469_4121_aee9_c23889606c61.slice/crio-conmon-bf53207a5cffb54f0fd0f2e61a71de7c26fb2624071e753402564cd886f7cf75.scope\": RecentStats: unable to find data in memory cache]" Apr 16 17:05:32.218320 ip-10-0-128-173 kubenswrapper[2572]: E0416 17:05:32.217861 2572 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podace6f6ff_6b2e_4aec_bde2_9007ab3736d0.slice/crio-conmon-043826b4ef7c4e04279389a2f70c824173000f5ecb2368d01187d3674db6d169.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd62cd1a9_7469_4121_aee9_c23889606c61.slice/crio-bf53207a5cffb54f0fd0f2e61a71de7c26fb2624071e753402564cd886f7cf75.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd62cd1a9_7469_4121_aee9_c23889606c61.slice/crio-conmon-bf53207a5cffb54f0fd0f2e61a71de7c26fb2624071e753402564cd886f7cf75.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podace6f6ff_6b2e_4aec_bde2_9007ab3736d0.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podace6f6ff_6b2e_4aec_bde2_9007ab3736d0.slice/crio-043826b4ef7c4e04279389a2f70c824173000f5ecb2368d01187d3674db6d169.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podace6f6ff_6b2e_4aec_bde2_9007ab3736d0.slice/crio-c0444a2fd5820e6fdb74da80f2f14be75dc029943adff46228e86783433fe52e\": RecentStats: unable to find data in memory cache]" Apr 16 17:05:32.218320 ip-10-0-128-173 kubenswrapper[2572]: E0416 17:05:32.217982 2572 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd62cd1a9_7469_4121_aee9_c23889606c61.slice/crio-conmon-bf53207a5cffb54f0fd0f2e61a71de7c26fb2624071e753402564cd886f7cf75.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd62cd1a9_7469_4121_aee9_c23889606c61.slice/crio-bf53207a5cffb54f0fd0f2e61a71de7c26fb2624071e753402564cd886f7cf75.scope\": RecentStats: unable to find data in memory cache]" Apr 16 17:05:32.218320 ip-10-0-128-173 kubenswrapper[2572]: E0416 17:05:32.218074 2572 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podace6f6ff_6b2e_4aec_bde2_9007ab3736d0.slice/crio-043826b4ef7c4e04279389a2f70c824173000f5ecb2368d01187d3674db6d169.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podace6f6ff_6b2e_4aec_bde2_9007ab3736d0.slice/crio-c0444a2fd5820e6fdb74da80f2f14be75dc029943adff46228e86783433fe52e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podace6f6ff_6b2e_4aec_bde2_9007ab3736d0.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd62cd1a9_7469_4121_aee9_c23889606c61.slice/crio-bf53207a5cffb54f0fd0f2e61a71de7c26fb2624071e753402564cd886f7cf75.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podace6f6ff_6b2e_4aec_bde2_9007ab3736d0.slice/crio-conmon-043826b4ef7c4e04279389a2f70c824173000f5ecb2368d01187d3674db6d169.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd62cd1a9_7469_4121_aee9_c23889606c61.slice/crio-conmon-bf53207a5cffb54f0fd0f2e61a71de7c26fb2624071e753402564cd886f7cf75.scope\": RecentStats: unable to find data in memory cache]" Apr 16 17:05:32.218528 ip-10-0-128-173 kubenswrapper[2572]: E0416 17:05:32.218392 2572 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd62cd1a9_7469_4121_aee9_c23889606c61.slice/crio-bf53207a5cffb54f0fd0f2e61a71de7c26fb2624071e753402564cd886f7cf75.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd62cd1a9_7469_4121_aee9_c23889606c61.slice/crio-conmon-bf53207a5cffb54f0fd0f2e61a71de7c26fb2624071e753402564cd886f7cf75.scope\": RecentStats: unable to find data in memory cache]" Apr 16 17:05:32.352507 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:32.352485 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-df533-65fbb7d4f-xpzqr" Apr 16 17:05:32.502152 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:32.502064 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d62cd1a9-7469-4121-aee9-c23889606c61-proxy-tls\") pod \"d62cd1a9-7469-4121-aee9-c23889606c61\" (UID: \"d62cd1a9-7469-4121-aee9-c23889606c61\") " Apr 16 17:05:32.502152 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:32.502103 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d62cd1a9-7469-4121-aee9-c23889606c61-openshift-service-ca-bundle\") pod \"d62cd1a9-7469-4121-aee9-c23889606c61\" (UID: \"d62cd1a9-7469-4121-aee9-c23889606c61\") " Apr 16 17:05:32.502521 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:32.502494 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d62cd1a9-7469-4121-aee9-c23889606c61-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "d62cd1a9-7469-4121-aee9-c23889606c61" (UID: "d62cd1a9-7469-4121-aee9-c23889606c61"). InnerVolumeSpecName "openshift-service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 17:05:32.504169 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:32.504144 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d62cd1a9-7469-4121-aee9-c23889606c61-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d62cd1a9-7469-4121-aee9-c23889606c61" (UID: "d62cd1a9-7469-4121-aee9-c23889606c61"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 17:05:32.603479 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:32.603436 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d62cd1a9-7469-4121-aee9-c23889606c61-proxy-tls\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 17:05:32.603479 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:32.603475 2572 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d62cd1a9-7469-4121-aee9-c23889606c61-openshift-service-ca-bundle\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 17:05:32.991553 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:32.991519 2572 generic.go:358] "Generic (PLEG): container finished" podID="d62cd1a9-7469-4121-aee9-c23889606c61" containerID="bf53207a5cffb54f0fd0f2e61a71de7c26fb2624071e753402564cd886f7cf75" exitCode=0 Apr 16 17:05:32.991723 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:32.991581 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-df533-65fbb7d4f-xpzqr" Apr 16 17:05:32.991723 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:32.991602 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-df533-65fbb7d4f-xpzqr" event={"ID":"d62cd1a9-7469-4121-aee9-c23889606c61","Type":"ContainerDied","Data":"bf53207a5cffb54f0fd0f2e61a71de7c26fb2624071e753402564cd886f7cf75"} Apr 16 17:05:32.991723 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:32.991639 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-df533-65fbb7d4f-xpzqr" event={"ID":"d62cd1a9-7469-4121-aee9-c23889606c61","Type":"ContainerDied","Data":"45b47be100a581d69e1c91ff001d8e227568d0d567fa2da8354b848b4739f591"} Apr 16 17:05:32.991723 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:32.991655 2572 scope.go:117] "RemoveContainer" containerID="bf53207a5cffb54f0fd0f2e61a71de7c26fb2624071e753402564cd886f7cf75" Apr 16 17:05:33.000378 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:33.000355 2572 scope.go:117] "RemoveContainer" containerID="bf53207a5cffb54f0fd0f2e61a71de7c26fb2624071e753402564cd886f7cf75" Apr 16 17:05:33.000650 ip-10-0-128-173 kubenswrapper[2572]: E0416 17:05:33.000629 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf53207a5cffb54f0fd0f2e61a71de7c26fb2624071e753402564cd886f7cf75\": container with ID starting with bf53207a5cffb54f0fd0f2e61a71de7c26fb2624071e753402564cd886f7cf75 not found: ID does not exist" containerID="bf53207a5cffb54f0fd0f2e61a71de7c26fb2624071e753402564cd886f7cf75" Apr 16 17:05:33.000700 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:33.000663 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf53207a5cffb54f0fd0f2e61a71de7c26fb2624071e753402564cd886f7cf75"} err="failed to get container status 
\"bf53207a5cffb54f0fd0f2e61a71de7c26fb2624071e753402564cd886f7cf75\": rpc error: code = NotFound desc = could not find container \"bf53207a5cffb54f0fd0f2e61a71de7c26fb2624071e753402564cd886f7cf75\": container with ID starting with bf53207a5cffb54f0fd0f2e61a71de7c26fb2624071e753402564cd886f7cf75 not found: ID does not exist" Apr 16 17:05:33.014204 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:33.014175 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-df533-65fbb7d4f-xpzqr"] Apr 16 17:05:33.018538 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:33.018517 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-df533-65fbb7d4f-xpzqr"] Apr 16 17:05:33.893028 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:33.892986 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-45e11-predictor-686d55cc79-j4qw4" podUID="be0b3717-18f2-4c58-aeb9-b15a1e676563" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.45:8080: connect: connection refused" Apr 16 17:05:33.909656 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:33.909628 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d62cd1a9-7469-4121-aee9-c23889606c61" path="/var/lib/kubelet/pods/d62cd1a9-7469-4121-aee9-c23889606c61/volumes" Apr 16 17:05:34.986801 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:34.986770 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/switch-graph-b9f91-76696f6fcb-nw6hn" Apr 16 17:05:43.892356 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:43.892309 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-45e11-predictor-686d55cc79-j4qw4" podUID="be0b3717-18f2-4c58-aeb9-b15a1e676563" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.45:8080: connect: connection refused" Apr 16 17:05:53.894149 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:05:53.894111 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-45e11-predictor-686d55cc79-j4qw4" Apr 16 17:06:12.413427 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:06:12.413382 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/splitter-graph-45e11-8475c5d8c4-ngt97"] Apr 16 17:06:12.413946 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:06:12.413926 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d62cd1a9-7469-4121-aee9-c23889606c61" containerName="splitter-graph-df533" Apr 16 17:06:12.414022 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:06:12.413950 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="d62cd1a9-7469-4121-aee9-c23889606c61" containerName="splitter-graph-df533" Apr 16 17:06:12.414076 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:06:12.414056 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="d62cd1a9-7469-4121-aee9-c23889606c61" containerName="splitter-graph-df533" Apr 16 17:06:12.416892 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:06:12.416869 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-45e11-8475c5d8c4-ngt97" Apr 16 17:06:12.419382 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:06:12.419363 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"splitter-graph-45e11-kube-rbac-proxy-sar-config\"" Apr 16 17:06:12.419496 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:06:12.419481 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"splitter-graph-45e11-serving-cert\"" Apr 16 17:06:12.423698 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:06:12.423672 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-45e11-8475c5d8c4-ngt97"] Apr 16 17:06:12.540231 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:06:12.540195 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80-proxy-tls\") pod \"splitter-graph-45e11-8475c5d8c4-ngt97\" (UID: \"7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80\") " pod="kserve-ci-e2e-test/splitter-graph-45e11-8475c5d8c4-ngt97" Apr 16 17:06:12.540231 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:06:12.540257 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80-openshift-service-ca-bundle\") pod \"splitter-graph-45e11-8475c5d8c4-ngt97\" (UID: \"7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80\") " pod="kserve-ci-e2e-test/splitter-graph-45e11-8475c5d8c4-ngt97" Apr 16 17:06:12.641569 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:06:12.641530 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80-proxy-tls\") pod \"splitter-graph-45e11-8475c5d8c4-ngt97\" (UID: \"7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80\") " pod="kserve-ci-e2e-test/splitter-graph-45e11-8475c5d8c4-ngt97" Apr 16 17:06:12.641744 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:06:12.641582 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80-openshift-service-ca-bundle\") pod \"splitter-graph-45e11-8475c5d8c4-ngt97\" (UID: \"7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80\") " pod="kserve-ci-e2e-test/splitter-graph-45e11-8475c5d8c4-ngt97" Apr 16 17:06:12.642295 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:06:12.642276 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80-openshift-service-ca-bundle\") pod \"splitter-graph-45e11-8475c5d8c4-ngt97\" (UID: \"7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80\") " pod="kserve-ci-e2e-test/splitter-graph-45e11-8475c5d8c4-ngt97" Apr 16 17:06:12.644003 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:06:12.643986 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80-proxy-tls\") pod \"splitter-graph-45e11-8475c5d8c4-ngt97\" (UID: \"7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80\") " pod="kserve-ci-e2e-test/splitter-graph-45e11-8475c5d8c4-ngt97" Apr 16 17:06:12.728572 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:06:12.728537 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-45e11-8475c5d8c4-ngt97" Apr 16 17:06:13.056906 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:06:13.056817 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-45e11-8475c5d8c4-ngt97"] Apr 16 17:06:13.060319 ip-10-0-128-173 kubenswrapper[2572]: W0416 17:06:13.060290 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ca2f9d4_0e4e_4a07_814a_13eb9ca82d80.slice/crio-d5a89ff7397715ae8bc293c16f34c0ee50367fc34843d0f730bed2fd7950c050 WatchSource:0}: Error finding container d5a89ff7397715ae8bc293c16f34c0ee50367fc34843d0f730bed2fd7950c050: Status 404 returned error can't find the container with id d5a89ff7397715ae8bc293c16f34c0ee50367fc34843d0f730bed2fd7950c050 Apr 16 17:06:13.128053 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:06:13.128019 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-45e11-8475c5d8c4-ngt97" event={"ID":"7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80","Type":"ContainerStarted","Data":"db6c87f31b17208a503f3dc508d0ba6da9724573406646b85e6e10b170a25671"} Apr 16 17:06:13.128217 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:06:13.128061 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-45e11-8475c5d8c4-ngt97" event={"ID":"7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80","Type":"ContainerStarted","Data":"d5a89ff7397715ae8bc293c16f34c0ee50367fc34843d0f730bed2fd7950c050"} Apr 16 17:06:13.128217 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:06:13.128135 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/splitter-graph-45e11-8475c5d8c4-ngt97" Apr 16 17:06:13.144839 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:06:13.144778 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/splitter-graph-45e11-8475c5d8c4-ngt97" podStartSLOduration=1.144758962 podStartE2EDuration="1.144758962s" podCreationTimestamp="2026-04-16 17:06:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:06:13.143870709 +0000 UTC m=+2529.922717602" watchObservedRunningTime="2026-04-16 17:06:13.144758962 +0000 UTC m=+2529.923605853" Apr 16 17:06:19.139635 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:06:19.139606 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/splitter-graph-45e11-8475c5d8c4-ngt97" Apr 16 17:14:27.002521 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:27.002480 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-45e11-8475c5d8c4-ngt97"] Apr 16 17:14:27.004849 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:27.002760 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/splitter-graph-45e11-8475c5d8c4-ngt97" podUID="7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80" containerName="splitter-graph-45e11" containerID="cri-o://db6c87f31b17208a503f3dc508d0ba6da9724573406646b85e6e10b170a25671" gracePeriod=30 Apr 16 17:14:27.092145 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:27.092111 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-45e11-predictor-686d55cc79-j4qw4"] Apr 16 17:14:27.092427 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:27.092400 2572 kuberuntime_container.go:864] "Killing container 
with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-45e11-predictor-686d55cc79-j4qw4" podUID="be0b3717-18f2-4c58-aeb9-b15a1e676563" containerName="kserve-container" containerID="cri-o://d55ea6aeb4269640f5b32535333670ba6000181639b21edf1b42c13daa3f541d" gracePeriod=30 Apr 16 17:14:29.138583 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:29.138536 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-45e11-8475c5d8c4-ngt97" podUID="7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80" containerName="splitter-graph-45e11" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:14:30.335290 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:30.335239 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-45e11-predictor-686d55cc79-j4qw4" Apr 16 17:14:30.805155 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:30.805122 2572 generic.go:358] "Generic (PLEG): container finished" podID="be0b3717-18f2-4c58-aeb9-b15a1e676563" containerID="d55ea6aeb4269640f5b32535333670ba6000181639b21edf1b42c13daa3f541d" exitCode=0 Apr 16 17:14:30.805368 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:30.805185 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-45e11-predictor-686d55cc79-j4qw4" Apr 16 17:14:30.805368 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:30.805210 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-45e11-predictor-686d55cc79-j4qw4" event={"ID":"be0b3717-18f2-4c58-aeb9-b15a1e676563","Type":"ContainerDied","Data":"d55ea6aeb4269640f5b32535333670ba6000181639b21edf1b42c13daa3f541d"} Apr 16 17:14:30.805368 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:30.805258 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-45e11-predictor-686d55cc79-j4qw4" event={"ID":"be0b3717-18f2-4c58-aeb9-b15a1e676563","Type":"ContainerDied","Data":"934744dc0c120538a1fe6ea2d4c35f713fbac902b241c95d68cb33304c305d79"} Apr 16 17:14:30.805368 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:30.805277 2572 scope.go:117] "RemoveContainer" containerID="d55ea6aeb4269640f5b32535333670ba6000181639b21edf1b42c13daa3f541d" Apr 16 17:14:30.813272 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:30.813232 2572 scope.go:117] "RemoveContainer" containerID="d55ea6aeb4269640f5b32535333670ba6000181639b21edf1b42c13daa3f541d" Apr 16 17:14:30.813519 ip-10-0-128-173 kubenswrapper[2572]: E0416 17:14:30.813496 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d55ea6aeb4269640f5b32535333670ba6000181639b21edf1b42c13daa3f541d\": container with ID starting with d55ea6aeb4269640f5b32535333670ba6000181639b21edf1b42c13daa3f541d not found: ID does not exist" containerID="d55ea6aeb4269640f5b32535333670ba6000181639b21edf1b42c13daa3f541d" Apr 16 17:14:30.813594 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:30.813527 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d55ea6aeb4269640f5b32535333670ba6000181639b21edf1b42c13daa3f541d"} err="failed to get container status \"d55ea6aeb4269640f5b32535333670ba6000181639b21edf1b42c13daa3f541d\": rpc error: code = NotFound desc = could not find container \"d55ea6aeb4269640f5b32535333670ba6000181639b21edf1b42c13daa3f541d\": container with ID starting with 
d55ea6aeb4269640f5b32535333670ba6000181639b21edf1b42c13daa3f541d not found: ID does not exist" Apr 16 17:14:30.825181 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:30.825154 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-45e11-predictor-686d55cc79-j4qw4"] Apr 16 17:14:30.827122 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:30.827102 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-45e11-predictor-686d55cc79-j4qw4"] Apr 16 17:14:31.910308 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:31.910277 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be0b3717-18f2-4c58-aeb9-b15a1e676563" path="/var/lib/kubelet/pods/be0b3717-18f2-4c58-aeb9-b15a1e676563/volumes" Apr 16 17:14:34.138315 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:34.138281 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-45e11-8475c5d8c4-ngt97" podUID="7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80" containerName="splitter-graph-45e11" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:14:39.137903 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:39.137864 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-45e11-8475c5d8c4-ngt97" podUID="7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80" containerName="splitter-graph-45e11" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:14:39.138394 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:39.137996 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/splitter-graph-45e11-8475c5d8c4-ngt97" Apr 16 17:14:44.137909 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:44.137864 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-45e11-8475c5d8c4-ngt97" podUID="7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80" containerName="splitter-graph-45e11" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:14:49.138538 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:49.138491 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-45e11-8475c5d8c4-ngt97" podUID="7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80" containerName="splitter-graph-45e11" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:14:54.138545 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:54.138456 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-45e11-8475c5d8c4-ngt97" podUID="7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80" containerName="splitter-graph-45e11" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:14:57.175834 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:57.175811 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-45e11-8475c5d8c4-ngt97" Apr 16 17:14:57.240535 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:57.240499 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80-openshift-service-ca-bundle\") pod \"7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80\" (UID: \"7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80\") " Apr 16 17:14:57.240535 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:57.240536 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80-proxy-tls\") pod \"7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80\" (UID: \"7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80\") " Apr 16 17:14:57.240865 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:57.240841 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80" (UID: "7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 17:14:57.242626 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:57.242607 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80" (UID: "7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 17:14:57.341377 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:57.341289 2572 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80-openshift-service-ca-bundle\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 17:14:57.341377 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:57.341323 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80-proxy-tls\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 17:14:57.900317 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:57.900283 2572 generic.go:358] "Generic (PLEG): container finished" podID="7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80" containerID="db6c87f31b17208a503f3dc508d0ba6da9724573406646b85e6e10b170a25671" exitCode=137 Apr 16 17:14:57.900494 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:57.900344 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-45e11-8475c5d8c4-ngt97" Apr 16 17:14:57.900494 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:57.900364 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-45e11-8475c5d8c4-ngt97" event={"ID":"7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80","Type":"ContainerDied","Data":"db6c87f31b17208a503f3dc508d0ba6da9724573406646b85e6e10b170a25671"} Apr 16 17:14:57.900494 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:57.900403 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-45e11-8475c5d8c4-ngt97" event={"ID":"7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80","Type":"ContainerDied","Data":"d5a89ff7397715ae8bc293c16f34c0ee50367fc34843d0f730bed2fd7950c050"} Apr 16 17:14:57.900494 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:57.900423 2572 scope.go:117] "RemoveContainer" containerID="db6c87f31b17208a503f3dc508d0ba6da9724573406646b85e6e10b170a25671" Apr 16 17:14:57.908737 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:57.908620 2572 scope.go:117] "RemoveContainer" containerID="db6c87f31b17208a503f3dc508d0ba6da9724573406646b85e6e10b170a25671" Apr 16 17:14:57.908915 ip-10-0-128-173 kubenswrapper[2572]: E0416 17:14:57.908876 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db6c87f31b17208a503f3dc508d0ba6da9724573406646b85e6e10b170a25671\": container with ID starting with db6c87f31b17208a503f3dc508d0ba6da9724573406646b85e6e10b170a25671 not found: ID does not exist" containerID="db6c87f31b17208a503f3dc508d0ba6da9724573406646b85e6e10b170a25671" Apr 16 17:14:57.909030 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:57.908914 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db6c87f31b17208a503f3dc508d0ba6da9724573406646b85e6e10b170a25671"} err="failed to get container status \"db6c87f31b17208a503f3dc508d0ba6da9724573406646b85e6e10b170a25671\": rpc error: code = NotFound desc = could not find container \"db6c87f31b17208a503f3dc508d0ba6da9724573406646b85e6e10b170a25671\": container with ID starting with db6c87f31b17208a503f3dc508d0ba6da9724573406646b85e6e10b170a25671 not found: ID does not exist" Apr 16 17:14:57.922480 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:57.922446 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-45e11-8475c5d8c4-ngt97"] Apr 16 17:14:57.927982 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:57.927957 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-45e11-8475c5d8c4-ngt97"] Apr 16 17:14:59.910137 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:14:59.910101 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80" path="/var/lib/kubelet/pods/7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80/volumes" Apr 16 17:21:46.526011 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:21:46.525971 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-b9f91-76696f6fcb-nw6hn"] Apr 16 17:21:46.529284 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:21:46.526736 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/switch-graph-b9f91-76696f6fcb-nw6hn" podUID="d695fc0a-51a9-4ef8-91f6-db4665e6a315" containerName="switch-graph-b9f91" containerID="cri-o://d4467e470e5af3add24f81ecb30ee711e7a428c93aa624c3022574e876be2398" gracePeriod=30 Apr 
16 17:21:46.633312 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:21:46.633240 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-b9f91-predictor-bf89fb6c7-rjfsd"] Apr 16 17:21:46.633545 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:21:46.633506 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-b9f91-predictor-bf89fb6c7-rjfsd" podUID="92833bc1-eea4-4514-a009-9753726ecfd4" containerName="kserve-container" containerID="cri-o://721d0d33508ba5e5554b9c82f753f714e10a283e3c7c7b3409410a659d5a4638" gracePeriod=30 Apr 16 17:21:48.731205 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:21:48.731154 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-b9f91-predictor-bf89fb6c7-rjfsd" podUID="92833bc1-eea4-4514-a009-9753726ecfd4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.133.0.43:8080: connect: connection refused" Apr 16 17:21:49.578286 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:21:49.578264 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-b9f91-predictor-bf89fb6c7-rjfsd" Apr 16 17:21:49.985138 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:21:49.985101 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-b9f91-76696f6fcb-nw6hn" podUID="d695fc0a-51a9-4ef8-91f6-db4665e6a315" containerName="switch-graph-b9f91" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:21:50.248369 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:21:50.248332 2572 generic.go:358] "Generic (PLEG): container finished" podID="92833bc1-eea4-4514-a009-9753726ecfd4" containerID="721d0d33508ba5e5554b9c82f753f714e10a283e3c7c7b3409410a659d5a4638" exitCode=0 Apr 16 17:21:50.248562 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:21:50.248379 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-b9f91-predictor-bf89fb6c7-rjfsd" event={"ID":"92833bc1-eea4-4514-a009-9753726ecfd4","Type":"ContainerDied","Data":"721d0d33508ba5e5554b9c82f753f714e10a283e3c7c7b3409410a659d5a4638"} Apr 16 17:21:50.248562 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:21:50.248407 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-b9f91-predictor-bf89fb6c7-rjfsd" event={"ID":"92833bc1-eea4-4514-a009-9753726ecfd4","Type":"ContainerDied","Data":"726534dfe21fc1f1f5fb9c596f3bbcddbdf414559e495292815776ff037dd04e"} Apr 16 17:21:50.248562 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:21:50.248427 2572 scope.go:117] "RemoveContainer" containerID="721d0d33508ba5e5554b9c82f753f714e10a283e3c7c7b3409410a659d5a4638" Apr 16 17:21:50.248562 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:21:50.248433 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-b9f91-predictor-bf89fb6c7-rjfsd" Apr 16 17:21:50.256395 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:21:50.256373 2572 scope.go:117] "RemoveContainer" containerID="721d0d33508ba5e5554b9c82f753f714e10a283e3c7c7b3409410a659d5a4638" Apr 16 17:21:50.256640 ip-10-0-128-173 kubenswrapper[2572]: E0416 17:21:50.256620 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"721d0d33508ba5e5554b9c82f753f714e10a283e3c7c7b3409410a659d5a4638\": container with ID starting with 721d0d33508ba5e5554b9c82f753f714e10a283e3c7c7b3409410a659d5a4638 not found: ID does not exist" containerID="721d0d33508ba5e5554b9c82f753f714e10a283e3c7c7b3409410a659d5a4638" Apr 16 17:21:50.256706 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:21:50.256647 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"721d0d33508ba5e5554b9c82f753f714e10a283e3c7c7b3409410a659d5a4638"} err="failed to get container status \"721d0d33508ba5e5554b9c82f753f714e10a283e3c7c7b3409410a659d5a4638\": rpc error: code = NotFound desc = could not find container \"721d0d33508ba5e5554b9c82f753f714e10a283e3c7c7b3409410a659d5a4638\": container with ID starting with 721d0d33508ba5e5554b9c82f753f714e10a283e3c7c7b3409410a659d5a4638 not found: ID does not exist" Apr 16 17:21:50.263826 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:21:50.263801 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-b9f91-predictor-bf89fb6c7-rjfsd"] Apr 16 17:21:50.267506 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:21:50.267484 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-b9f91-predictor-bf89fb6c7-rjfsd"] Apr 16 17:21:51.910037 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:21:51.910002 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92833bc1-eea4-4514-a009-9753726ecfd4" path="/var/lib/kubelet/pods/92833bc1-eea4-4514-a009-9753726ecfd4/volumes" Apr 16 17:21:54.984862 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:21:54.984823 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-b9f91-76696f6fcb-nw6hn" podUID="d695fc0a-51a9-4ef8-91f6-db4665e6a315" containerName="switch-graph-b9f91" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:21:59.985152 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:21:59.985112 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-b9f91-76696f6fcb-nw6hn" podUID="d695fc0a-51a9-4ef8-91f6-db4665e6a315" containerName="switch-graph-b9f91" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:21:59.985579 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:21:59.985234 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/switch-graph-b9f91-76696f6fcb-nw6hn" Apr 16 17:22:02.242526 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:02.242492 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-b9f91-76696f6fcb-nw6hn_d695fc0a-51a9-4ef8-91f6-db4665e6a315/switch-graph-b9f91/0.log" Apr 16 17:22:02.998164 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:02.998102 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-b9f91-76696f6fcb-nw6hn_d695fc0a-51a9-4ef8-91f6-db4665e6a315/switch-graph-b9f91/0.log" Apr 16 
17:22:03.721519 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:03.721492 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-b9f91-76696f6fcb-nw6hn_d695fc0a-51a9-4ef8-91f6-db4665e6a315/switch-graph-b9f91/0.log" Apr 16 17:22:04.448219 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:04.448188 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-b9f91-76696f6fcb-nw6hn_d695fc0a-51a9-4ef8-91f6-db4665e6a315/switch-graph-b9f91/0.log" Apr 16 17:22:04.987182 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:04.987147 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-b9f91-76696f6fcb-nw6hn" podUID="d695fc0a-51a9-4ef8-91f6-db4665e6a315" containerName="switch-graph-b9f91" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:22:05.186259 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:05.186213 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-b9f91-76696f6fcb-nw6hn_d695fc0a-51a9-4ef8-91f6-db4665e6a315/switch-graph-b9f91/0.log" Apr 16 17:22:05.907070 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:05.907028 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-b9f91-76696f6fcb-nw6hn_d695fc0a-51a9-4ef8-91f6-db4665e6a315/switch-graph-b9f91/0.log" Apr 16 17:22:06.642803 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:06.642771 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-b9f91-76696f6fcb-nw6hn_d695fc0a-51a9-4ef8-91f6-db4665e6a315/switch-graph-b9f91/0.log" Apr 16 17:22:07.353594 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:07.353565 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-b9f91-76696f6fcb-nw6hn_d695fc0a-51a9-4ef8-91f6-db4665e6a315/switch-graph-b9f91/0.log" Apr 16 17:22:08.072197 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:08.072165 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-b9f91-76696f6fcb-nw6hn_d695fc0a-51a9-4ef8-91f6-db4665e6a315/switch-graph-b9f91/0.log" Apr 16 17:22:08.841880 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:08.841843 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-b9f91-76696f6fcb-nw6hn_d695fc0a-51a9-4ef8-91f6-db4665e6a315/switch-graph-b9f91/0.log" Apr 16 17:22:09.601546 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:09.601514 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-b9f91-76696f6fcb-nw6hn_d695fc0a-51a9-4ef8-91f6-db4665e6a315/switch-graph-b9f91/0.log" Apr 16 17:22:09.985004 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:09.984964 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-b9f91-76696f6fcb-nw6hn" podUID="d695fc0a-51a9-4ef8-91f6-db4665e6a315" containerName="switch-graph-b9f91" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:22:10.357482 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:10.357399 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-b9f91-76696f6fcb-nw6hn_d695fc0a-51a9-4ef8-91f6-db4665e6a315/switch-graph-b9f91/0.log" Apr 16 17:22:14.985182 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:14.985140 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-b9f91-76696f6fcb-nw6hn" 
podUID="d695fc0a-51a9-4ef8-91f6-db4665e6a315" containerName="switch-graph-b9f91" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 17:22:15.245500 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:15.245405 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_global-pull-secret-syncer-ngxj6_c1e6d4a3-5bed-4a4a-982e-2bc52481870a/global-pull-secret-syncer/0.log" Apr 16 17:22:15.384489 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:15.384450 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_konnectivity-agent-94pcx_e894fdf2-0ea2-43a3-a40d-03eca2359199/konnectivity-agent/0.log" Apr 16 17:22:15.442261 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:15.442208 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_kube-apiserver-proxy-ip-10-0-128-173.ec2.internal_a454133692b7d59775381b8452362b38/haproxy/0.log" Apr 16 17:22:16.676538 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:16.676515 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-b9f91-76696f6fcb-nw6hn" Apr 16 17:22:16.824848 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:16.824743 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d695fc0a-51a9-4ef8-91f6-db4665e6a315-openshift-service-ca-bundle\") pod \"d695fc0a-51a9-4ef8-91f6-db4665e6a315\" (UID: \"d695fc0a-51a9-4ef8-91f6-db4665e6a315\") " Apr 16 17:22:16.824848 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:16.824855 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d695fc0a-51a9-4ef8-91f6-db4665e6a315-proxy-tls\") pod \"d695fc0a-51a9-4ef8-91f6-db4665e6a315\" (UID: \"d695fc0a-51a9-4ef8-91f6-db4665e6a315\") " Apr 16 17:22:16.825169 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:16.825147 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d695fc0a-51a9-4ef8-91f6-db4665e6a315-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "d695fc0a-51a9-4ef8-91f6-db4665e6a315" (UID: "d695fc0a-51a9-4ef8-91f6-db4665e6a315"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 17:22:16.826892 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:16.826869 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d695fc0a-51a9-4ef8-91f6-db4665e6a315-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d695fc0a-51a9-4ef8-91f6-db4665e6a315" (UID: "d695fc0a-51a9-4ef8-91f6-db4665e6a315"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 17:22:16.926440 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:16.926382 2572 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d695fc0a-51a9-4ef8-91f6-db4665e6a315-openshift-service-ca-bundle\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 17:22:16.926440 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:16.926433 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d695fc0a-51a9-4ef8-91f6-db4665e6a315-proxy-tls\") on node \"ip-10-0-128-173.ec2.internal\" DevicePath \"\"" Apr 16 17:22:17.340826 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:17.340789 2572 generic.go:358] "Generic (PLEG): container finished" podID="d695fc0a-51a9-4ef8-91f6-db4665e6a315" containerID="d4467e470e5af3add24f81ecb30ee711e7a428c93aa624c3022574e876be2398" exitCode=0 Apr 16 17:22:17.341077 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:17.340844 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-b9f91-76696f6fcb-nw6hn" event={"ID":"d695fc0a-51a9-4ef8-91f6-db4665e6a315","Type":"ContainerDied","Data":"d4467e470e5af3add24f81ecb30ee711e7a428c93aa624c3022574e876be2398"} Apr 16 17:22:17.341077 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:17.340872 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-b9f91-76696f6fcb-nw6hn" event={"ID":"d695fc0a-51a9-4ef8-91f6-db4665e6a315","Type":"ContainerDied","Data":"2f4e56d06c78cdb20c8593bef3afd5fcc3bbc6d21a7c6cd9603966704edd2fb6"} Apr 16 17:22:17.341077 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:17.340880 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-b9f91-76696f6fcb-nw6hn" Apr 16 17:22:17.341077 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:17.340886 2572 scope.go:117] "RemoveContainer" containerID="d4467e470e5af3add24f81ecb30ee711e7a428c93aa624c3022574e876be2398" Apr 16 17:22:17.349763 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:17.349739 2572 scope.go:117] "RemoveContainer" containerID="d4467e470e5af3add24f81ecb30ee711e7a428c93aa624c3022574e876be2398" Apr 16 17:22:17.350037 ip-10-0-128-173 kubenswrapper[2572]: E0416 17:22:17.350017 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4467e470e5af3add24f81ecb30ee711e7a428c93aa624c3022574e876be2398\": container with ID starting with d4467e470e5af3add24f81ecb30ee711e7a428c93aa624c3022574e876be2398 not found: ID does not exist" containerID="d4467e470e5af3add24f81ecb30ee711e7a428c93aa624c3022574e876be2398" Apr 16 17:22:17.350102 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:17.350051 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4467e470e5af3add24f81ecb30ee711e7a428c93aa624c3022574e876be2398"} err="failed to get container status \"d4467e470e5af3add24f81ecb30ee711e7a428c93aa624c3022574e876be2398\": rpc error: code = NotFound desc = could not find container \"d4467e470e5af3add24f81ecb30ee711e7a428c93aa624c3022574e876be2398\": container with ID starting with d4467e470e5af3add24f81ecb30ee711e7a428c93aa624c3022574e876be2398 not found: ID does not exist" Apr 16 17:22:17.363080 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:17.363050 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-b9f91-76696f6fcb-nw6hn"] Apr 16 17:22:17.366655 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:17.366630 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/switch-graph-b9f91-76696f6fcb-nw6hn"] Apr 16 17:22:17.909698 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:17.909667 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d695fc0a-51a9-4ef8-91f6-db4665e6a315" path="/var/lib/kubelet/pods/d695fc0a-51a9-4ef8-91f6-db4665e6a315/volumes" Apr 16 17:22:19.038701 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:19.038667 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_6b148346-fcbc-424a-9f33-775948eaf93c/alertmanager/0.log" Apr 16 17:22:19.060588 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:19.060563 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_6b148346-fcbc-424a-9f33-775948eaf93c/config-reloader/0.log" Apr 16 17:22:19.085070 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:19.085044 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_6b148346-fcbc-424a-9f33-775948eaf93c/kube-rbac-proxy-web/0.log" Apr 16 17:22:19.105619 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:19.105592 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_6b148346-fcbc-424a-9f33-775948eaf93c/kube-rbac-proxy/0.log" Apr 16 17:22:19.147726 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:19.147696 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_6b148346-fcbc-424a-9f33-775948eaf93c/kube-rbac-proxy-metric/0.log" Apr 16 17:22:19.171579 ip-10-0-128-173 kubenswrapper[2572]: I0416 
17:22:19.171552 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_6b148346-fcbc-424a-9f33-775948eaf93c/prom-label-proxy/0.log" Apr 16 17:22:19.194859 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:19.194832 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_6b148346-fcbc-424a-9f33-775948eaf93c/init-config-reloader/0.log" Apr 16 17:22:19.340068 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:19.339983 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_metrics-server-6b58844877-ctvc6_75c7d8fa-2cc7-4063-97e0-e8029c17e6f8/metrics-server/0.log" Apr 16 17:22:19.365340 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:19.365306 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_monitoring-plugin-5876b4bbc7-jtfcf_1337a924-e282-4dfc-81f7-6bc7a3e4f272/monitoring-plugin/0.log" Apr 16 17:22:19.545452 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:19.545418 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-zkjg4_c646ba1a-2ec3-42be-ad8a-73615ad5d640/node-exporter/0.log" Apr 16 17:22:19.568305 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:19.568271 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-zkjg4_c646ba1a-2ec3-42be-ad8a-73615ad5d640/kube-rbac-proxy/0.log" Apr 16 17:22:19.587058 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:19.587034 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-zkjg4_c646ba1a-2ec3-42be-ad8a-73615ad5d640/init-textfile/0.log" Apr 16 17:22:19.900005 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:19.899973 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-admission-webhook-9cb97cd87-sck65_2205f8f0-ea2d-4582-acb9-994bc6c9921b/prometheus-operator-admission-webhook/0.log" Apr 16 17:22:19.933188 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:19.933153 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-7d7bd87b54-m7m26_f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50/telemeter-client/0.log" Apr 16 17:22:19.952525 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:19.952497 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-7d7bd87b54-m7m26_f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50/reload/0.log" Apr 16 17:22:19.976123 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:19.976097 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-7d7bd87b54-m7m26_f6c3ce1f-b585-4b92-aafe-f9b9a61f8a50/kube-rbac-proxy/0.log" Apr 16 17:22:20.005629 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:20.005600 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-84fb6c774c-m7wgx_0391a4f3-73de-49c9-9bec-43a34ad227ad/thanos-query/0.log" Apr 16 17:22:20.026373 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:20.026341 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-84fb6c774c-m7wgx_0391a4f3-73de-49c9-9bec-43a34ad227ad/kube-rbac-proxy-web/0.log" Apr 16 17:22:20.046642 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:20.046613 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-84fb6c774c-m7wgx_0391a4f3-73de-49c9-9bec-43a34ad227ad/kube-rbac-proxy/0.log" Apr 
16 17:22:20.066100 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:20.066065 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-84fb6c774c-m7wgx_0391a4f3-73de-49c9-9bec-43a34ad227ad/prom-label-proxy/0.log" Apr 16 17:22:20.085446 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:20.085422 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-84fb6c774c-m7wgx_0391a4f3-73de-49c9-9bec-43a34ad227ad/kube-rbac-proxy-rules/0.log" Apr 16 17:22:20.104328 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:20.104304 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-84fb6c774c-m7wgx_0391a4f3-73de-49c9-9bec-43a34ad227ad/kube-rbac-proxy-metrics/0.log" Apr 16 17:22:22.034141 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:22.034112 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6468586f66-jgj22_68a222c3-09f6-4cfd-a36e-bb6380d02276/console/0.log" Apr 16 17:22:22.651765 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:22.651727 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-65l7w/perf-node-gather-daemonset-ln5nm"] Apr 16 17:22:22.652131 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:22.652113 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d695fc0a-51a9-4ef8-91f6-db4665e6a315" containerName="switch-graph-b9f91" Apr 16 17:22:22.652303 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:22.652135 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="d695fc0a-51a9-4ef8-91f6-db4665e6a315" containerName="switch-graph-b9f91" Apr 16 17:22:22.652303 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:22.652146 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80" containerName="splitter-graph-45e11" Apr 16 17:22:22.652303 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:22.652154 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80" containerName="splitter-graph-45e11" Apr 16 17:22:22.652303 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:22.652178 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="92833bc1-eea4-4514-a009-9753726ecfd4" containerName="kserve-container" Apr 16 17:22:22.652303 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:22.652186 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="92833bc1-eea4-4514-a009-9753726ecfd4" containerName="kserve-container" Apr 16 17:22:22.652303 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:22.652200 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="be0b3717-18f2-4c58-aeb9-b15a1e676563" containerName="kserve-container" Apr 16 17:22:22.652303 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:22.652208 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="be0b3717-18f2-4c58-aeb9-b15a1e676563" containerName="kserve-container" Apr 16 17:22:22.652678 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:22.652310 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="be0b3717-18f2-4c58-aeb9-b15a1e676563" containerName="kserve-container" Apr 16 17:22:22.652678 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:22.652327 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="92833bc1-eea4-4514-a009-9753726ecfd4" containerName="kserve-container" Apr 16 17:22:22.652678 ip-10-0-128-173 kubenswrapper[2572]: 
I0416 17:22:22.652338 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="7ca2f9d4-0e4e-4a07-814a-13eb9ca82d80" containerName="splitter-graph-45e11" Apr 16 17:22:22.652678 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:22.652349 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="d695fc0a-51a9-4ef8-91f6-db4665e6a315" containerName="switch-graph-b9f91" Apr 16 17:22:22.656884 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:22.656862 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-65l7w/perf-node-gather-daemonset-ln5nm" Apr 16 17:22:22.659447 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:22.659421 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-65l7w\"/\"default-dockercfg-srjc9\"" Apr 16 17:22:22.659557 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:22.659444 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-65l7w\"/\"openshift-service-ca.crt\"" Apr 16 17:22:22.660578 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:22.660561 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-65l7w\"/\"kube-root-ca.crt\"" Apr 16 17:22:22.664583 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:22.664564 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-65l7w/perf-node-gather-daemonset-ln5nm"] Apr 16 17:22:22.775996 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:22.775956 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/ca159478-b3ed-46ec-9a6a-34a342613085-proc\") pod \"perf-node-gather-daemonset-ln5nm\" (UID: \"ca159478-b3ed-46ec-9a6a-34a342613085\") " pod="openshift-must-gather-65l7w/perf-node-gather-daemonset-ln5nm" Apr 16 17:22:22.776200 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:22.776021 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ca159478-b3ed-46ec-9a6a-34a342613085-sys\") pod \"perf-node-gather-daemonset-ln5nm\" (UID: \"ca159478-b3ed-46ec-9a6a-34a342613085\") " pod="openshift-must-gather-65l7w/perf-node-gather-daemonset-ln5nm" Apr 16 17:22:22.776200 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:22.776062 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca159478-b3ed-46ec-9a6a-34a342613085-lib-modules\") pod \"perf-node-gather-daemonset-ln5nm\" (UID: \"ca159478-b3ed-46ec-9a6a-34a342613085\") " pod="openshift-must-gather-65l7w/perf-node-gather-daemonset-ln5nm" Apr 16 17:22:22.776200 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:22.776089 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rxbx\" (UniqueName: \"kubernetes.io/projected/ca159478-b3ed-46ec-9a6a-34a342613085-kube-api-access-4rxbx\") pod \"perf-node-gather-daemonset-ln5nm\" (UID: \"ca159478-b3ed-46ec-9a6a-34a342613085\") " pod="openshift-must-gather-65l7w/perf-node-gather-daemonset-ln5nm" Apr 16 17:22:22.776371 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:22.776203 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/ca159478-b3ed-46ec-9a6a-34a342613085-podres\") pod 
\"perf-node-gather-daemonset-ln5nm\" (UID: \"ca159478-b3ed-46ec-9a6a-34a342613085\") " pod="openshift-must-gather-65l7w/perf-node-gather-daemonset-ln5nm" Apr 16 17:22:22.877460 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:22.877426 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ca159478-b3ed-46ec-9a6a-34a342613085-sys\") pod \"perf-node-gather-daemonset-ln5nm\" (UID: \"ca159478-b3ed-46ec-9a6a-34a342613085\") " pod="openshift-must-gather-65l7w/perf-node-gather-daemonset-ln5nm" Apr 16 17:22:22.877614 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:22.877474 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca159478-b3ed-46ec-9a6a-34a342613085-lib-modules\") pod \"perf-node-gather-daemonset-ln5nm\" (UID: \"ca159478-b3ed-46ec-9a6a-34a342613085\") " pod="openshift-must-gather-65l7w/perf-node-gather-daemonset-ln5nm" Apr 16 17:22:22.877614 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:22.877495 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4rxbx\" (UniqueName: \"kubernetes.io/projected/ca159478-b3ed-46ec-9a6a-34a342613085-kube-api-access-4rxbx\") pod \"perf-node-gather-daemonset-ln5nm\" (UID: \"ca159478-b3ed-46ec-9a6a-34a342613085\") " pod="openshift-must-gather-65l7w/perf-node-gather-daemonset-ln5nm" Apr 16 17:22:22.877614 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:22.877541 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/ca159478-b3ed-46ec-9a6a-34a342613085-podres\") pod \"perf-node-gather-daemonset-ln5nm\" (UID: \"ca159478-b3ed-46ec-9a6a-34a342613085\") " pod="openshift-must-gather-65l7w/perf-node-gather-daemonset-ln5nm" Apr 16 17:22:22.877614 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:22.877560 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/ca159478-b3ed-46ec-9a6a-34a342613085-proc\") pod \"perf-node-gather-daemonset-ln5nm\" (UID: \"ca159478-b3ed-46ec-9a6a-34a342613085\") " pod="openshift-must-gather-65l7w/perf-node-gather-daemonset-ln5nm" Apr 16 17:22:22.877614 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:22.877573 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ca159478-b3ed-46ec-9a6a-34a342613085-sys\") pod \"perf-node-gather-daemonset-ln5nm\" (UID: \"ca159478-b3ed-46ec-9a6a-34a342613085\") " pod="openshift-must-gather-65l7w/perf-node-gather-daemonset-ln5nm" Apr 16 17:22:22.877809 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:22.877620 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca159478-b3ed-46ec-9a6a-34a342613085-lib-modules\") pod \"perf-node-gather-daemonset-ln5nm\" (UID: \"ca159478-b3ed-46ec-9a6a-34a342613085\") " pod="openshift-must-gather-65l7w/perf-node-gather-daemonset-ln5nm" Apr 16 17:22:22.877809 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:22.877648 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/ca159478-b3ed-46ec-9a6a-34a342613085-proc\") pod \"perf-node-gather-daemonset-ln5nm\" (UID: \"ca159478-b3ed-46ec-9a6a-34a342613085\") " pod="openshift-must-gather-65l7w/perf-node-gather-daemonset-ln5nm" Apr 16 17:22:22.877809 ip-10-0-128-173 
kubenswrapper[2572]: I0416 17:22:22.877657 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/ca159478-b3ed-46ec-9a6a-34a342613085-podres\") pod \"perf-node-gather-daemonset-ln5nm\" (UID: \"ca159478-b3ed-46ec-9a6a-34a342613085\") " pod="openshift-must-gather-65l7w/perf-node-gather-daemonset-ln5nm" Apr 16 17:22:22.885754 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:22.885730 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rxbx\" (UniqueName: \"kubernetes.io/projected/ca159478-b3ed-46ec-9a6a-34a342613085-kube-api-access-4rxbx\") pod \"perf-node-gather-daemonset-ln5nm\" (UID: \"ca159478-b3ed-46ec-9a6a-34a342613085\") " pod="openshift-must-gather-65l7w/perf-node-gather-daemonset-ln5nm" Apr 16 17:22:22.967485 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:22.967444 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-65l7w/perf-node-gather-daemonset-ln5nm" Apr 16 17:22:23.093324 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:23.093152 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-65l7w/perf-node-gather-daemonset-ln5nm"] Apr 16 17:22:23.096133 ip-10-0-128-173 kubenswrapper[2572]: W0416 17:22:23.096108 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podca159478_b3ed_46ec_9a6a_34a342613085.slice/crio-9187f5fa31fd4d4fa3e1ce57e392a844bd7dfe7bea70a05e021d0a0ce118339a WatchSource:0}: Error finding container 9187f5fa31fd4d4fa3e1ce57e392a844bd7dfe7bea70a05e021d0a0ce118339a: Status 404 returned error can't find the container with id 9187f5fa31fd4d4fa3e1ce57e392a844bd7dfe7bea70a05e021d0a0ce118339a Apr 16 17:22:23.097734 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:23.097713 2572 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 17:22:23.187020 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:23.186982 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-z46sx_0def5104-1269-46d2-8b59-88b426ff3a84/dns/0.log" Apr 16 17:22:23.206830 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:23.206796 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-z46sx_0def5104-1269-46d2-8b59-88b426ff3a84/kube-rbac-proxy/0.log" Apr 16 17:22:23.271797 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:23.271762 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-xl6x7_7b3cb7b5-891a-4226-b146-221600c3471c/dns-node-resolver/0.log" Apr 16 17:22:23.362858 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:23.362825 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-65l7w/perf-node-gather-daemonset-ln5nm" event={"ID":"ca159478-b3ed-46ec-9a6a-34a342613085","Type":"ContainerStarted","Data":"fe59623a9952c6d0935a0da08c35a6a86f52a1f6b6b15f7233f53e0473dd1631"} Apr 16 17:22:23.362858 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:23.362862 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-65l7w/perf-node-gather-daemonset-ln5nm" event={"ID":"ca159478-b3ed-46ec-9a6a-34a342613085","Type":"ContainerStarted","Data":"9187f5fa31fd4d4fa3e1ce57e392a844bd7dfe7bea70a05e021d0a0ce118339a"} Apr 16 17:22:23.363086 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:23.362896 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-must-gather-65l7w/perf-node-gather-daemonset-ln5nm" Apr 16 17:22:23.378169 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:23.378113 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-65l7w/perf-node-gather-daemonset-ln5nm" podStartSLOduration=1.378096026 podStartE2EDuration="1.378096026s" podCreationTimestamp="2026-04-16 17:22:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 17:22:23.37729088 +0000 UTC m=+3500.156137772" watchObservedRunningTime="2026-04-16 17:22:23.378096026 +0000 UTC m=+3500.156942918" Apr 16 17:22:23.687979 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:23.687896 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_image-registry-856dbdf944-r9ssw_dd857a2c-74f8-404b-8c76-8b15616fb405/registry/0.log" Apr 16 17:22:23.712276 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:23.712230 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-42w7g_387e0e6a-5ac4-4dc7-a1df-d6fae9770c8d/node-ca/0.log" Apr 16 17:22:24.771789 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:24.771759 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-czxkg_424c900c-278c-4d36-8442-3875c8baf989/serve-healthcheck-canary/0.log" Apr 16 17:22:25.286996 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:25.286965 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-zn7jk_486fb98c-ba32-4f4a-b74b-77594655b680/kube-rbac-proxy/0.log" Apr 16 17:22:25.305406 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:25.305373 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-zn7jk_486fb98c-ba32-4f4a-b74b-77594655b680/exporter/0.log" Apr 16 17:22:25.325261 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:25.325219 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-zn7jk_486fb98c-ba32-4f4a-b74b-77594655b680/extractor/0.log" Apr 16 17:22:27.252699 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:27.252662 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_kserve-controller-manager-55c74f6fbc-k7wrx_2f20de96-0f1b-4f35-ac71-b9339e308f30/manager/0.log" Apr 16 17:22:27.272605 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:27.272581 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_llmisvc-controller-manager-68cc5db7c4-kltmk_cd521945-1fbc-4b2c-917c-1b4c1e668517/manager/0.log" Apr 16 17:22:29.376921 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:29.376891 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-65l7w/perf-node-gather-daemonset-ln5nm" Apr 16 17:22:32.564913 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:32.564837 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-gbkft_4d0d3535-86d0-4270-8087-38d613e5a0a5/kube-multus-additional-cni-plugins/0.log" Apr 16 17:22:32.588603 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:32.588532 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-gbkft_4d0d3535-86d0-4270-8087-38d613e5a0a5/egress-router-binary-copy/0.log" Apr 16 17:22:32.608548 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:32.608507 2572 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-gbkft_4d0d3535-86d0-4270-8087-38d613e5a0a5/cni-plugins/0.log" Apr 16 17:22:32.629293 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:32.629260 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-gbkft_4d0d3535-86d0-4270-8087-38d613e5a0a5/bond-cni-plugin/0.log" Apr 16 17:22:32.649198 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:32.649169 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-gbkft_4d0d3535-86d0-4270-8087-38d613e5a0a5/routeoverride-cni/0.log" Apr 16 17:22:32.672202 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:32.672170 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-gbkft_4d0d3535-86d0-4270-8087-38d613e5a0a5/whereabouts-cni-bincopy/0.log" Apr 16 17:22:32.707273 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:32.707220 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-gbkft_4d0d3535-86d0-4270-8087-38d613e5a0a5/whereabouts-cni/0.log" Apr 16 17:22:33.181291 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:33.181254 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jzd86_a24802b0-e2f8-4e44-8234-6c63975e7440/kube-multus/0.log" Apr 16 17:22:33.293701 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:33.293550 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-29gd5_5ec99558-e99b-4661-be0c-b68d311f226a/network-metrics-daemon/0.log" Apr 16 17:22:33.337573 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:33.337544 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-29gd5_5ec99558-e99b-4661-be0c-b68d311f226a/kube-rbac-proxy/0.log" Apr 16 17:22:34.579396 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:34.579341 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-csjjh_6f3381a4-23e0-42e8-b782-d6d4e6915910/ovn-controller/0.log" Apr 16 17:22:34.627238 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:34.627205 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-csjjh_6f3381a4-23e0-42e8-b782-d6d4e6915910/ovn-acl-logging/0.log" Apr 16 17:22:34.645745 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:34.645712 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-csjjh_6f3381a4-23e0-42e8-b782-d6d4e6915910/kube-rbac-proxy-node/0.log" Apr 16 17:22:34.665912 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:34.665879 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-csjjh_6f3381a4-23e0-42e8-b782-d6d4e6915910/kube-rbac-proxy-ovn-metrics/0.log" Apr 16 17:22:34.683819 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:34.683786 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-csjjh_6f3381a4-23e0-42e8-b782-d6d4e6915910/northd/0.log" Apr 16 17:22:34.702238 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:34.702213 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-csjjh_6f3381a4-23e0-42e8-b782-d6d4e6915910/nbdb/0.log" Apr 16 17:22:34.721682 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:34.721644 2572 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-csjjh_6f3381a4-23e0-42e8-b782-d6d4e6915910/sbdb/0.log" Apr 16 17:22:34.897122 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:34.897039 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-csjjh_6f3381a4-23e0-42e8-b782-d6d4e6915910/ovnkube-controller/0.log" Apr 16 17:22:36.165293 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:36.165239 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-target-225wk_b3cf9670-2b3f-4b96-aff6-4c414454a507/network-check-target-container/0.log" Apr 16 17:22:37.059551 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:37.059519 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_iptables-alerter-7962v_9adc281d-523f-4fc2-8784-22b5802e5ef5/iptables-alerter/0.log" Apr 16 17:22:37.721713 ip-10-0-128-173 kubenswrapper[2572]: I0416 17:22:37.721679 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-rvs52_fb61c011-2a8e-4af9-b55e-b16d5f329215/tuned/0.log"