Apr 24 16:45:16.301096 ip-10-0-129-204 systemd[1]: Starting Kubernetes Kubelet...
Apr 24 16:45:16.691647 ip-10-0-129-204 kubenswrapper[2578]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 24 16:45:16.691647 ip-10-0-129-204 kubenswrapper[2578]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Apr 24 16:45:16.691647 ip-10-0-129-204 kubenswrapper[2578]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 24 16:45:16.691647 ip-10-0-129-204 kubenswrapper[2578]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 24 16:45:16.691647 ip-10-0-129-204 kubenswrapper[2578]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 24 16:45:16.693183 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.693112 2578 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 24 16:45:16.696239 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696224 2578 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 24 16:45:16.696239 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696238 2578 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 24 16:45:16.696303 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696241 2578 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 24 16:45:16.696303 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696246 2578 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 24 16:45:16.696303 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696249 2578 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 24 16:45:16.696303 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696252 2578 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 24 16:45:16.696303 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696255 2578 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 24 16:45:16.696303 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696258 2578 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 24 16:45:16.696303 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696261 2578 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 24 16:45:16.696303 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696264 2578 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 24 16:45:16.696303 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696267 2578 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 24 16:45:16.696303 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696270 2578 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 24 16:45:16.696303 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696272 2578 feature_gate.go:328] unrecognized feature gate: Example
Apr 24 16:45:16.696303 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696275 2578 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 24 16:45:16.696303 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696278 2578 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 24 16:45:16.696303 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696281 2578 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 24 16:45:16.696303 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696283 2578 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 24 16:45:16.696303 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696286 2578 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 24 16:45:16.696303 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696294 2578 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 24 16:45:16.696303 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696296 2578 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 24 16:45:16.696303 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696299 2578 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 24 16:45:16.696303 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696302 2578 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 24 16:45:16.696785 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696304 2578 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 24 16:45:16.696785 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696336 2578 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 24 16:45:16.696785 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696340 2578 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 24 16:45:16.696785 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696343 2578 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 24 16:45:16.696785 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696346 2578 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 24 16:45:16.696785 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696349 2578 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 24 16:45:16.696785 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696351 2578 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 24 16:45:16.696785 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696354 2578 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 24 16:45:16.696785 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696356 2578 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 24 16:45:16.696785 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696359 2578 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 24 16:45:16.696785 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696361 2578 feature_gate.go:328] unrecognized feature gate: Example2
Apr 24 16:45:16.696785 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696364 2578 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 24 16:45:16.696785 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696366 2578 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 24 16:45:16.696785 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696369 2578 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 24 16:45:16.696785 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696371 2578 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 24 16:45:16.696785 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696374 2578 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 24 16:45:16.696785 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696376 2578 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 24 16:45:16.696785 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696379 2578 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 24 16:45:16.696785 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696381 2578 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 24 16:45:16.696785 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696384 2578 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 24 16:45:16.697292 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696386 2578 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 24 16:45:16.697292 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696389 2578 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 24 16:45:16.697292 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696392 2578 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 24 16:45:16.697292 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696394 2578 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 24 16:45:16.697292 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696397 2578 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 24 16:45:16.697292 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696399 2578 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 24 16:45:16.697292 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696402 2578 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 24 16:45:16.697292 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696404 2578 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 24 16:45:16.697292 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696407 2578 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 24 16:45:16.697292 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696411 2578 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 24 16:45:16.697292 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696415 2578 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 24 16:45:16.697292 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696419 2578 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 24 16:45:16.697292 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696422 2578 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 24 16:45:16.697292 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696426 2578 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 24 16:45:16.697292 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696429 2578 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 24 16:45:16.697292 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696432 2578 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 24 16:45:16.697292 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696435 2578 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 24 16:45:16.697292 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696437 2578 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 24 16:45:16.697292 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696440 2578 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 24 16:45:16.697903 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696443 2578 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 24 16:45:16.697903 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696446 2578 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 24 16:45:16.697903 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696449 2578 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 24 16:45:16.697903 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696451 2578 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 24 16:45:16.697903 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696454 2578 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 24 16:45:16.697903 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696458 2578 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 24 16:45:16.697903 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696462 2578 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 24 16:45:16.697903 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696465 2578 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 24 16:45:16.697903 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696468 2578 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 24 16:45:16.697903 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696471 2578 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 24 16:45:16.697903 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696475 2578 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 24 16:45:16.697903 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696477 2578 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 24 16:45:16.697903 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696480 2578 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 24 16:45:16.697903 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696483 2578 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 24 16:45:16.697903 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696486 2578 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 24 16:45:16.697903 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696489 2578 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 24 16:45:16.697903 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696492 2578 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 24 16:45:16.697903 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696495 2578 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 24 16:45:16.697903 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696498 2578 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 24 16:45:16.698649 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696501 2578 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 24 16:45:16.698649 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696504 2578 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 24 16:45:16.698649 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696507 2578 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 24 16:45:16.698649 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696509 2578 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 24 16:45:16.698649 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696513 2578 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 24 16:45:16.698649 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696516 2578 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 24 16:45:16.698649 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696875 2578 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 24 16:45:16.698649 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696881 2578 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 24 16:45:16.698649 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696884 2578 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 24 16:45:16.698649 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696887 2578 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 24 16:45:16.698649 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696889 2578 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 24 16:45:16.698649 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696892 2578 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 24 16:45:16.698649 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696895 2578 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 24 16:45:16.698649 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696898 2578 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 24 16:45:16.698649 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696901 2578 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 24 16:45:16.698649 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696903 2578 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 24 16:45:16.698649 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696905 2578 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 24 16:45:16.698649 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696908 2578 feature_gate.go:328] unrecognized feature gate: Example2
Apr 24 16:45:16.698649 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696911 2578 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 24 16:45:16.698649 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696913 2578 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 24 16:45:16.699265 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696916 2578 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 24 16:45:16.699265 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696918 2578 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 24 16:45:16.699265 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696921 2578 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 24 16:45:16.699265 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696923 2578 feature_gate.go:328] unrecognized feature gate: Example
Apr 24 16:45:16.699265 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696926 2578 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 24 16:45:16.699265 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696928 2578 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 24 16:45:16.699265 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696931 2578 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 24 16:45:16.699265 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696933 2578 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 24 16:45:16.699265 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696936 2578 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 24 16:45:16.699265 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696939 2578 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 24 16:45:16.699265 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696942 2578 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 24 16:45:16.699265 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696944 2578 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 24 16:45:16.699265 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696947 2578 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 24 16:45:16.699265 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696949 2578 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 24 16:45:16.699265 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696952 2578 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 24 16:45:16.699265 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696954 2578 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 24 16:45:16.699265 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696958 2578 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 24 16:45:16.699265 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696960 2578 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 24 16:45:16.699265 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696963 2578 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 24 16:45:16.699265 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696966 2578 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 24 16:45:16.699765 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696968 2578 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 24 16:45:16.699765 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696971 2578 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 24 16:45:16.699765 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696974 2578 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 24 16:45:16.699765 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696978 2578 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 24 16:45:16.699765 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696981 2578 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 24 16:45:16.699765 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696985 2578 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 24 16:45:16.699765 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696988 2578 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 24 16:45:16.699765 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696990 2578 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 24 16:45:16.699765 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696992 2578 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 24 16:45:16.699765 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696995 2578 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 24 16:45:16.699765 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.696997 2578 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 24 16:45:16.699765 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697000 2578 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 24 16:45:16.699765 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697004 2578 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 24 16:45:16.699765 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697008 2578 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 24 16:45:16.699765 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697011 2578 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 24 16:45:16.699765 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697013 2578 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 24 16:45:16.699765 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697016 2578 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 24 16:45:16.699765 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697019 2578 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 24 16:45:16.699765 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697021 2578 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 24 16:45:16.700278 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697024 2578 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 24 16:45:16.700278 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697027 2578 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 24 16:45:16.700278 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697030 2578 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 24 16:45:16.700278 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697032 2578 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 24 16:45:16.700278 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697035 2578 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 24 16:45:16.700278 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697037 2578 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 24 16:45:16.700278 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697040 2578 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 24 16:45:16.700278 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697042 2578 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 24 16:45:16.700278 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697045 2578 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 24 16:45:16.700278 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697047 2578 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 24 16:45:16.700278 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697050 2578 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 24 16:45:16.700278 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697053 2578 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 24 16:45:16.700278 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697056 2578 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 24 16:45:16.700278 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697059 2578 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 24 16:45:16.700278 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697061 2578 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 24 16:45:16.700278 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697064 2578 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 24 16:45:16.700278 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697066 2578 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 24 16:45:16.700278 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697069 2578 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 24 16:45:16.700278 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697071 2578 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 24 16:45:16.700278 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697074 2578 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 24 16:45:16.700802 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697076 2578 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 24 16:45:16.700802 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697079 2578 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 24 16:45:16.700802 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697081 2578 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 24 16:45:16.700802 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697084 2578 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 24 16:45:16.700802 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697087 2578 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 24 16:45:16.700802 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697090 2578 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 24 16:45:16.700802 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697093 2578 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 24 16:45:16.700802 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697095 2578 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 24 16:45:16.700802 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697098 2578 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 24 16:45:16.700802 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697101 2578 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 24 16:45:16.700802 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697103 2578 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 24 16:45:16.700802 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697106 2578 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 24 16:45:16.700802 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.697108 2578 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 24 16:45:16.700802 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697178 2578 flags.go:64] FLAG: --address="0.0.0.0"
Apr 24 16:45:16.700802 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697184 2578 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Apr 24 16:45:16.700802 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697190 2578 flags.go:64] FLAG: --anonymous-auth="true"
Apr 24 16:45:16.700802 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697195 2578 flags.go:64] FLAG: --application-metrics-count-limit="100"
Apr 24 16:45:16.700802 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697199 2578 flags.go:64] FLAG: --authentication-token-webhook="false"
Apr 24 16:45:16.700802 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697202 2578 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Apr 24 16:45:16.700802 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697206 2578 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Apr 24 16:45:16.700802 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697214 2578 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Apr 24 16:45:16.701325 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697218 2578 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Apr 24 16:45:16.701325 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697221 2578 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Apr 24 16:45:16.701325 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697224 2578 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Apr 24 16:45:16.701325 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697228 2578 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Apr 24 16:45:16.701325 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697231 2578 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Apr 24 16:45:16.701325 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697233 2578 flags.go:64] FLAG: --cgroup-root=""
Apr 24 16:45:16.701325 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697236 2578 flags.go:64] FLAG: --cgroups-per-qos="true"
Apr 24 16:45:16.701325 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697239 2578 flags.go:64] FLAG: --client-ca-file=""
Apr 24 16:45:16.701325 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697242 2578 flags.go:64] FLAG: --cloud-config=""
Apr 24 16:45:16.701325 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697245 2578 flags.go:64] FLAG: --cloud-provider="external"
Apr 24 16:45:16.701325 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697248 2578 flags.go:64] FLAG: --cluster-dns="[]"
Apr 24 16:45:16.701325 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697251 2578 flags.go:64] FLAG: --cluster-domain=""
Apr 24 16:45:16.701325 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697254 2578 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Apr 24 16:45:16.701325 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697257 2578 flags.go:64] FLAG: --config-dir=""
Apr 24 16:45:16.701325 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697260 2578 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Apr 24 16:45:16.701325 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697263 2578 flags.go:64] FLAG: --container-log-max-files="5"
Apr 24 16:45:16.701325 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697267 2578 flags.go:64] FLAG: --container-log-max-size="10Mi"
Apr 24 16:45:16.701325 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697270 2578 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Apr 24 16:45:16.701325 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697273 2578 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Apr 24 16:45:16.701325 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697276 2578 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Apr 24 16:45:16.701325 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697279 2578 flags.go:64] FLAG: --contention-profiling="false"
Apr 24 16:45:16.701325 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697282 2578 flags.go:64] FLAG: --cpu-cfs-quota="true"
Apr 24 16:45:16.701325 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697285 2578 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Apr 24 16:45:16.701325 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697288 2578 flags.go:64] FLAG: --cpu-manager-policy="none"
Apr 24 16:45:16.701325 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697294 2578 flags.go:64] FLAG: --cpu-manager-policy-options=""
Apr 24 16:45:16.701943 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697298 2578 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Apr 24 16:45:16.701943 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697301 2578 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Apr 24 16:45:16.701943 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697304 2578 flags.go:64] FLAG: --enable-debugging-handlers="true"
Apr 24 16:45:16.701943 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697307 2578 flags.go:64] FLAG: --enable-load-reader="false"
Apr 24 16:45:16.701943 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697310 2578 flags.go:64] FLAG: --enable-server="true"
Apr 24 16:45:16.701943 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697313 2578 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Apr 24 16:45:16.701943 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697317 2578 flags.go:64] FLAG: --event-burst="100"
Apr 24 16:45:16.701943 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697320 2578 flags.go:64] FLAG: --event-qps="50"
Apr 24 16:45:16.701943 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697323 2578 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Apr 24 16:45:16.701943 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697326 2578 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Apr 24 16:45:16.701943 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697336 2578 flags.go:64] FLAG: --eviction-hard=""
Apr 24 16:45:16.701943 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697344 2578 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Apr 24 16:45:16.701943 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697347 2578 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Apr 24 16:45:16.701943 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697349 2578 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Apr 24 16:45:16.701943 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697353 2578 flags.go:64] FLAG: --eviction-soft=""
Apr 24 16:45:16.701943 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697356 2578 flags.go:64] FLAG: --eviction-soft-grace-period=""
Apr 24 16:45:16.701943 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697359 2578 flags.go:64] FLAG: --exit-on-lock-contention="false"
Apr 24 16:45:16.701943 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697361 2578 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Apr 24 16:45:16.701943 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697365 2578 flags.go:64] FLAG: --experimental-mounter-path=""
Apr 24 16:45:16.701943 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697368 2578 flags.go:64] FLAG: --fail-cgroupv1="false"
Apr 24 16:45:16.701943 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697371 2578 flags.go:64] FLAG: --fail-swap-on="true"
Apr 24 16:45:16.701943 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697373 2578 flags.go:64] FLAG: --feature-gates=""
Apr 24 16:45:16.701943 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697377 2578 flags.go:64] FLAG: --file-check-frequency="20s"
Apr 24 16:45:16.701943 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697380 2578 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Apr 24 16:45:16.701943 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697383 2578 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Apr 24 16:45:16.702588 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697386 2578 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Apr 24 16:45:16.702588 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697389 2578 flags.go:64] FLAG: --healthz-port="10248"
Apr 24 16:45:16.702588 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697392 2578 flags.go:64] FLAG: --help="false"
Apr 24 16:45:16.702588 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697395 2578 flags.go:64] FLAG: --hostname-override="ip-10-0-129-204.ec2.internal"
Apr 24 16:45:16.702588 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697397 2578 flags.go:64] FLAG: --housekeeping-interval="10s"
Apr 24 16:45:16.702588 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697400 2578 flags.go:64] FLAG: --http-check-frequency="20s"
Apr 24 16:45:16.702588 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697404 2578 flags.go:64] FLAG: --image-credential-provider-bin-dir="/usr/libexec/kubelet-image-credential-provider-plugins"
Apr 24 16:45:16.702588 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697407 2578 flags.go:64] FLAG: --image-credential-provider-config="/etc/kubernetes/credential-providers/ecr-credential-provider.yaml"
Apr 24 16:45:16.702588 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697410 2578 flags.go:64] FLAG: --image-gc-high-threshold="85"
Apr 24 16:45:16.702588 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697413 2578 flags.go:64] FLAG: --image-gc-low-threshold="80"
Apr 24 16:45:16.702588 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697416 2578 flags.go:64] FLAG: --image-service-endpoint=""
Apr 24 16:45:16.702588 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697418 2578 flags.go:64] FLAG: --kernel-memcg-notification="false"
Apr 24 16:45:16.702588 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697421 2578 flags.go:64] FLAG: --kube-api-burst="100"
Apr 24 16:45:16.702588 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697424 2578 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Apr 24 16:45:16.702588 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697427 2578 flags.go:64] FLAG: --kube-api-qps="50"
Apr 24 16:45:16.702588 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697430 2578 flags.go:64] FLAG: --kube-reserved=""
Apr 24 16:45:16.702588 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697433 2578 flags.go:64] FLAG: --kube-reserved-cgroup=""
Apr 24 16:45:16.702588 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697436 2578 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Apr 24 16:45:16.702588 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697439 2578 flags.go:64] FLAG: --kubelet-cgroups=""
Apr 24 16:45:16.702588 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697442 2578 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Apr 24 16:45:16.702588 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697444 2578 flags.go:64] FLAG: --lock-file=""
Apr 24 16:45:16.702588 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697448 2578 flags.go:64] FLAG: --log-cadvisor-usage="false"
Apr 24 16:45:16.702588 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697451 2578 flags.go:64] FLAG: --log-flush-frequency="5s"
Apr 24 16:45:16.702588 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697454 2578 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Apr 24 16:45:16.703209 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697459 2578 flags.go:64] FLAG: --log-json-split-stream="false"
Apr 24 16:45:16.703209 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697462 2578 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Apr 24 16:45:16.703209 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697464 2578 flags.go:64] FLAG: --log-text-split-stream="false"
Apr 24 16:45:16.703209 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697467 2578 flags.go:64] FLAG: --logging-format="text"
Apr 24 16:45:16.703209 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697470 2578 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Apr 24 16:45:16.703209 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697473 2578 flags.go:64] FLAG: --make-iptables-util-chains="true"
Apr 24 16:45:16.703209 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697476 2578 flags.go:64] FLAG: --manifest-url=""
Apr 24 16:45:16.703209 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697478 2578 flags.go:64] FLAG: --manifest-url-header=""
Apr 24 16:45:16.703209 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697482 2578 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Apr 24 16:45:16.703209 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697485 2578 flags.go:64] FLAG: --max-open-files="1000000"
Apr 24 16:45:16.703209 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697489 2578 flags.go:64] FLAG: --max-pods="110"
Apr 24 16:45:16.703209 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697492 2578 flags.go:64] FLAG: --maximum-dead-containers="-1"
Apr 24 16:45:16.703209 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697495 2578 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Apr 24 16:45:16.703209 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697498 2578 flags.go:64] FLAG: --memory-manager-policy="None"
Apr 24 16:45:16.703209 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697501 2578 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Apr 24 16:45:16.703209 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697504 2578 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Apr 24 16:45:16.703209 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697507 2578 flags.go:64] FLAG: --node-ip="0.0.0.0"
Apr 24 16:45:16.703209 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697510 2578 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhel"
Apr 24 16:45:16.703209 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697516 2578 flags.go:64] FLAG: --node-status-max-images="50"
Apr 24 16:45:16.703209 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697519 2578 flags.go:64] FLAG: --node-status-update-frequency="10s"
Apr 24 16:45:16.703209 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697522 2578 flags.go:64] FLAG: --oom-score-adj="-999"
Apr 24 16:45:16.703209 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697525 2578 flags.go:64] FLAG: --pod-cidr=""
Apr 24 16:45:16.703209 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697528 2578 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8cfe89231412ff3ee8cb6207fa0be33cad0f08e88c9c0f1e9f7e8c6f14d6715"
Apr 24 16:45:16.703799 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697773 2578 flags.go:64] FLAG: --pod-manifest-path=""
Apr 24 16:45:16.703799 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697783 2578 flags.go:64] FLAG: --pod-max-pids="-1"
Apr 24 16:45:16.703799 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697792 2578 flags.go:64] FLAG: --pods-per-core="0"
Apr 24 16:45:16.703799 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697801 2578 flags.go:64] FLAG: --port="10250"
Apr 24 16:45:16.703799 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697826 2578 flags.go:64] FLAG: --protect-kernel-defaults="false"
Apr 24 16:45:16.703799 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697836 2578 flags.go:64] FLAG: --provider-id="aws:///us-east-1a/i-029643dfae6461540"
Apr 24 16:45:16.703799 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697853 2578 flags.go:64] FLAG: --qos-reserved=""
Apr 24 16:45:16.703799 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697863 2578 flags.go:64] FLAG: --read-only-port="10255"
Apr 24 16:45:16.703799 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697878 2578 flags.go:64] FLAG: --register-node="true"
Apr 24 16:45:16.703799 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697886 2578 flags.go:64] FLAG: --register-schedulable="true"
Apr 24 16:45:16.703799 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697894 2578 flags.go:64] FLAG: --register-with-taints=""
Apr 24 16:45:16.703799 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697903 2578 flags.go:64] FLAG: --registry-burst="10"
Apr 24 16:45:16.703799 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697914 2578 flags.go:64] FLAG: --registry-qps="5"
Apr 24 16:45:16.703799 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697922 2578 flags.go:64] FLAG: --reserved-cpus=""
Apr 24 16:45:16.703799 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697931 2578 flags.go:64] FLAG: --reserved-memory=""
Apr 24 16:45:16.703799 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697940 2578 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Apr 24 16:45:16.703799 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697948 2578 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Apr 24 16:45:16.703799 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697962 2578 flags.go:64] FLAG: --rotate-certificates="false"
Apr 24 16:45:16.703799 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697970 2578 flags.go:64] FLAG: --rotate-server-certificates="false"
Apr 24 16:45:16.703799 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697978 2578 flags.go:64] FLAG: --runonce="false"
Apr 24 16:45:16.703799 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697986 2578 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Apr 24 16:45:16.703799 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.697994 2578 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Apr 24 16:45:16.703799 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.698003 2578 flags.go:64] FLAG: --seccomp-default="false"
Apr 24 16:45:16.703799 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.698011 2578 flags.go:64] FLAG: --serialize-image-pulls="true"
Apr 24 16:45:16.703799 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.698019 2578 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Apr 24 16:45:16.703799 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.698032 2578 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Apr 24 16:45:16.704454 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.698041 2578 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Apr 24 16:45:16.704454 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.698049 2578 flags.go:64] FLAG: --storage-driver-password="root"
Apr 24 16:45:16.704454 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.698056 2578 flags.go:64] FLAG: --storage-driver-secure="false"
Apr 24 16:45:16.704454 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.698065 2578 flags.go:64] FLAG: --storage-driver-table="stats"
Apr 24 16:45:16.704454 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.698073 2578 flags.go:64] FLAG: --storage-driver-user="root"
Apr 24 16:45:16.704454 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.698080 2578 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Apr 24 16:45:16.704454 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.698088 2578 flags.go:64] FLAG: --sync-frequency="1m0s"
Apr 24 16:45:16.704454 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.698096 2578 flags.go:64] FLAG: --system-cgroups=""
Apr 24 16:45:16.704454 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.698109 2578 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Apr 24 16:45:16.704454 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.698126 2578 flags.go:64] FLAG: --system-reserved-cgroup=""
Apr 24 16:45:16.704454 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.698134 2578 flags.go:64] FLAG: --tls-cert-file=""
Apr 24 16:45:16.704454 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.698143 2578 flags.go:64] FLAG: --tls-cipher-suites="[]"
Apr 24 16:45:16.704454 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.698154 2578 flags.go:64] FLAG: --tls-min-version=""
Apr 24 16:45:16.704454 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.698162 2578 flags.go:64] FLAG: --tls-private-key-file=""
Apr 24 16:45:16.704454 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.698177 2578 flags.go:64] FLAG: --topology-manager-policy="none"
Apr 24 16:45:16.704454 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.698185 2578 flags.go:64] FLAG: --topology-manager-policy-options=""
Apr 24 16:45:16.704454 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.698192 2578 flags.go:64] FLAG: --topology-manager-scope="container"
Apr 24 16:45:16.704454 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.698197 2578 flags.go:64] FLAG: --v="2"
Apr 24 16:45:16.704454 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.698204 2578 flags.go:64] FLAG: --version="false"
Apr 24 16:45:16.704454 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.698210 2578 flags.go:64] FLAG: --vmodule=""
Apr 24 16:45:16.704454 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.698217 2578 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Apr 24 16:45:16.704454 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.698222 2578 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Apr 24 16:45:16.704454 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698528 2578 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 24 16:45:16.704454 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698537 2578 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 24 16:45:16.705153 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698541 2578 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 24 16:45:16.705153 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698546 2578 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 24 16:45:16.705153 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698551 2578 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 24 16:45:16.705153 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698556 2578 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 24 16:45:16.705153 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698561 2578 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 24 16:45:16.705153 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698565 2578 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 24 16:45:16.705153 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698569 2578 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 24 16:45:16.705153 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698574 2578 feature_gate.go:328] unrecognized feature gate: Example
Apr 24 16:45:16.705153 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698583 2578 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 24 16:45:16.705153 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698587 2578 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 24 16:45:16.705153 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698591 2578 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 24 16:45:16.705153 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698595 2578 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 24 16:45:16.705153 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698600 2578 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 24 16:45:16.705153 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698604 2578 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 24 16:45:16.705153 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698608 2578 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 24 16:45:16.705153 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698612 2578 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 24 16:45:16.705153 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698616 2578 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 24 16:45:16.705153 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698620 2578 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 24 16:45:16.705153 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698624 2578 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 24 16:45:16.705153 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698628 2578 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 24 16:45:16.705653 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698633 2578 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 24 16:45:16.705653 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698642 2578 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 24 16:45:16.705653 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698646 2578 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 24 16:45:16.705653 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698650 2578 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 24 16:45:16.705653 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698655 2578 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 24 16:45:16.705653 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698659 2578 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 24 16:45:16.705653 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698663 2578 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 24 16:45:16.705653 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698667 2578 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 24 16:45:16.705653 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698672 2578 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 24 16:45:16.705653 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698676 2578 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 24 16:45:16.705653 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698680 2578 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 24 16:45:16.705653 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698684 2578 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 24 16:45:16.705653 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698688 2578 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 24 16:45:16.705653 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698693 2578 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 24 16:45:16.705653 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698704 2578 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 24 16:45:16.705653 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698709 2578 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 24 16:45:16.705653 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698713 2578 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 24 16:45:16.705653 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698717 2578 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 24 16:45:16.705653 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698721 2578 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 24 16:45:16.705653 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698725 2578 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 24 16:45:16.706168 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698729 2578 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 24 16:45:16.706168 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698733 2578 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 24 16:45:16.706168 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698737 2578 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 24 16:45:16.706168 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698741 2578 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 24 16:45:16.706168 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698745 2578 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 24 16:45:16.706168 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698749 2578 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 24 16:45:16.706168 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698758 2578 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 24 16:45:16.706168 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698762 2578 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 24 16:45:16.706168 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698766 2578 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 24 16:45:16.706168 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698770 2578 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 24 16:45:16.706168 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698774 2578 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 24 16:45:16.706168 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698778 2578 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 24 16:45:16.706168 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698782 2578 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 24 16:45:16.706168 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698787 2578 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 24 16:45:16.706168 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698791 2578 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 24 16:45:16.706168 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698795 2578 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 24 16:45:16.706168 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698799 2578 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 24 16:45:16.706168 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698819 2578 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 24 16:45:16.706168 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.698824 2578 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 24 16:45:16.706168 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.699099 2578 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 24 16:45:16.706667 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.699213 2578 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 24 16:45:16.706667 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.699220 2578 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 24 16:45:16.706667 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.699225 2578 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 24 16:45:16.706667 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.699230 2578 feature_gate.go:328] unrecognized feature gate: Example2
Apr 24 16:45:16.706667 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.699235 2578 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 24 16:45:16.706667 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.699239 2578 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 24 16:45:16.706667 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.699244 2578 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 24 16:45:16.706667 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.699248 2578 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 24 16:45:16.706667 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.699252 2578 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 24 16:45:16.706667 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.699256 2578 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 24 16:45:16.706667 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.699260 2578 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 24 16:45:16.706667 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.699264 2578 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 24 16:45:16.706667 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.699268 2578 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 24 16:45:16.706667 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.699272 2578 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 24 16:45:16.706667 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.699280 2578 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 24 16:45:16.706667 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.699287 2578 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 24 16:45:16.706667 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.699291 2578 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 24 16:45:16.706667 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.699296 2578 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 24 16:45:16.706667 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.699300 2578 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 24 16:45:16.707174 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.699304 2578 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 24 16:45:16.707174 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.699308 2578 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 24 16:45:16.707174 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.699312 2578 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 24 16:45:16.707174 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.699316 2578 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 24 16:45:16.707174 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.699320 2578 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 24 16:45:16.707174 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.699331 2578 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Apr 24 16:45:16.707174 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.705975 2578
Apr 24 16:45:16.707174 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.705975 2578 server.go:530] "Kubelet version" kubeletVersion="v1.33.9"
Apr 24 16:45:16.707174 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.705988 2578 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 24 16:45:16.709111 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:16.706216 2578 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 24 16:45:16.711490 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.707430 2578 server.go:962] "Client rotation is on, will bootstrap in background"
Apr 24 16:45:16.711928 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.709262 2578 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Apr 24 16:45:16.711928 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.710116 2578 server.go:1019] "Starting client certificate rotation"
Apr 24 16:45:16.711928 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.710212 2578 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet"
Apr 24 16:45:16.711928 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.710254 2578 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Apr 24 16:45:16.732940 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.732923 2578 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Apr 24 16:45:16.736671 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.736657 2578 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Apr 24 16:45:16.748890 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.748874 2578 log.go:25] "Validated CRI v1 runtime API"
Apr 24 16:45:16.754343 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.754328 2578 log.go:25] "Validated CRI v1 image API"
Apr 24 16:45:16.755636 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.755605 2578 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 24 16:45:16.760610 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.760590 2578 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/nvme0n1p2 c0d45b93-9160-47a8-b540-9ed6773a37ff:/dev/nvme0n1p3 dca749c1-4d69-4fbb-9aac-f60e1908cba0:/dev/nvme0n1p4]
Apr 24 16:45:16.760676 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.760610 2578 fs.go:136] Filesystem partitions: map[/dev/nvme0n1p3:{mountpoint:/boot major:259 minor:3 fsType:ext4 blockSize:0} /dev/nvme0n1p4:{mountpoint:/var major:259 minor:4 fsType:xfs blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}]
Apr 24 16:45:16.766993 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.766874 2578 manager.go:217] Machine: {Timestamp:2026-04-24 16:45:16.764788358 +0000 UTC m=+0.361154638 CPUVendorID:GenuineIntel NumCores:8 NumPhysicalCores:4 NumSockets:1 CpuFrequency:3097406 MemoryCapacity:33164496896 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ec2d6ff5058ee8c433d5e0c61ffc6496 SystemUUID:ec2d6ff5-058e-e8c4-33d5-e0c61ffc6496 BootID:6a7437fa-7e6d-4171-97a6-2bc5199af8c8 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16582246400 Type:vfs Inodes:4048400 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6632902656 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/nvme0n1p4 DeviceMajor:259 DeviceMinor:4 Capacity:128243970048 Type:vfs Inodes:62651840 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6103040 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16582250496 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/nvme0n1p3 DeviceMajor:259 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[259:0:{Name:nvme0n1 Major:259 Minor:0 Size:128849018880 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:02:b7:e4:01:31:d3 Speed:0 Mtu:9001} {Name:ens5 MacAddress:02:b7:e4:01:31:d3 Speed:0 Mtu:9001} {Name:ovs-system MacAddress:26:94:b9:0c:33:f2 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33164496896 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 4] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:1 Threads:[1 5] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:2 Threads:[2 6] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:3 Threads:[3 7] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:}] Caches:[{Id:0 Size:37486592 Type:Unified Level:3}] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Apr 24 16:45:16.767624 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.767613 2578 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Apr 24 16:45:16.767696 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.767685 2578 manager.go:233] Version: {KernelVersion:5.14.0-570.107.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20260414-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Apr 24 16:45:16.768785 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.768759 2578 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 24 16:45:16.768924 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.768787 2578 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-10-0-129-204.ec2.internal","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 24 16:45:16.768968 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.768933 2578 topology_manager.go:138] "Creating topology manager with none policy"
Apr 24 16:45:16.768968 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.768941 2578 container_manager_linux.go:306] "Creating device plugin manager"
Apr 24 16:45:16.768968 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.768954 2578 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Apr 24 16:45:16.770390 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.770379 2578 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Apr 24 16:45:16.771552 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.771542 2578 state_mem.go:36] "Initialized new in-memory state store"
Apr 24 16:45:16.771649 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.771640 2578 server.go:1267] "Using root directory" path="/var/lib/kubelet"
Apr 24 16:45:16.774631 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.774621 2578 kubelet.go:491] "Attempting to sync node with API server"
Apr 24 16:45:16.774671 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.774637 2578 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 24 16:45:16.774671 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.774651 2578 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Apr 24 16:45:16.774671 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.774660 2578 kubelet.go:397] "Adding apiserver pod source"
Apr 24 16:45:16.774671 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.774668 2578 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 24 16:45:16.775648 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.775633 2578 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Apr 24 16:45:16.775689 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.775661 2578 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Apr 24 16:45:16.778386 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.778371 2578 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.10-2.rhaos4.20.gita4d0894.el9" apiVersion="v1"
Apr 24 16:45:16.779959 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.779946 2578 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 24 16:45:16.780645 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.780634 2578 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Apr 24 16:45:16.780686 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.780650 2578 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Apr 24 16:45:16.780686 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.780656 2578 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Apr 24 16:45:16.780686 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.780662 2578 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Apr 24 16:45:16.780686 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.780668 2578 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Apr 24 16:45:16.780686 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.780674 2578 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Apr 24 16:45:16.780686 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.780679 2578 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Apr 24 16:45:16.780686 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.780685 2578 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Apr 24 16:45:16.780887 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.780691 2578 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Apr 24 16:45:16.780887 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.780698 2578 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Apr 24 16:45:16.780887 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.780711 2578 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Apr 24 16:45:16.781091 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.781082 2578 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Apr 24 16:45:16.781869 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.781860 2578 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Apr 24 16:45:16.781869 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.781868 2578 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Apr 24 16:45:16.785165 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.785153 2578 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 24 16:45:16.785212 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.785186 2578 server.go:1295] "Started kubelet"
Apr 24 16:45:16.785298 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.785270 2578 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 24 16:45:16.785345 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.785282 2578 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 24 16:45:16.785390 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.785374 2578 server_v1.go:47] "podresources" method="list" useActivePods=true
Apr 24 16:45:16.785871 ip-10-0-129-204 systemd[1]: Started Kubernetes Kubelet.
Apr 24 16:45:16.786621 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.786604 2578 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 24 16:45:16.786977 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.786965 2578 server.go:317] "Adding debug handlers to kubelet server"
Apr 24 16:45:16.793566 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:16.793548 2578 kubelet.go:1618] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Apr 24 16:45:16.794834 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.794798 2578 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving"
Apr 24 16:45:16.795363 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.795350 2578 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 24 16:45:16.796078 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.796061 2578 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Apr 24 16:45:16.796078 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.796077 2578 factory.go:55] Registering systemd factory
Apr 24 16:45:16.796201 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.796091 2578 factory.go:223] Registration of the systemd container factory successfully
Apr 24 16:45:16.796201 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.796111 2578 volume_manager.go:295] "The desired_state_of_world populator starts"
Apr 24 16:45:16.796201 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.796128 2578 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 24 16:45:16.796201 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.796114 2578 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 24 16:45:16.796201 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.796178 2578 reconstruct.go:97] "Volume reconstruction finished"
Apr 24 16:45:16.796201 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.796185 2578 reconciler.go:26] "Reconciler: start to sync state"
Apr 24 16:45:16.796422 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:16.796291 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:16.796422 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.796297 2578 factory.go:153] Registering CRI-O factory
Apr 24 16:45:16.796422 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.796317 2578 factory.go:223] Registration of the crio container factory successfully
Apr 24 16:45:16.796422 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.796353 2578 factory.go:103] Registering Raw factory
Apr 24 16:45:16.796422 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.796369 2578 manager.go:1196] Started watching for new ooms in manager
Apr 24 16:45:16.796745 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.796735 2578 manager.go:319] Starting recovery of all containers
Apr 24 16:45:16.808312 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.808220 2578 manager.go:324] Recovery completed
Apr 24 16:45:16.812187 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.812176 2578 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 24 16:45:16.897102 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:16.897080 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:16.914566 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.914531 2578 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 24 16:45:16.915713 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.915692 2578 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 24 16:45:16.915713 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.915715 2578 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 24 16:45:16.915880 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.915730 2578 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 24 16:45:16.915880 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:16.915737 2578 kubelet.go:2451] "Starting kubelet main sync loop"
Apr 24 16:45:16.915880 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:16.915764 2578 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 24 16:45:16.997440 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:16.997397 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:17.016562 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:17.016538 2578 kubelet.go:2475] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 24 16:45:17.097929 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:17.097910 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:17.198318 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:17.198293 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:17.217457 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:17.217435 2578 kubelet.go:2475] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 24 16:45:17.299129 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:17.299074 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:17.399445 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:17.399421 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:17.499844 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:17.499818 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:17.600303 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:17.600260 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:17.618436 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:17.618414 2578 kubelet.go:2475] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 24 16:45:17.700768 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:17.700744 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:17.801676 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:17.801652 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:17.902110 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:17.902058 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:18.002306 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:18.002283 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:18.102659 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:18.102632 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:18.203133 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:18.203083 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:18.303696 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:18.303665 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:18.404114 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:18.404090 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:18.419251 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:18.419234 2578 kubelet.go:2475] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 24 16:45:18.504603 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:18.504561 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:18.605023 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:18.604996 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:18.705382 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:18.705357 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:18.806186 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:18.806137 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:18.906511 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:18.906491 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:19.006580 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:19.006554 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:19.107043 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:19.107002 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:19.207388 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:19.207359 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:19.308114 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:19.308086 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:19.408491 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:19.408446 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:19.508862 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:19.508840 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:19.609281 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:19.609261 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:19.709657 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:19.709615 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:19.810383 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:19.810361 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:19.910722 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:19.910702 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:20.010828 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:20.010771 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:20.019932 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:20.019914 2578 kubelet.go:2475] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 24 16:45:20.111334 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:20.111316 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:20.211703 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:20.211676 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:20.312370 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:20.312323 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:20.412685 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:20.412662 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:20.513083 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:20.513059 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:20.613446 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:20.613404 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:20.713914 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:20.713891 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:20.814550 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:20.814522 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:20.915049 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:20.914992 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:21.015189 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:21.015161 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:21.115536 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:21.115511 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:21.215920 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:21.215872 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:21.316491 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:21.316467 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:21.416864 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:21.416842 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:21.517286 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:21.517248 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:21.617600 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:21.617581 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:21.718252 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:21.718232 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:21.818504 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:21.818457 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:21.918780 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:21.918757 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:22.019844 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:22.019803 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:22.120265 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:22.120211 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:22.220571 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:22.220543 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:22.321241 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:22.321219 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:22.421592 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:22.421550 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:22.521978 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:22.521958 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:22.622323 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:22.622300 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:22.722870 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:22.722797 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:22.823493 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:22.823470 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:22.923604 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:22.923581 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:23.024069 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:23.024010 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:23.124367 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:23.124342 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:23.220665 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:23.220641 2578 kubelet.go:2475] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 24 16:45:23.224791 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:23.224776 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:23.325384 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:23.325334 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:23.425692 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:23.425670 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:23.526109 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:23.526087 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:23.626448 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:23.626411 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:23.726934 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:23.726909 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:23.827746 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:23.827722 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:23.928035 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:23.927979 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:24.028343 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:24.028319 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:24.128667 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:24.128646 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:24.229119 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:24.229075 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:24.329676 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:24.329652 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:24.430038 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:24.430021 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:24.530379 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:24.530341 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:24.630679 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:24.630661 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:24.731241 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:24.731224 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:24.832069 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:24.832025 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:24.932122 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:24.932099 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:25.032446 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:25.032416 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:25.132822 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:25.132762 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:25.233169 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:25.233148 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:25.333647 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:25.333622 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:25.434069 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:25.434024 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:25.534379 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:25.534353 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:25.634697 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:25.634676 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:25.735415 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:25.735372 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:25.836098 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:25.836070 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:25.936279 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:25.936258 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:26.036627 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:26.036584 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:26.137028 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:26.137008 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
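[Editor's note: the run of kubelet_node_status.go:515 errors above fires roughly every 100ms because the kubelet keeps polling its node lister until the Node API object exists. A minimal Go sketch of that poll-until-ready pattern follows; getNode is a hypothetical stand-in for the lister lookup, not the kubelet's actual code, and only the 100ms cadence is taken from the log.]

package main

import (
	"context"
	"fmt"
	"time"
)

// getNode stands in for the kubelet's node lister lookup; it fails until the
// Node object has been created in the API server.
func getNode(name string) error {
	return fmt.Errorf("node %q not found", name)
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	ticker := time.NewTicker(100 * time.Millisecond) // matches the log cadence
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			fmt.Println("gave up waiting for node")
			return
		case <-ticker.C:
			if err := getNode("ip-10-0-129-204.ec2.internal"); err != nil {
				fmt.Println("still waiting:", err) // analogous to the E-level entries above
				continue
			}
			fmt.Println("node found")
			return
		}
	}
}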
Apr 24 16:45:26.237507 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:26.237482 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:26.338069 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:26.338023 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:26.438364 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:26.438340 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:26.538681 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:26.538659 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:26.639131 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:26.639089 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:26.739677 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:26.739652 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:26.740346 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:26.740290 2578 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/apis/storage.k8s.io/v1/csinodes/ip-10-0-129-204.ec2.internal?resourceVersion=0": dial tcp: lookup ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com on 10.0.0.2:53: read udp 10.0.129.204:56153->10.0.0.2:53: i/o timeout
Apr 24 16:45:26.740444 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:26.740350 2578 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp: lookup ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com on 10.0.0.2:53: read udp 10.0.129.204:56153->10.0.0.2:53: i/o timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 24 16:45:26.740444 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:26.740375 2578 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-129-204.ec2.internal?timeout=10s\": dial tcp: lookup ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com on 10.0.0.2:53: read udp 10.0.129.204:56153->10.0.0.2:53: i/o timeout" interval="200ms"
Apr 24 16:45:26.740444 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:26.740398 2578 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp: lookup ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com on 10.0.0.2:53: read udp 10.0.129.204:56153->10.0.0.2:53: i/o timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 24 16:45:26.740444 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:26.740389 2578 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp: lookup ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com on 10.0.0.2:53: read udp 10.0.129.204:56153->10.0.0.2:53: i/o timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 24 16:45:26.740444 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:26.740348 2578 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-10-0-129-204.ec2.internal&limit=500&resourceVersion=0\": dial tcp: lookup ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com on 10.0.0.2:53: read udp 10.0.129.204:56153->10.0.0.2:53: i/o timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 24 16:45:26.740444 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:26.740418 2578 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp: lookup ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com on 10.0.0.2:53: read udp 10.0.129.204:56153->10.0.0.2:53: i/o timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 24 16:45:26.741183 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:26.740271 2578 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/default/events\": dial tcp: lookup ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com on 10.0.0.2:53: read udp 10.0.129.204:56153->10.0.0.2:53: i/o timeout" event="&Event{ObjectMeta:{ip-10-0-129-204.ec2.internal.18a958c1aa66bf31 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-129-204.ec2.internal,UID:ip-10-0-129-204.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-10-0-129-204.ec2.internal,},FirstTimestamp:2026-04-24 16:45:16.785164081 +0000 UTC m=+0.381530357,LastTimestamp:2026-04-24 16:45:16.785164081 +0000 UTC m=+0.381530357,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-129-204.ec2.internal,}"
Apr 24 16:45:26.815607 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:26.815573 2578 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-204.ec2.internal" event="NodeHasSufficientMemory"
Apr 24 16:45:26.815698 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:26.815620 2578 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-204.ec2.internal" event="NodeHasNoDiskPressure"
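[Editor's note: every API failure above is the same root cause, a DNS i/o timeout against the resolver at 10.0.0.2:53, not an API-server problem. A minimal Go sketch to reproduce the failing lookup from the node follows; the hostname is taken from the log, and the 5-second timeout is an assumption.]

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// API load-balancer name copied from the log entries above; the resolver
	// consulted is whatever /etc/resolv.conf points at (10.0.0.2 here).
	const apiHost = "ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com"

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	addrs, err := net.DefaultResolver.LookupHost(ctx, apiHost)
	if err != nil {
		fmt.Println("lookup failed (matches the kubelet errors):", err)
		return
	}
	fmt.Println("resolved to:", addrs)
}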
node" node="ip-10-0-129-204.ec2.internal" event="NodeHasSufficientPID" Apr 24 16:45:26.816246 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:26.816226 2578 cpu_manager.go:222] "Starting CPU manager" policy="none" Apr 24 16:45:26.816246 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:26.816240 2578 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Apr 24 16:45:26.816378 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:26.816262 2578 state_mem.go:36] "Initialized new in-memory state store" Apr 24 16:45:26.818395 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:26.818381 2578 policy_none.go:49] "None policy: Start" Apr 24 16:45:26.818450 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:26.818399 2578 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 24 16:45:26.818450 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:26.818410 2578 state_mem.go:35] "Initializing new in-memory state store" Apr 24 16:45:26.839769 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:26.839748 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found" Apr 24 16:45:26.854377 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:26.854352 2578 manager.go:341] "Starting Device Plugin manager" Apr 24 16:45:26.854464 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:26.854390 2578 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 24 16:45:26.854464 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:26.854403 2578 server.go:85] "Starting device plugin registration server" Apr 24 16:45:26.854706 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:26.854693 2578 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 24 16:45:26.854746 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:26.854709 2578 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 16:45:26.854881 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:26.854856 2578 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Apr 24 16:45:26.854996 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:26.854966 2578 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Apr 24 16:45:26.854996 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:26.854978 2578 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 24 16:45:26.855512 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:26.855495 2578 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="non-existent label \"crio-containers\"" Apr 24 16:45:26.855593 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:26.855534 2578 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-129-204.ec2.internal\" not found" Apr 24 16:45:26.955552 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:26.955502 2578 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 24 16:45:26.958056 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:26.958034 2578 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-204.ec2.internal" event="NodeHasSufficientMemory" Apr 24 16:45:26.958122 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:26.958061 2578 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-204.ec2.internal" event="NodeHasNoDiskPressure" Apr 24 16:45:26.958122 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:26.958078 2578 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-204.ec2.internal" event="NodeHasSufficientPID" Apr 24 16:45:26.958122 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:26.958099 2578 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-129-204.ec2.internal" Apr 24 16:45:26.964192 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:26.964171 2578 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-129-204.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Apr 24 16:45:26.968402 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:26.968382 2578 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-129-204.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-129-204.ec2.internal" Apr 24 16:45:27.169067 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:27.169043 2578 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 24 16:45:27.169877 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:27.169859 2578 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-204.ec2.internal" event="NodeHasSufficientMemory" Apr 24 16:45:27.169947 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:27.169888 2578 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-204.ec2.internal" event="NodeHasNoDiskPressure" Apr 24 16:45:27.169947 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:27.169905 2578 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-204.ec2.internal" event="NodeHasSufficientPID" Apr 24 16:45:27.169947 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:27.169925 2578 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-129-204.ec2.internal" Apr 24 16:45:27.180732 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:27.180709 2578 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-129-204.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-129-204.ec2.internal" Apr 24 16:45:27.378443 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:27.378392 2578 controller.go:145] "Failed to 
Apr 24 16:45:27.378443 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:27.378392 2578 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-129-204.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms"
Apr 24 16:45:27.581722 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:27.581702 2578 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 24 16:45:27.582475 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:27.582449 2578 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-204.ec2.internal" event="NodeHasSufficientMemory"
Apr 24 16:45:27.582540 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:27.582481 2578 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-204.ec2.internal" event="NodeHasNoDiskPressure"
Apr 24 16:45:27.582540 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:27.582495 2578 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-204.ec2.internal" event="NodeHasSufficientPID"
Apr 24 16:45:27.582540 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:27.582515 2578 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-129-204.ec2.internal"
Apr 24 16:45:27.601968 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:27.601950 2578 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-129-204.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-129-204.ec2.internal"
Apr 24 16:45:27.703983 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:27.703930 2578 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-129-204.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 24 16:45:27.749307 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:27.749289 2578 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-129-204.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 24 16:45:27.965079 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:27.965020 2578 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 24 16:45:27.975550 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:27.975527 2578 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 24 16:45:28.189066 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:28.189039 2578 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-129-204.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="1.6s"
Apr 24 16:45:28.189216 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:28.189129 2578 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 24 16:45:28.221683 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.221631 2578 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-204.ec2.internal","kube-system/kube-apiserver-proxy-ip-10-0-129-204.ec2.internal"]
Apr 24 16:45:28.221745 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.221704 2578 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 24 16:45:28.223276 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.223259 2578 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-204.ec2.internal" event="NodeHasSufficientMemory"
Apr 24 16:45:28.223365 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.223294 2578 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-204.ec2.internal" event="NodeHasNoDiskPressure"
Apr 24 16:45:28.223365 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.223305 2578 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-204.ec2.internal" event="NodeHasSufficientPID"
Apr 24 16:45:28.225680 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.225666 2578 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-204.ec2.internal" Apr 24 16:45:28.225909 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.225879 2578 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 24 16:45:28.226535 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.226513 2578 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-204.ec2.internal" event="NodeHasSufficientMemory" Apr 24 16:45:28.226638 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.226534 2578 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-204.ec2.internal" event="NodeHasSufficientMemory" Apr 24 16:45:28.226638 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.226549 2578 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-204.ec2.internal" event="NodeHasNoDiskPressure" Apr 24 16:45:28.226638 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.226562 2578 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-204.ec2.internal" event="NodeHasNoDiskPressure" Apr 24 16:45:28.226638 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.226574 2578 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-204.ec2.internal" event="NodeHasSufficientPID" Apr 24 16:45:28.226638 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.226564 2578 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-204.ec2.internal" event="NodeHasSufficientPID" Apr 24 16:45:28.228779 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.228758 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-129-204.ec2.internal" Apr 24 16:45:28.228883 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.228788 2578 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 24 16:45:28.229584 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.229565 2578 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-204.ec2.internal" event="NodeHasSufficientMemory" Apr 24 16:45:28.229692 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.229596 2578 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-204.ec2.internal" event="NodeHasNoDiskPressure" Apr 24 16:45:28.229692 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.229613 2578 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-204.ec2.internal" event="NodeHasSufficientPID" Apr 24 16:45:28.258568 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:28.258542 2578 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-129-204.ec2.internal\" not found" node="ip-10-0-129-204.ec2.internal" Apr 24 16:45:28.263388 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:28.263372 2578 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-129-204.ec2.internal\" not found" node="ip-10-0-129-204.ec2.internal" Apr 24 16:45:28.345861 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.345833 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/684f5b9427f067db6bbddee483185b81-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-129-204.ec2.internal\" (UID: 
\"684f5b9427f067db6bbddee483185b81\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-204.ec2.internal" Apr 24 16:45:28.345922 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.345867 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/684f5b9427f067db6bbddee483185b81-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-129-204.ec2.internal\" (UID: \"684f5b9427f067db6bbddee483185b81\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-204.ec2.internal" Apr 24 16:45:28.345922 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.345885 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/dd54af8f80e3b05db4203800e6cae347-config\") pod \"kube-apiserver-proxy-ip-10-0-129-204.ec2.internal\" (UID: \"dd54af8f80e3b05db4203800e6cae347\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-129-204.ec2.internal" Apr 24 16:45:28.402135 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.402117 2578 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 24 16:45:28.403081 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.403056 2578 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-204.ec2.internal" event="NodeHasSufficientMemory" Apr 24 16:45:28.403194 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.403090 2578 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-204.ec2.internal" event="NodeHasNoDiskPressure" Apr 24 16:45:28.403194 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.403104 2578 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-204.ec2.internal" event="NodeHasSufficientPID" Apr 24 16:45:28.403194 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.403127 2578 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-129-204.ec2.internal" Apr 24 16:45:28.415244 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:28.415221 2578 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-129-204.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-129-204.ec2.internal" Apr 24 16:45:28.446432 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.446406 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/684f5b9427f067db6bbddee483185b81-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-129-204.ec2.internal\" (UID: \"684f5b9427f067db6bbddee483185b81\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-204.ec2.internal" Apr 24 16:45:28.446506 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.446441 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/dd54af8f80e3b05db4203800e6cae347-config\") pod \"kube-apiserver-proxy-ip-10-0-129-204.ec2.internal\" (UID: \"dd54af8f80e3b05db4203800e6cae347\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-129-204.ec2.internal" Apr 24 16:45:28.446506 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.446458 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/684f5b9427f067db6bbddee483185b81-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-129-204.ec2.internal\" (UID: \"684f5b9427f067db6bbddee483185b81\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-204.ec2.internal" Apr 24 16:45:28.446506 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.446485 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/684f5b9427f067db6bbddee483185b81-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-129-204.ec2.internal\" (UID: \"684f5b9427f067db6bbddee483185b81\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-204.ec2.internal" Apr 24 16:45:28.446600 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.446514 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/684f5b9427f067db6bbddee483185b81-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-129-204.ec2.internal\" (UID: \"684f5b9427f067db6bbddee483185b81\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-204.ec2.internal" Apr 24 16:45:28.446600 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.446514 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/dd54af8f80e3b05db4203800e6cae347-config\") pod \"kube-apiserver-proxy-ip-10-0-129-204.ec2.internal\" (UID: \"dd54af8f80e3b05db4203800e6cae347\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-129-204.ec2.internal" Apr 24 16:45:28.562832 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.562772 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-204.ec2.internal" Apr 24 16:45:28.566458 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.566442 2578 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-129-204.ec2.internal" Apr 24 16:45:28.755383 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.755356 2578 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-129-204.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 24 16:45:28.817139 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.817078 2578 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Apr 24 16:45:28.838103 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.838076 2578 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Apr 24 16:45:28.873629 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.873608 2578 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-vhf2g" Apr 24 16:45:28.880729 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:28.880710 2578 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-vhf2g" Apr 24 16:45:29.131504 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:29.131474 2578 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd54af8f80e3b05db4203800e6cae347.slice/crio-c1c9e815ff88c21ddf1106193d239aac66bb5efdd22158df10bf8cb712a7f91d WatchSource:0}: Error finding container c1c9e815ff88c21ddf1106193d239aac66bb5efdd22158df10bf8cb712a7f91d: Status 404 returned error can't find the container with id c1c9e815ff88c21ddf1106193d239aac66bb5efdd22158df10bf8cb712a7f91d Apr 24 16:45:29.131726 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:29.131706 2578 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod684f5b9427f067db6bbddee483185b81.slice/crio-bc4409752f7538927c546eec823ccb5458e13dd843a79f56b435e676190f45df WatchSource:0}: Error finding container bc4409752f7538927c546eec823ccb5458e13dd843a79f56b435e676190f45df: Status 404 returned error can't find the container with id bc4409752f7538927c546eec823ccb5458e13dd843a79f56b435e676190f45df Apr 24 16:45:29.136070 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:29.136055 2578 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 24 16:45:29.710171 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:29.710080 2578 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Apr 24 16:45:29.758305 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:29.758284 2578 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-129-204.ec2.internal" not found Apr 24 16:45:29.775220 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:29.775197 2578 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-129-204.ec2.internal" not found Apr 24 16:45:29.795732 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:29.795708 2578 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-10-0-129-204.ec2.internal\" not found" node="ip-10-0-129-204.ec2.internal" Apr 24 16:45:29.837841 ip-10-0-129-204 kubenswrapper[2578]: I0424 
16:45:29.837821 2578 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-129-204.ec2.internal" not found Apr 24 16:45:29.883271 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:29.883229 2578 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2028-04-23 16:40:28 +0000 UTC" deadline="2027-10-26 04:44:28.902209595 +0000 UTC" Apr 24 16:45:29.883271 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:29.883271 2578 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="13187h58m59.018943039s" Apr 24 16:45:29.934217 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:29.934168 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-129-204.ec2.internal" event={"ID":"dd54af8f80e3b05db4203800e6cae347","Type":"ContainerStarted","Data":"c1c9e815ff88c21ddf1106193d239aac66bb5efdd22158df10bf8cb712a7f91d"} Apr 24 16:45:29.935371 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:29.935349 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-204.ec2.internal" event={"ID":"684f5b9427f067db6bbddee483185b81","Type":"ContainerStarted","Data":"bc4409752f7538927c546eec823ccb5458e13dd843a79f56b435e676190f45df"} Apr 24 16:45:30.015650 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.015587 2578 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 24 16:45:30.016561 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.016540 2578 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-204.ec2.internal" event="NodeHasSufficientMemory" Apr 24 16:45:30.016679 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.016577 2578 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-204.ec2.internal" event="NodeHasNoDiskPressure" Apr 24 16:45:30.016679 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.016595 2578 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-204.ec2.internal" event="NodeHasSufficientPID" Apr 24 16:45:30.016679 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.016630 2578 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-129-204.ec2.internal" Apr 24 16:45:30.026596 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.026568 2578 kubelet_node_status.go:81] "Successfully registered node" node="ip-10-0-129-204.ec2.internal" Apr 24 16:45:30.026696 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:30.026606 2578 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-129-204.ec2.internal\": node \"ip-10-0-129-204.ec2.internal\" not found" Apr 24 16:45:30.050153 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.050132 2578 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Apr 24 16:45:30.070686 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:30.070663 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found" Apr 24 16:45:30.171648 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:30.171622 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found" Apr 24 16:45:30.272321 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:30.272265 
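[Editor's note: the "system:anonymous" forbidden errors earlier last exactly until csr-vhf2g is approved and issued above, after which registration succeeds. When a node is stuck at this stage, listing CertificateSigningRequests shows whether client-certificate bootstrap is pending. A minimal sketch using client-go follows (the use of client-go and the kubeconfig path are assumptions, not taken from the log).]

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is hypothetical; point this at any admin credentials.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List all CSRs and print who requested them and their current conditions
	// (Approved/Issued, as seen for csr-vhf2g in the log above).
	csrs, err := cs.CertificatesV1().CertificateSigningRequests().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, csr := range csrs.Items {
		fmt.Printf("%s requested by %s, conditions: %v\n", csr.Name, csr.Spec.Username, csr.Status.Conditions)
	}
}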
Apr 24 16:45:30.272321 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:30.272265 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:30.372949 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:30.372918 2578 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-204.ec2.internal\" not found"
Apr 24 16:45:30.458438 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.458408 2578 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 24 16:45:30.496204 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.496182 2578 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-204.ec2.internal"
Apr 24 16:45:30.565085 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.565024 2578 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 24 16:45:30.565209 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.565143 2578 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-129-204.ec2.internal"
Apr 24 16:45:30.581096 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.581070 2578 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 24 16:45:30.609596 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.609577 2578 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 24 16:45:30.782357 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.782328 2578 apiserver.go:52] "Watching apiserver"
Apr 24 16:45:30.792074 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.792053 2578 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Apr 24 16:45:30.794179 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.792763 2578 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kube-system/konnectivity-agent-jk7f4","kube-system/kube-apiserver-proxy-ip-10-0-129-204.ec2.internal","openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9jr6j","openshift-dns/node-resolver-7v56j","openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-204.ec2.internal","openshift-monitoring/node-exporter-flc22","openshift-multus/multus-fsp54","openshift-multus/network-metrics-daemon-8jmlx","openshift-cluster-node-tuning-operator/tuned-gfrd2","openshift-image-registry/node-ca-lhmfh","openshift-multus/multus-additional-cni-plugins-nvmzv","openshift-network-diagnostics/network-check-target-rcps7","openshift-network-operator/iptables-alerter-5h96f","openshift-ovn-kubernetes/ovnkube-node-9d4p6"]
Apr 24 16:45:30.796791 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.796770 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-jk7f4"
Apr 24 16:45:30.799609 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.799589 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"konnectivity-agent\""
Apr 24 16:45:30.799700 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.799622 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"default-dockercfg-lcsbh\""
Apr 24 16:45:30.799700 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.799624 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kube-system\"/\"konnectivity-ca-bundle\""
Apr 24 16:45:30.801536 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.801518 2578 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Apr 24 16:45:30.802224 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.802203 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9jr6j"
Apr 24 16:45:30.802457 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.802339 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-7v56j"
Apr 24 16:45:30.804427 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.804409 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-sa-dockercfg-8jfrg\""
Apr 24 16:45:30.804944 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.804925 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-wwhfn\""
Apr 24 16:45:30.805062 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.805046 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\""
Apr 24 16:45:30.805192 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.805177 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-flc22"
Apr 24 16:45:30.805523 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.805505 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"openshift-service-ca.crt\""
Apr 24 16:45:30.805523 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.805519 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\""
Apr 24 16:45:30.805675 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.805569 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-metrics-serving-cert\""
Apr 24 16:45:30.805675 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.805642 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"kube-root-ca.crt\""
Apr 24 16:45:30.807684 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.807661 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"node-exporter-accelerators-collector-config\""
Apr 24 16:45:30.808365 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.808259 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-dockercfg-fvkdr\"" Apr 24 16:45:30.808365 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.808278 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"metrics-client-ca\"" Apr 24 16:45:30.808580 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.808561 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-tls\"" Apr 24 16:45:30.808851 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.808723 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"openshift-service-ca.crt\"" Apr 24 16:45:30.808851 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.808800 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-kube-rbac-proxy-config\"" Apr 24 16:45:30.808851 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.808831 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kube-root-ca.crt\"" Apr 24 16:45:30.810727 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.810708 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:45:30.810817 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:30.810788 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jmlx" podUID="dda8f1f0-9635-43d2-9f82-9831f8800481" Apr 24 16:45:30.810909 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.810896 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Apr 24 16:45:30.810956 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.810899 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Apr 24 16:45:30.811392 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.811376 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-bbbr5\"" Apr 24 16:45:30.811984 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.811967 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Apr 24 16:45:30.812059 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.812009 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Apr 24 16:45:30.813046 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.813031 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:30.815198 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.815150 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-lhmfh"
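
The "Error syncing pod, skipping ... NetworkPluginNotReady" record is expected at this stage: the network plugin pods (ovnkube-node, multus) are themselves still being started, and until one of them writes a network config the CRI reports the pod network as not ready, so pods that need a pod network are skipped and retried. A sketch of the check the message implies, using the directory named in the log (run on the node itself):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory taken from the log line above.
	const dir = "/etc/kubernetes/cni/net.d/"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config present:", e.Name())
			found = true
		}
	}
	if !found {
		// Matches the NetworkReady=false condition in the log: no config yet.
		fmt.Println("no CNI configuration file yet; network plugin not ready")
	}
}
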
Apr 24 16:45:30.815198 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.815170 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"openshift-service-ca.crt\"" Apr 24 16:45:30.815476 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.815462 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"kube-root-ca.crt\"" Apr 24 16:45:30.815674 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.815659 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"tuned-dockercfg-z9jzk\"" Apr 24 16:45:30.816210 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.816194 2578 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Apr 24 16:45:30.817298 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.817283 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Apr 24 16:45:30.817375 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.817306 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-nvmzv" Apr 24 16:45:30.817729 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.817712 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-mlsxz\"" Apr 24 16:45:30.817729 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.817727 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Apr 24 16:45:30.817858 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.817834 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Apr 24 16:45:30.819569 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.819547 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:45:30.819668 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:30.819600 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-rcps7" podUID="c8f211dc-e214-4e02-b487-47c0952e8984" Apr 24 16:45:30.819733 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.819712 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-crgbx\"" Apr 24 16:45:30.819788 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.819730 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Apr 24 16:45:30.819788 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.819722 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Apr 24 16:45:30.821758 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.821743 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5h96f" Apr 24 16:45:30.824782 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.824658 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Apr 24 16:45:30.824878 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.824858 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Apr 24 16:45:30.825001 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.824982 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-dockercfg-9h4fl\"" Apr 24 16:45:30.825140 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.825117 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Apr 24 16:45:30.825650 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.825636 2578 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:30.828173 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.828142 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Apr 24 16:45:30.828173 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.828144 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Apr 24 16:45:30.829013 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.828996 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-7wgt2\"" Apr 24 16:45:30.829142 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.829038 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Apr 24 16:45:30.829846 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.829829 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Apr 24 16:45:30.829923 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.829858 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Apr 24 16:45:30.829923 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.829891 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Apr 24 16:45:30.839483 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.839467 2578 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-9scrv" Apr 24 16:45:30.852730 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.852716 2578 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-9scrv" Apr 24 16:45:30.856169 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.856151 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/5e89d705-97ba-4bce-a2d2-d806b5547f4f-agent-certs\") pod \"konnectivity-agent-jk7f4\" (UID: \"5e89d705-97ba-4bce-a2d2-d806b5547f4f\") " pod="kube-system/konnectivity-agent-jk7f4" Apr 24 16:45:30.856309 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.856177 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/5e89d705-97ba-4bce-a2d2-d806b5547f4f-konnectivity-ca\") pod \"konnectivity-agent-jk7f4\" (UID: \"5e89d705-97ba-4bce-a2d2-d806b5547f4f\") " pod="kube-system/konnectivity-agent-jk7f4" Apr 24 16:45:30.897452 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.897433 2578 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 24 16:45:30.942225 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.942099 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-129-204.ec2.internal" event={"ID":"dd54af8f80e3b05db4203800e6cae347","Type":"ContainerStarted","Data":"00b577bdcba376bb218aa02975fee18a739660e73feb5f3236853391d0e87e3b"} Apr 24 16:45:30.956685 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.956664 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9m8p\" (UniqueName: \"kubernetes.io/projected/5573b3f6-18e3-4f90-bcb4-46fbb336eae5-kube-api-access-g9m8p\") pod \"aws-ebs-csi-driver-node-9jr6j\" (UID: \"5573b3f6-18e3-4f90-bcb4-46fbb336eae5\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9jr6j"
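
csr-9scrv is the kubelet-serving certificate requested by the earlier "Rotating certificates" step; it is approved and then issued roughly 13 ms later. A sketch of inspecting such CSRs with client-go (illustrative, not kubelet code; assumes credentials that can read certificates.k8s.io):

package main

import (
	"context"
	"fmt"

	certv1 "k8s.io/api/certificates/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	csrs, err := cs.CertificatesV1().CertificateSigningRequests().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, csr := range csrs.Items {
		if csr.Spec.SignerName != certv1.KubeletServingSignerName {
			continue // only the kubernetes.io/kubelet-serving signer used above
		}
		approved, issued := false, len(csr.Status.Certificate) > 0
		for _, c := range csr.Status.Conditions {
			if c.Type == certv1.CertificateApproved {
				approved = true
			}
		}
		// "approved, waiting to be issued" corresponds to approved && !issued.
		fmt.Printf("%s approved=%v issued=%v\n", csr.Name, approved, issued)
	}
}
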
volume \"kube-api-access-g9m8p\" (UniqueName: \"kubernetes.io/projected/5573b3f6-18e3-4f90-bcb4-46fbb336eae5-kube-api-access-g9m8p\") pod \"aws-ebs-csi-driver-node-9jr6j\" (UID: \"5573b3f6-18e3-4f90-bcb4-46fbb336eae5\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9jr6j" Apr 24 16:45:30.956762 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.956693 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/faec62ed-4955-41ae-96c6-7fa5fab7f996-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-nvmzv\" (UID: \"faec62ed-4955-41ae-96c6-7fa5fab7f996\") " pod="openshift-multus/multus-additional-cni-plugins-nvmzv" Apr 24 16:45:30.956762 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.956711 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-var-lib-openvswitch\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:30.956762 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.956730 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-run-ovn\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:30.956762 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.956752 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7c48d729-e644-4376-b836-4a516c44c4d6-hosts-file\") pod \"node-resolver-7v56j\" (UID: \"7c48d729-e644-4376-b836-4a516c44c4d6\") " pod="openshift-dns/node-resolver-7v56j" Apr 24 16:45:30.956946 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.956774 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4baee54d-0178-40b3-b0ce-ba751c0fbd26-host\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:30.956946 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.956831 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/5e89d705-97ba-4bce-a2d2-d806b5547f4f-konnectivity-ca\") pod \"konnectivity-agent-jk7f4\" (UID: \"5e89d705-97ba-4bce-a2d2-d806b5547f4f\") " pod="kube-system/konnectivity-agent-jk7f4" Apr 24 16:45:30.956946 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.956862 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-host-cni-netd\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:30.956946 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.956878 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjjl7\" (UniqueName: \"kubernetes.io/projected/af10d12d-291a-41fd-8854-3c5ffc4322a3-kube-api-access-fjjl7\") pod 
\"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:30.956946 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.956898 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4baee54d-0178-40b3-b0ce-ba751c0fbd26-tmp\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:30.956946 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.956931 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5573b3f6-18e3-4f90-bcb4-46fbb336eae5-registration-dir\") pod \"aws-ebs-csi-driver-node-9jr6j\" (UID: \"5573b3f6-18e3-4f90-bcb4-46fbb336eae5\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9jr6j" Apr 24 16:45:30.957173 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.956964 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/f25169f2-3731-4f98-a3ff-cea42487c5e1-node-exporter-textfile\") pod \"node-exporter-flc22\" (UID: \"f25169f2-3731-4f98-a3ff-cea42487c5e1\") " pod="openshift-monitoring/node-exporter-flc22" Apr 24 16:45:30.957173 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.956987 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cph2b\" (UniqueName: \"kubernetes.io/projected/f25169f2-3731-4f98-a3ff-cea42487c5e1-kube-api-access-cph2b\") pod \"node-exporter-flc22\" (UID: \"f25169f2-3731-4f98-a3ff-cea42487c5e1\") " pod="openshift-monitoring/node-exporter-flc22" Apr 24 16:45:30.957173 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957028 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/faec62ed-4955-41ae-96c6-7fa5fab7f996-os-release\") pod \"multus-additional-cni-plugins-nvmzv\" (UID: \"faec62ed-4955-41ae-96c6-7fa5fab7f996\") " pod="openshift-multus/multus-additional-cni-plugins-nvmzv" Apr 24 16:45:30.957173 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957090 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-run-systemd\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:30.957173 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957114 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dda8f1f0-9635-43d2-9f82-9831f8800481-metrics-certs\") pod \"network-metrics-daemon-8jmlx\" (UID: \"dda8f1f0-9635-43d2-9f82-9831f8800481\") " pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:45:30.957173 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957130 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/4baee54d-0178-40b3-b0ce-ba751c0fbd26-etc-sysconfig\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:30.957173 
ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957145 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/f25169f2-3731-4f98-a3ff-cea42487c5e1-node-exporter-wtmp\") pod \"node-exporter-flc22\" (UID: \"f25169f2-3731-4f98-a3ff-cea42487c5e1\") " pod="openshift-monitoring/node-exporter-flc22" Apr 24 16:45:30.957173 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957163 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/f25169f2-3731-4f98-a3ff-cea42487c5e1-root\") pod \"node-exporter-flc22\" (UID: \"f25169f2-3731-4f98-a3ff-cea42487c5e1\") " pod="openshift-monitoring/node-exporter-flc22" Apr 24 16:45:30.957455 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957208 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/5e89d705-97ba-4bce-a2d2-d806b5547f4f-agent-certs\") pod \"konnectivity-agent-jk7f4\" (UID: \"5e89d705-97ba-4bce-a2d2-d806b5547f4f\") " pod="kube-system/konnectivity-agent-jk7f4" Apr 24 16:45:30.957455 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957252 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-os-release\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:30.957455 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957279 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-hostroot\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:30.957455 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957301 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/4baee54d-0178-40b3-b0ce-ba751c0fbd26-etc-systemd\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:30.957455 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957324 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4baee54d-0178-40b3-b0ce-ba751c0fbd26-sys\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:30.957455 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957347 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4baee54d-0178-40b3-b0ce-ba751c0fbd26-lib-modules\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:30.957455 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957375 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/faec62ed-4955-41ae-96c6-7fa5fab7f996-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nvmzv\" 
(UID: \"faec62ed-4955-41ae-96c6-7fa5fab7f996\") " pod="openshift-multus/multus-additional-cni-plugins-nvmzv" Apr 24 16:45:30.957455 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957395 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-host-run-ovn-kubernetes\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:30.957455 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957408 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/af10d12d-291a-41fd-8854-3c5ffc4322a3-multus-daemon-config\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:30.957455 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957409 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/5e89d705-97ba-4bce-a2d2-d806b5547f4f-konnectivity-ca\") pod \"konnectivity-agent-jk7f4\" (UID: \"5e89d705-97ba-4bce-a2d2-d806b5547f4f\") " pod="kube-system/konnectivity-agent-jk7f4" Apr 24 16:45:30.957455 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957422 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-host-run-multus-certs\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:30.957455 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957444 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gxs5\" (UniqueName: \"kubernetes.io/projected/dda8f1f0-9635-43d2-9f82-9831f8800481-kube-api-access-4gxs5\") pod \"network-metrics-daemon-8jmlx\" (UID: \"dda8f1f0-9635-43d2-9f82-9831f8800481\") " pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:45:30.957871 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957486 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4baee54d-0178-40b3-b0ce-ba751c0fbd26-run\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:30.957871 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957521 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rm7t\" (UniqueName: \"kubernetes.io/projected/4baee54d-0178-40b3-b0ce-ba751c0fbd26-kube-api-access-5rm7t\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:30.957871 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957550 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9b309f61-8972-4f0c-b7e8-cfcea2909bf3-host\") pod \"node-ca-lhmfh\" (UID: \"9b309f61-8972-4f0c-b7e8-cfcea2909bf3\") " pod="openshift-image-registry/node-ca-lhmfh" Apr 24 16:45:30.957871 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957567 2578 swap_util.go:74] 
"error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Apr 24 16:45:30.957871 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957573 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-host-cni-bin\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:30.957871 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957596 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/047d5bff-4225-4e75-9651-615adfb54be2-ovnkube-config\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:30.957871 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957621 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-multus-socket-dir-parent\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:30.957871 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957643 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-multus-conf-dir\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:30.957871 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957664 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-etc-kubernetes\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:30.957871 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957683 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/4baee54d-0178-40b3-b0ce-ba751c0fbd26-etc-sysctl-conf\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:30.957871 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957706 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/5573b3f6-18e3-4f90-bcb4-46fbb336eae5-device-dir\") pod \"aws-ebs-csi-driver-node-9jr6j\" (UID: \"5573b3f6-18e3-4f90-bcb4-46fbb336eae5\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9jr6j" Apr 24 16:45:30.957871 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957731 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f25169f2-3731-4f98-a3ff-cea42487c5e1-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-flc22\" (UID: \"f25169f2-3731-4f98-a3ff-cea42487c5e1\") " 
pod="openshift-monitoring/node-exporter-flc22" Apr 24 16:45:30.957871 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957766 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/faec62ed-4955-41ae-96c6-7fa5fab7f996-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nvmzv\" (UID: \"faec62ed-4955-41ae-96c6-7fa5fab7f996\") " pod="openshift-multus/multus-additional-cni-plugins-nvmzv" Apr 24 16:45:30.957871 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957792 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/f25169f2-3731-4f98-a3ff-cea42487c5e1-node-exporter-tls\") pod \"node-exporter-flc22\" (UID: \"f25169f2-3731-4f98-a3ff-cea42487c5e1\") " pod="openshift-monitoring/node-exporter-flc22" Apr 24 16:45:30.957871 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957825 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7lmr\" (UniqueName: \"kubernetes.io/projected/9b309f61-8972-4f0c-b7e8-cfcea2909bf3-kube-api-access-z7lmr\") pod \"node-ca-lhmfh\" (UID: \"9b309f61-8972-4f0c-b7e8-cfcea2909bf3\") " pod="openshift-image-registry/node-ca-lhmfh" Apr 24 16:45:30.957871 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957862 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/faec62ed-4955-41ae-96c6-7fa5fab7f996-cnibin\") pod \"multus-additional-cni-plugins-nvmzv\" (UID: \"faec62ed-4955-41ae-96c6-7fa5fab7f996\") " pod="openshift-multus/multus-additional-cni-plugins-nvmzv" Apr 24 16:45:30.958429 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957887 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/5573b3f6-18e3-4f90-bcb4-46fbb336eae5-etc-selinux\") pod \"aws-ebs-csi-driver-node-9jr6j\" (UID: \"5573b3f6-18e3-4f90-bcb4-46fbb336eae5\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9jr6j" Apr 24 16:45:30.958429 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957909 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-host-run-netns\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:30.958429 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957927 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-run-openvswitch\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:30.958429 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957942 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/047d5bff-4225-4e75-9651-615adfb54be2-ovn-node-metrics-cert\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:30.958429 ip-10-0-129-204 
kubenswrapper[2578]: I0424 16:45:30.957957 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-system-cni-dir\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:30.958429 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957978 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-host-slash\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:30.958429 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.957999 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-etc-openvswitch\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:30.958429 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958013 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-log-socket\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:30.958429 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958028 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lj5c\" (UniqueName: \"kubernetes.io/projected/7c48d729-e644-4376-b836-4a516c44c4d6-kube-api-access-4lj5c\") pod \"node-resolver-7v56j\" (UID: \"7c48d729-e644-4376-b836-4a516c44c4d6\") " pod="openshift-dns/node-resolver-7v56j" Apr 24 16:45:30.958429 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958048 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-cnibin\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:30.958429 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958085 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/faec62ed-4955-41ae-96c6-7fa5fab7f996-system-cni-dir\") pod \"multus-additional-cni-plugins-nvmzv\" (UID: \"faec62ed-4955-41ae-96c6-7fa5fab7f996\") " pod="openshift-multus/multus-additional-cni-plugins-nvmzv" Apr 24 16:45:30.958429 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958124 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:30.958429 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958195 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-n7php\" (UniqueName: \"kubernetes.io/projected/047d5bff-4225-4e75-9651-615adfb54be2-kube-api-access-n7php\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:30.958429 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958222 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4baee54d-0178-40b3-b0ce-ba751c0fbd26-etc-kubernetes\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:30.958429 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958238 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m5bc\" (UniqueName: \"kubernetes.io/projected/c8f211dc-e214-4e02-b487-47c0952e8984-kube-api-access-5m5bc\") pod \"network-check-target-rcps7\" (UID: \"c8f211dc-e214-4e02-b487-47c0952e8984\") " pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:45:30.958429 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958254 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-host-run-k8s-cni-cncf-io\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:30.958993 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958270 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-host-var-lib-kubelet\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:30.958993 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958291 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5573b3f6-18e3-4f90-bcb4-46fbb336eae5-kubelet-dir\") pod \"aws-ebs-csi-driver-node-9jr6j\" (UID: \"5573b3f6-18e3-4f90-bcb4-46fbb336eae5\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9jr6j" Apr 24 16:45:30.958993 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958315 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5573b3f6-18e3-4f90-bcb4-46fbb336eae5-socket-dir\") pod \"aws-ebs-csi-driver-node-9jr6j\" (UID: \"5573b3f6-18e3-4f90-bcb4-46fbb336eae5\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9jr6j" Apr 24 16:45:30.958993 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958335 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-host-kubelet\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:30.958993 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958354 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-host-var-lib-cni-multus\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:30.958993 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958372 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/4baee54d-0178-40b3-b0ce-ba751c0fbd26-etc-modprobe-d\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:30.958993 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958385 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f25169f2-3731-4f98-a3ff-cea42487c5e1-sys\") pod \"node-exporter-flc22\" (UID: \"f25169f2-3731-4f98-a3ff-cea42487c5e1\") " pod="openshift-monitoring/node-exporter-flc22" Apr 24 16:45:30.958993 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958409 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f25169f2-3731-4f98-a3ff-cea42487c5e1-metrics-client-ca\") pod \"node-exporter-flc22\" (UID: \"f25169f2-3731-4f98-a3ff-cea42487c5e1\") " pod="openshift-monitoring/node-exporter-flc22" Apr 24 16:45:30.958993 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958425 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-multus-cni-dir\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:30.958993 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958449 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9b309f61-8972-4f0c-b7e8-cfcea2909bf3-serviceca\") pod \"node-ca-lhmfh\" (UID: \"9b309f61-8972-4f0c-b7e8-cfcea2909bf3\") " pod="openshift-image-registry/node-ca-lhmfh" Apr 24 16:45:30.958993 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958468 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7a2b69c8-6958-4cea-abaf-09cf2b9873e9-host-slash\") pod \"iptables-alerter-5h96f\" (UID: \"7a2b69c8-6958-4cea-abaf-09cf2b9873e9\") " pod="openshift-network-operator/iptables-alerter-5h96f" Apr 24 16:45:30.958993 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958486 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6x7q\" (UniqueName: \"kubernetes.io/projected/7a2b69c8-6958-4cea-abaf-09cf2b9873e9-kube-api-access-s6x7q\") pod \"iptables-alerter-5h96f\" (UID: \"7a2b69c8-6958-4cea-abaf-09cf2b9873e9\") " pod="openshift-network-operator/iptables-alerter-5h96f" Apr 24 16:45:30.958993 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958501 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7c48d729-e644-4376-b836-4a516c44c4d6-tmp-dir\") pod \"node-resolver-7v56j\" (UID: \"7c48d729-e644-4376-b836-4a516c44c4d6\") " pod="openshift-dns/node-resolver-7v56j" Apr 24 16:45:30.958993 ip-10-0-129-204 
kubenswrapper[2578]: I0424 16:45:30.958515 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-host-run-netns\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:30.958993 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958537 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4baee54d-0178-40b3-b0ce-ba751c0fbd26-var-lib-kubelet\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:30.958993 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958558 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/f25169f2-3731-4f98-a3ff-cea42487c5e1-node-exporter-accelerators-collector-config\") pod \"node-exporter-flc22\" (UID: \"f25169f2-3731-4f98-a3ff-cea42487c5e1\") " pod="openshift-monitoring/node-exporter-flc22" Apr 24 16:45:30.959443 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958572 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/faec62ed-4955-41ae-96c6-7fa5fab7f996-cni-binary-copy\") pod \"multus-additional-cni-plugins-nvmzv\" (UID: \"faec62ed-4955-41ae-96c6-7fa5fab7f996\") " pod="openshift-multus/multus-additional-cni-plugins-nvmzv" Apr 24 16:45:30.959443 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958587 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/7a2b69c8-6958-4cea-abaf-09cf2b9873e9-iptables-alerter-script\") pod \"iptables-alerter-5h96f\" (UID: \"7a2b69c8-6958-4cea-abaf-09cf2b9873e9\") " pod="openshift-network-operator/iptables-alerter-5h96f" Apr 24 16:45:30.959443 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958602 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-systemd-units\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:30.959443 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958636 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/047d5bff-4225-4e75-9651-615adfb54be2-ovnkube-script-lib\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:30.959443 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958658 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-host-var-lib-cni-bin\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:30.959443 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958698 2578 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/4baee54d-0178-40b3-b0ce-ba751c0fbd26-etc-tuned\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:30.959443 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958750 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/5573b3f6-18e3-4f90-bcb4-46fbb336eae5-sys-fs\") pod \"aws-ebs-csi-driver-node-9jr6j\" (UID: \"5573b3f6-18e3-4f90-bcb4-46fbb336eae5\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9jr6j" Apr 24 16:45:30.959443 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958787 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zktmz\" (UniqueName: \"kubernetes.io/projected/faec62ed-4955-41ae-96c6-7fa5fab7f996-kube-api-access-zktmz\") pod \"multus-additional-cni-plugins-nvmzv\" (UID: \"faec62ed-4955-41ae-96c6-7fa5fab7f996\") " pod="openshift-multus/multus-additional-cni-plugins-nvmzv" Apr 24 16:45:30.959443 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958829 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-node-log\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:30.959443 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958854 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/047d5bff-4225-4e75-9651-615adfb54be2-env-overrides\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:30.959443 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958878 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/af10d12d-291a-41fd-8854-3c5ffc4322a3-cni-binary-copy\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:30.959443 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.958902 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/4baee54d-0178-40b3-b0ce-ba751c0fbd26-etc-sysctl-d\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:30.960765 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:30.960744 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/5e89d705-97ba-4bce-a2d2-d806b5547f4f-agent-certs\") pod \"konnectivity-agent-jk7f4\" (UID: \"5e89d705-97ba-4bce-a2d2-d806b5547f4f\") " pod="kube-system/konnectivity-agent-jk7f4" Apr 24 16:45:31.059492 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.059466 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/faec62ed-4955-41ae-96c6-7fa5fab7f996-cni-binary-copy\") pod \"multus-additional-cni-plugins-nvmzv\" 
(UID: \"faec62ed-4955-41ae-96c6-7fa5fab7f996\") " pod="openshift-multus/multus-additional-cni-plugins-nvmzv" Apr 24 16:45:31.059624 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.059496 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/7a2b69c8-6958-4cea-abaf-09cf2b9873e9-iptables-alerter-script\") pod \"iptables-alerter-5h96f\" (UID: \"7a2b69c8-6958-4cea-abaf-09cf2b9873e9\") " pod="openshift-network-operator/iptables-alerter-5h96f" Apr 24 16:45:31.059624 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.059517 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-systemd-units\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.059624 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.059533 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/047d5bff-4225-4e75-9651-615adfb54be2-ovnkube-script-lib\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.059823 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.059608 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-systemd-units\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.059823 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.059656 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-host-var-lib-cni-bin\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:31.059823 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.059687 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/4baee54d-0178-40b3-b0ce-ba751c0fbd26-etc-tuned\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:31.059823 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.059717 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/5573b3f6-18e3-4f90-bcb4-46fbb336eae5-sys-fs\") pod \"aws-ebs-csi-driver-node-9jr6j\" (UID: \"5573b3f6-18e3-4f90-bcb4-46fbb336eae5\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9jr6j" Apr 24 16:45:31.059823 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.059753 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-host-var-lib-cni-bin\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:31.059823 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.059763 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zktmz\" (UniqueName: 
\"kubernetes.io/projected/faec62ed-4955-41ae-96c6-7fa5fab7f996-kube-api-access-zktmz\") pod \"multus-additional-cni-plugins-nvmzv\" (UID: \"faec62ed-4955-41ae-96c6-7fa5fab7f996\") " pod="openshift-multus/multus-additional-cni-plugins-nvmzv" Apr 24 16:45:31.059823 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.059792 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-node-log\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.060146 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.059829 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/047d5bff-4225-4e75-9651-615adfb54be2-env-overrides\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.060146 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.059856 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/af10d12d-291a-41fd-8854-3c5ffc4322a3-cni-binary-copy\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:31.060146 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.059883 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/4baee54d-0178-40b3-b0ce-ba751c0fbd26-etc-sysctl-d\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:31.060146 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.059908 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g9m8p\" (UniqueName: \"kubernetes.io/projected/5573b3f6-18e3-4f90-bcb4-46fbb336eae5-kube-api-access-g9m8p\") pod \"aws-ebs-csi-driver-node-9jr6j\" (UID: \"5573b3f6-18e3-4f90-bcb4-46fbb336eae5\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9jr6j" Apr 24 16:45:31.060146 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.059935 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/faec62ed-4955-41ae-96c6-7fa5fab7f996-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-nvmzv\" (UID: \"faec62ed-4955-41ae-96c6-7fa5fab7f996\") " pod="openshift-multus/multus-additional-cni-plugins-nvmzv" Apr 24 16:45:31.060146 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.059946 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-node-log\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.060146 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.059960 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-var-lib-openvswitch\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.060146 
ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.059984 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-run-ovn\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.060146 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060012 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7c48d729-e644-4376-b836-4a516c44c4d6-hosts-file\") pod \"node-resolver-7v56j\" (UID: \"7c48d729-e644-4376-b836-4a516c44c4d6\") " pod="openshift-dns/node-resolver-7v56j" Apr 24 16:45:31.060146 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060024 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/5573b3f6-18e3-4f90-bcb4-46fbb336eae5-sys-fs\") pod \"aws-ebs-csi-driver-node-9jr6j\" (UID: \"5573b3f6-18e3-4f90-bcb4-46fbb336eae5\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9jr6j" Apr 24 16:45:31.060146 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060038 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4baee54d-0178-40b3-b0ce-ba751c0fbd26-host\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:31.060146 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060068 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-host-cni-netd\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.060146 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060094 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fjjl7\" (UniqueName: \"kubernetes.io/projected/af10d12d-291a-41fd-8854-3c5ffc4322a3-kube-api-access-fjjl7\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:31.060146 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060119 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4baee54d-0178-40b3-b0ce-ba751c0fbd26-tmp\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:31.060146 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060140 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/faec62ed-4955-41ae-96c6-7fa5fab7f996-cni-binary-copy\") pod \"multus-additional-cni-plugins-nvmzv\" (UID: \"faec62ed-4955-41ae-96c6-7fa5fab7f996\") " pod="openshift-multus/multus-additional-cni-plugins-nvmzv" Apr 24 16:45:31.060889 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060192 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/047d5bff-4225-4e75-9651-615adfb54be2-ovnkube-script-lib\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 
16:45:31.060889 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060145 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5573b3f6-18e3-4f90-bcb4-46fbb336eae5-registration-dir\") pod \"aws-ebs-csi-driver-node-9jr6j\" (UID: \"5573b3f6-18e3-4f90-bcb4-46fbb336eae5\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9jr6j" Apr 24 16:45:31.060889 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060241 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/f25169f2-3731-4f98-a3ff-cea42487c5e1-node-exporter-textfile\") pod \"node-exporter-flc22\" (UID: \"f25169f2-3731-4f98-a3ff-cea42487c5e1\") " pod="openshift-monitoring/node-exporter-flc22" Apr 24 16:45:31.060889 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060247 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-run-ovn\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.060889 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060139 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/7a2b69c8-6958-4cea-abaf-09cf2b9873e9-iptables-alerter-script\") pod \"iptables-alerter-5h96f\" (UID: \"7a2b69c8-6958-4cea-abaf-09cf2b9873e9\") " pod="openshift-network-operator/iptables-alerter-5h96f" Apr 24 16:45:31.060889 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060211 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5573b3f6-18e3-4f90-bcb4-46fbb336eae5-registration-dir\") pod \"aws-ebs-csi-driver-node-9jr6j\" (UID: \"5573b3f6-18e3-4f90-bcb4-46fbb336eae5\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9jr6j" Apr 24 16:45:31.060889 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060280 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cph2b\" (UniqueName: \"kubernetes.io/projected/f25169f2-3731-4f98-a3ff-cea42487c5e1-kube-api-access-cph2b\") pod \"node-exporter-flc22\" (UID: \"f25169f2-3731-4f98-a3ff-cea42487c5e1\") " pod="openshift-monitoring/node-exporter-flc22" Apr 24 16:45:31.060889 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060327 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-host-cni-netd\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.060889 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060317 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/faec62ed-4955-41ae-96c6-7fa5fab7f996-os-release\") pod \"multus-additional-cni-plugins-nvmzv\" (UID: \"faec62ed-4955-41ae-96c6-7fa5fab7f996\") " pod="openshift-multus/multus-additional-cni-plugins-nvmzv" Apr 24 16:45:31.060889 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060394 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-run-systemd\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.060889 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060396 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/faec62ed-4955-41ae-96c6-7fa5fab7f996-os-release\") pod \"multus-additional-cni-plugins-nvmzv\" (UID: \"faec62ed-4955-41ae-96c6-7fa5fab7f996\") " pod="openshift-multus/multus-additional-cni-plugins-nvmzv" Apr 24 16:45:31.060889 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060410 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7c48d729-e644-4376-b836-4a516c44c4d6-hosts-file\") pod \"node-resolver-7v56j\" (UID: \"7c48d729-e644-4376-b836-4a516c44c4d6\") " pod="openshift-dns/node-resolver-7v56j" Apr 24 16:45:31.060889 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060458 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-run-systemd\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.060889 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060488 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/047d5bff-4225-4e75-9651-615adfb54be2-env-overrides\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.060889 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060496 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dda8f1f0-9635-43d2-9f82-9831f8800481-metrics-certs\") pod \"network-metrics-daemon-8jmlx\" (UID: \"dda8f1f0-9635-43d2-9f82-9831f8800481\") " pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:45:31.060889 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060534 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/4baee54d-0178-40b3-b0ce-ba751c0fbd26-etc-sysconfig\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:31.060889 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060562 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/f25169f2-3731-4f98-a3ff-cea42487c5e1-node-exporter-wtmp\") pod \"node-exporter-flc22\" (UID: \"f25169f2-3731-4f98-a3ff-cea42487c5e1\") " pod="openshift-monitoring/node-exporter-flc22" Apr 24 16:45:31.061650 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060585 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/f25169f2-3731-4f98-a3ff-cea42487c5e1-root\") pod \"node-exporter-flc22\" (UID: \"f25169f2-3731-4f98-a3ff-cea42487c5e1\") " pod="openshift-monitoring/node-exporter-flc22" Apr 24 16:45:31.061650 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:31.060592 2578 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object 
"openshift-multus"/"metrics-daemon-secret" not registered Apr 24 16:45:31.061650 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060637 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-os-release\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:31.061650 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:31.060679 2578 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dda8f1f0-9635-43d2-9f82-9831f8800481-metrics-certs podName:dda8f1f0-9635-43d2-9f82-9831f8800481 nodeName:}" failed. No retries permitted until 2026-04-24 16:45:31.560655262 +0000 UTC m=+15.157021549 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dda8f1f0-9635-43d2-9f82-9831f8800481-metrics-certs") pod "network-metrics-daemon-8jmlx" (UID: "dda8f1f0-9635-43d2-9f82-9831f8800481") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 24 16:45:31.061650 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060699 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-hostroot\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:31.061650 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060705 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-os-release\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:31.061650 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060748 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-hostroot\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:31.061650 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060793 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/4baee54d-0178-40b3-b0ce-ba751c0fbd26-etc-sysconfig\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:31.061650 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060833 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4baee54d-0178-40b3-b0ce-ba751c0fbd26-host\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:31.061650 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060852 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/4baee54d-0178-40b3-b0ce-ba751c0fbd26-etc-sysctl-d\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:31.061650 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060875 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/af10d12d-291a-41fd-8854-3c5ffc4322a3-cni-binary-copy\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:31.061650 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060911 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/4baee54d-0178-40b3-b0ce-ba751c0fbd26-etc-systemd\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:31.061650 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060915 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-var-lib-openvswitch\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.061650 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060912 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/f25169f2-3731-4f98-a3ff-cea42487c5e1-node-exporter-wtmp\") pod \"node-exporter-flc22\" (UID: \"f25169f2-3731-4f98-a3ff-cea42487c5e1\") " pod="openshift-monitoring/node-exporter-flc22" Apr 24 16:45:31.061650 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060939 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/f25169f2-3731-4f98-a3ff-cea42487c5e1-root\") pod \"node-exporter-flc22\" (UID: \"f25169f2-3731-4f98-a3ff-cea42487c5e1\") " pod="openshift-monitoring/node-exporter-flc22" Apr 24 16:45:31.061650 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060958 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/4baee54d-0178-40b3-b0ce-ba751c0fbd26-etc-systemd\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:31.061650 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.060955 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4baee54d-0178-40b3-b0ce-ba751c0fbd26-sys\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:31.061650 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061003 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4baee54d-0178-40b3-b0ce-ba751c0fbd26-lib-modules\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:31.062570 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061030 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/faec62ed-4955-41ae-96c6-7fa5fab7f996-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nvmzv\" (UID: \"faec62ed-4955-41ae-96c6-7fa5fab7f996\") " pod="openshift-multus/multus-additional-cni-plugins-nvmzv" Apr 24 16:45:31.062570 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061058 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/faec62ed-4955-41ae-96c6-7fa5fab7f996-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-nvmzv\" (UID: \"faec62ed-4955-41ae-96c6-7fa5fab7f996\") " pod="openshift-multus/multus-additional-cni-plugins-nvmzv" Apr 24 16:45:31.062570 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061068 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-host-run-ovn-kubernetes\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.062570 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061090 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4baee54d-0178-40b3-b0ce-ba751c0fbd26-sys\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:31.062570 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061097 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/af10d12d-291a-41fd-8854-3c5ffc4322a3-multus-daemon-config\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:31.062570 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061134 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4baee54d-0178-40b3-b0ce-ba751c0fbd26-lib-modules\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:31.062570 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061134 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-host-run-multus-certs\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:31.062570 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061174 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-host-run-ovn-kubernetes\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.062570 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061183 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4gxs5\" (UniqueName: \"kubernetes.io/projected/dda8f1f0-9635-43d2-9f82-9831f8800481-kube-api-access-4gxs5\") pod \"network-metrics-daemon-8jmlx\" (UID: \"dda8f1f0-9635-43d2-9f82-9831f8800481\") " pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:45:31.062570 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061190 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/f25169f2-3731-4f98-a3ff-cea42487c5e1-node-exporter-textfile\") pod \"node-exporter-flc22\" (UID: \"f25169f2-3731-4f98-a3ff-cea42487c5e1\") " pod="openshift-monitoring/node-exporter-flc22" Apr 24 16:45:31.062570 
ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061209 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4baee54d-0178-40b3-b0ce-ba751c0fbd26-run\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:31.062570 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061220 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/faec62ed-4955-41ae-96c6-7fa5fab7f996-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nvmzv\" (UID: \"faec62ed-4955-41ae-96c6-7fa5fab7f996\") " pod="openshift-multus/multus-additional-cni-plugins-nvmzv" Apr 24 16:45:31.062570 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061272 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-host-run-multus-certs\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:31.062570 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061279 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4baee54d-0178-40b3-b0ce-ba751c0fbd26-run\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:31.062570 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061308 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5rm7t\" (UniqueName: \"kubernetes.io/projected/4baee54d-0178-40b3-b0ce-ba751c0fbd26-kube-api-access-5rm7t\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:31.062570 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061333 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9b309f61-8972-4f0c-b7e8-cfcea2909bf3-host\") pod \"node-ca-lhmfh\" (UID: \"9b309f61-8972-4f0c-b7e8-cfcea2909bf3\") " pod="openshift-image-registry/node-ca-lhmfh" Apr 24 16:45:31.062570 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061358 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-host-cni-bin\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.063413 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061383 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/047d5bff-4225-4e75-9651-615adfb54be2-ovnkube-config\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.063413 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061407 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-multus-socket-dir-parent\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 
16:45:31.063413 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061423 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9b309f61-8972-4f0c-b7e8-cfcea2909bf3-host\") pod \"node-ca-lhmfh\" (UID: \"9b309f61-8972-4f0c-b7e8-cfcea2909bf3\") " pod="openshift-image-registry/node-ca-lhmfh" Apr 24 16:45:31.063413 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061432 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-multus-conf-dir\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:31.063413 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061450 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-host-cni-bin\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.063413 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061458 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-etc-kubernetes\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:31.063413 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061486 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/4baee54d-0178-40b3-b0ce-ba751c0fbd26-etc-sysctl-conf\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:31.063413 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061513 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/5573b3f6-18e3-4f90-bcb4-46fbb336eae5-device-dir\") pod \"aws-ebs-csi-driver-node-9jr6j\" (UID: \"5573b3f6-18e3-4f90-bcb4-46fbb336eae5\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9jr6j" Apr 24 16:45:31.063413 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061513 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-multus-socket-dir-parent\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:31.063413 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061549 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-etc-kubernetes\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:31.063413 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061584 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f25169f2-3731-4f98-a3ff-cea42487c5e1-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-flc22\" (UID: \"f25169f2-3731-4f98-a3ff-cea42487c5e1\") " 
pod="openshift-monitoring/node-exporter-flc22" Apr 24 16:45:31.063413 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061587 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/af10d12d-291a-41fd-8854-3c5ffc4322a3-multus-daemon-config\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:31.063413 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061626 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-multus-conf-dir\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:31.063413 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061639 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/4baee54d-0178-40b3-b0ce-ba751c0fbd26-etc-sysctl-conf\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:31.063413 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061657 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/faec62ed-4955-41ae-96c6-7fa5fab7f996-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nvmzv\" (UID: \"faec62ed-4955-41ae-96c6-7fa5fab7f996\") " pod="openshift-multus/multus-additional-cni-plugins-nvmzv" Apr 24 16:45:31.063413 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061670 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/5573b3f6-18e3-4f90-bcb4-46fbb336eae5-device-dir\") pod \"aws-ebs-csi-driver-node-9jr6j\" (UID: \"5573b3f6-18e3-4f90-bcb4-46fbb336eae5\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9jr6j" Apr 24 16:45:31.063413 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061685 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/f25169f2-3731-4f98-a3ff-cea42487c5e1-node-exporter-tls\") pod \"node-exporter-flc22\" (UID: \"f25169f2-3731-4f98-a3ff-cea42487c5e1\") " pod="openshift-monitoring/node-exporter-flc22" Apr 24 16:45:31.064269 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061711 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z7lmr\" (UniqueName: \"kubernetes.io/projected/9b309f61-8972-4f0c-b7e8-cfcea2909bf3-kube-api-access-z7lmr\") pod \"node-ca-lhmfh\" (UID: \"9b309f61-8972-4f0c-b7e8-cfcea2909bf3\") " pod="openshift-image-registry/node-ca-lhmfh" Apr 24 16:45:31.064269 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061744 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/faec62ed-4955-41ae-96c6-7fa5fab7f996-cnibin\") pod \"multus-additional-cni-plugins-nvmzv\" (UID: \"faec62ed-4955-41ae-96c6-7fa5fab7f996\") " pod="openshift-multus/multus-additional-cni-plugins-nvmzv" Apr 24 16:45:31.064269 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061769 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-selinux\" (UniqueName: 
\"kubernetes.io/host-path/5573b3f6-18e3-4f90-bcb4-46fbb336eae5-etc-selinux\") pod \"aws-ebs-csi-driver-node-9jr6j\" (UID: \"5573b3f6-18e3-4f90-bcb4-46fbb336eae5\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9jr6j" Apr 24 16:45:31.064269 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061824 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-host-run-netns\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.064269 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061850 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-run-openvswitch\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.064269 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061875 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/047d5bff-4225-4e75-9651-615adfb54be2-ovn-node-metrics-cert\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.064269 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061901 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-system-cni-dir\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:31.064269 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061928 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-host-slash\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.064269 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061953 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-etc-openvswitch\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.064269 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.061978 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-log-socket\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.064269 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062003 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4lj5c\" (UniqueName: \"kubernetes.io/projected/7c48d729-e644-4376-b836-4a516c44c4d6-kube-api-access-4lj5c\") pod \"node-resolver-7v56j\" (UID: \"7c48d729-e644-4376-b836-4a516c44c4d6\") " pod="openshift-dns/node-resolver-7v56j" Apr 24 16:45:31.064269 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062030 2578 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-cnibin\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:31.064269 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062043 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/047d5bff-4225-4e75-9651-615adfb54be2-ovnkube-config\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.064269 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062056 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/faec62ed-4955-41ae-96c6-7fa5fab7f996-system-cni-dir\") pod \"multus-additional-cni-plugins-nvmzv\" (UID: \"faec62ed-4955-41ae-96c6-7fa5fab7f996\") " pod="openshift-multus/multus-additional-cni-plugins-nvmzv" Apr 24 16:45:31.064269 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062085 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.064269 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062104 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/faec62ed-4955-41ae-96c6-7fa5fab7f996-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nvmzv\" (UID: \"faec62ed-4955-41ae-96c6-7fa5fab7f996\") " pod="openshift-multus/multus-additional-cni-plugins-nvmzv" Apr 24 16:45:31.064269 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062156 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-host-run-netns\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.065086 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062161 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-run-openvswitch\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.065086 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062201 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-cnibin\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:31.065086 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062205 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-system-cni-dir\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:31.065086 ip-10-0-129-204 kubenswrapper[2578]: 
I0424 16:45:31.062211 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/5573b3f6-18e3-4f90-bcb4-46fbb336eae5-etc-selinux\") pod \"aws-ebs-csi-driver-node-9jr6j\" (UID: \"5573b3f6-18e3-4f90-bcb4-46fbb336eae5\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9jr6j" Apr 24 16:45:31.065086 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062237 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-host-slash\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.065086 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062239 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/faec62ed-4955-41ae-96c6-7fa5fab7f996-cnibin\") pod \"multus-additional-cni-plugins-nvmzv\" (UID: \"faec62ed-4955-41ae-96c6-7fa5fab7f996\") " pod="openshift-multus/multus-additional-cni-plugins-nvmzv" Apr 24 16:45:31.065086 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062271 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-etc-openvswitch\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.065086 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062274 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.065086 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062312 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-log-socket\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.065086 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062347 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/faec62ed-4955-41ae-96c6-7fa5fab7f996-system-cni-dir\") pod \"multus-additional-cni-plugins-nvmzv\" (UID: \"faec62ed-4955-41ae-96c6-7fa5fab7f996\") " pod="openshift-multus/multus-additional-cni-plugins-nvmzv" Apr 24 16:45:31.065086 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062354 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n7php\" (UniqueName: \"kubernetes.io/projected/047d5bff-4225-4e75-9651-615adfb54be2-kube-api-access-n7php\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.065086 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062391 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4baee54d-0178-40b3-b0ce-ba751c0fbd26-etc-kubernetes\") pod \"tuned-gfrd2\" (UID: 
\"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:31.065086 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062465 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5m5bc\" (UniqueName: \"kubernetes.io/projected/c8f211dc-e214-4e02-b487-47c0952e8984-kube-api-access-5m5bc\") pod \"network-check-target-rcps7\" (UID: \"c8f211dc-e214-4e02-b487-47c0952e8984\") " pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:45:31.065086 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062519 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4baee54d-0178-40b3-b0ce-ba751c0fbd26-etc-kubernetes\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:31.065086 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062581 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-host-run-k8s-cni-cncf-io\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:31.065086 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062611 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-host-run-k8s-cni-cncf-io\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:31.065086 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062651 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-host-var-lib-kubelet\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:31.065948 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062685 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5573b3f6-18e3-4f90-bcb4-46fbb336eae5-kubelet-dir\") pod \"aws-ebs-csi-driver-node-9jr6j\" (UID: \"5573b3f6-18e3-4f90-bcb4-46fbb336eae5\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9jr6j" Apr 24 16:45:31.065948 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062696 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-host-var-lib-kubelet\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:31.065948 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062712 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5573b3f6-18e3-4f90-bcb4-46fbb336eae5-socket-dir\") pod \"aws-ebs-csi-driver-node-9jr6j\" (UID: \"5573b3f6-18e3-4f90-bcb4-46fbb336eae5\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9jr6j" Apr 24 16:45:31.065948 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062738 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-host-kubelet\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.065948 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062747 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5573b3f6-18e3-4f90-bcb4-46fbb336eae5-kubelet-dir\") pod \"aws-ebs-csi-driver-node-9jr6j\" (UID: \"5573b3f6-18e3-4f90-bcb4-46fbb336eae5\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9jr6j" Apr 24 16:45:31.065948 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062763 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-host-var-lib-cni-multus\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:31.065948 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062788 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/4baee54d-0178-40b3-b0ce-ba751c0fbd26-etc-modprobe-d\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:31.065948 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062818 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/047d5bff-4225-4e75-9651-615adfb54be2-host-kubelet\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.065948 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062824 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f25169f2-3731-4f98-a3ff-cea42487c5e1-sys\") pod \"node-exporter-flc22\" (UID: \"f25169f2-3731-4f98-a3ff-cea42487c5e1\") " pod="openshift-monitoring/node-exporter-flc22" Apr 24 16:45:31.065948 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062860 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f25169f2-3731-4f98-a3ff-cea42487c5e1-metrics-client-ca\") pod \"node-exporter-flc22\" (UID: \"f25169f2-3731-4f98-a3ff-cea42487c5e1\") " pod="openshift-monitoring/node-exporter-flc22" Apr 24 16:45:31.065948 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062862 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-host-var-lib-cni-multus\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:31.065948 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062862 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5573b3f6-18e3-4f90-bcb4-46fbb336eae5-socket-dir\") pod \"aws-ebs-csi-driver-node-9jr6j\" (UID: \"5573b3f6-18e3-4f90-bcb4-46fbb336eae5\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9jr6j" Apr 24 16:45:31.065948 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062887 2578 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-multus-cni-dir\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:31.065948 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062907 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f25169f2-3731-4f98-a3ff-cea42487c5e1-sys\") pod \"node-exporter-flc22\" (UID: \"f25169f2-3731-4f98-a3ff-cea42487c5e1\") " pod="openshift-monitoring/node-exporter-flc22" Apr 24 16:45:31.065948 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062912 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9b309f61-8972-4f0c-b7e8-cfcea2909bf3-serviceca\") pod \"node-ca-lhmfh\" (UID: \"9b309f61-8972-4f0c-b7e8-cfcea2909bf3\") " pod="openshift-image-registry/node-ca-lhmfh" Apr 24 16:45:31.065948 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062947 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/4baee54d-0178-40b3-b0ce-ba751c0fbd26-etc-modprobe-d\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:31.065948 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062991 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-multus-cni-dir\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:31.065948 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.062996 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7a2b69c8-6958-4cea-abaf-09cf2b9873e9-host-slash\") pod \"iptables-alerter-5h96f\" (UID: \"7a2b69c8-6958-4cea-abaf-09cf2b9873e9\") " pod="openshift-network-operator/iptables-alerter-5h96f" Apr 24 16:45:31.066574 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.063027 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s6x7q\" (UniqueName: \"kubernetes.io/projected/7a2b69c8-6958-4cea-abaf-09cf2b9873e9-kube-api-access-s6x7q\") pod \"iptables-alerter-5h96f\" (UID: \"7a2b69c8-6958-4cea-abaf-09cf2b9873e9\") " pod="openshift-network-operator/iptables-alerter-5h96f" Apr 24 16:45:31.066574 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.063045 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7a2b69c8-6958-4cea-abaf-09cf2b9873e9-host-slash\") pod \"iptables-alerter-5h96f\" (UID: \"7a2b69c8-6958-4cea-abaf-09cf2b9873e9\") " pod="openshift-network-operator/iptables-alerter-5h96f" Apr 24 16:45:31.066574 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.063050 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7c48d729-e644-4376-b836-4a516c44c4d6-tmp-dir\") pod \"node-resolver-7v56j\" (UID: \"7c48d729-e644-4376-b836-4a516c44c4d6\") " pod="openshift-dns/node-resolver-7v56j" Apr 24 16:45:31.066574 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.063075 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-host-run-netns\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:31.066574 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.063098 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4baee54d-0178-40b3-b0ce-ba751c0fbd26-var-lib-kubelet\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:31.066574 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.063123 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/f25169f2-3731-4f98-a3ff-cea42487c5e1-node-exporter-accelerators-collector-config\") pod \"node-exporter-flc22\" (UID: \"f25169f2-3731-4f98-a3ff-cea42487c5e1\") " pod="openshift-monitoring/node-exporter-flc22" Apr 24 16:45:31.066574 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.063198 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/af10d12d-291a-41fd-8854-3c5ffc4322a3-host-run-netns\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:31.066574 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.063270 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4baee54d-0178-40b3-b0ce-ba751c0fbd26-var-lib-kubelet\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:31.066574 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.063420 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4baee54d-0178-40b3-b0ce-ba751c0fbd26-tmp\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:31.066574 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.063512 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7c48d729-e644-4376-b836-4a516c44c4d6-tmp-dir\") pod \"node-resolver-7v56j\" (UID: \"7c48d729-e644-4376-b836-4a516c44c4d6\") " pod="openshift-dns/node-resolver-7v56j" Apr 24 16:45:31.066574 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.063891 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/4baee54d-0178-40b3-b0ce-ba751c0fbd26-etc-tuned\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:31.066574 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.063982 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f25169f2-3731-4f98-a3ff-cea42487c5e1-metrics-client-ca\") pod \"node-exporter-flc22\" (UID: \"f25169f2-3731-4f98-a3ff-cea42487c5e1\") " pod="openshift-monitoring/node-exporter-flc22" Apr 24 16:45:31.066574 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.064029 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f25169f2-3731-4f98-a3ff-cea42487c5e1-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-flc22\" (UID: \"f25169f2-3731-4f98-a3ff-cea42487c5e1\") " pod="openshift-monitoring/node-exporter-flc22" Apr 24 16:45:31.066574 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.064027 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9b309f61-8972-4f0c-b7e8-cfcea2909bf3-serviceca\") pod \"node-ca-lhmfh\" (UID: \"9b309f61-8972-4f0c-b7e8-cfcea2909bf3\") " pod="openshift-image-registry/node-ca-lhmfh" Apr 24 16:45:31.066574 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.064137 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/f25169f2-3731-4f98-a3ff-cea42487c5e1-node-exporter-accelerators-collector-config\") pod \"node-exporter-flc22\" (UID: \"f25169f2-3731-4f98-a3ff-cea42487c5e1\") " pod="openshift-monitoring/node-exporter-flc22" Apr 24 16:45:31.066574 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.064745 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/f25169f2-3731-4f98-a3ff-cea42487c5e1-node-exporter-tls\") pod \"node-exporter-flc22\" (UID: \"f25169f2-3731-4f98-a3ff-cea42487c5e1\") " pod="openshift-monitoring/node-exporter-flc22" Apr 24 16:45:31.066574 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.066175 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/047d5bff-4225-4e75-9651-615adfb54be2-ovn-node-metrics-cert\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.069401 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:31.069381 2578 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 24 16:45:31.069508 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:31.069404 2578 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 24 16:45:31.069508 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:31.069418 2578 projected.go:194] Error preparing data for projected volume kube-api-access-5m5bc for pod openshift-network-diagnostics/network-check-target-rcps7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 24 16:45:31.069508 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.069441 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gxs5\" (UniqueName: \"kubernetes.io/projected/dda8f1f0-9635-43d2-9f82-9831f8800481-kube-api-access-4gxs5\") pod \"network-metrics-daemon-8jmlx\" (UID: \"dda8f1f0-9635-43d2-9f82-9831f8800481\") " pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:45:31.069508 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:31.069478 2578 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c8f211dc-e214-4e02-b487-47c0952e8984-kube-api-access-5m5bc podName:c8f211dc-e214-4e02-b487-47c0952e8984 nodeName:}" failed. 
No retries permitted until 2026-04-24 16:45:31.569462119 +0000 UTC m=+15.165828402 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5m5bc" (UniqueName: "kubernetes.io/projected/c8f211dc-e214-4e02-b487-47c0952e8984-kube-api-access-5m5bc") pod "network-check-target-rcps7" (UID: "c8f211dc-e214-4e02-b487-47c0952e8984") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 24 16:45:31.069718 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.069574 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zktmz\" (UniqueName: \"kubernetes.io/projected/faec62ed-4955-41ae-96c6-7fa5fab7f996-kube-api-access-zktmz\") pod \"multus-additional-cni-plugins-nvmzv\" (UID: \"faec62ed-4955-41ae-96c6-7fa5fab7f996\") " pod="openshift-multus/multus-additional-cni-plugins-nvmzv" Apr 24 16:45:31.070029 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.070004 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cph2b\" (UniqueName: \"kubernetes.io/projected/f25169f2-3731-4f98-a3ff-cea42487c5e1-kube-api-access-cph2b\") pod \"node-exporter-flc22\" (UID: \"f25169f2-3731-4f98-a3ff-cea42487c5e1\") " pod="openshift-monitoring/node-exporter-flc22" Apr 24 16:45:31.070441 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.070419 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjjl7\" (UniqueName: \"kubernetes.io/projected/af10d12d-291a-41fd-8854-3c5ffc4322a3-kube-api-access-fjjl7\") pod \"multus-fsp54\" (UID: \"af10d12d-291a-41fd-8854-3c5ffc4322a3\") " pod="openshift-multus/multus-fsp54" Apr 24 16:45:31.071059 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.071008 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rm7t\" (UniqueName: \"kubernetes.io/projected/4baee54d-0178-40b3-b0ce-ba751c0fbd26-kube-api-access-5rm7t\") pod \"tuned-gfrd2\" (UID: \"4baee54d-0178-40b3-b0ce-ba751c0fbd26\") " pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:31.071186 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.071169 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9m8p\" (UniqueName: \"kubernetes.io/projected/5573b3f6-18e3-4f90-bcb4-46fbb336eae5-kube-api-access-g9m8p\") pod \"aws-ebs-csi-driver-node-9jr6j\" (UID: \"5573b3f6-18e3-4f90-bcb4-46fbb336eae5\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9jr6j" Apr 24 16:45:31.072056 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.072039 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lj5c\" (UniqueName: \"kubernetes.io/projected/7c48d729-e644-4376-b836-4a516c44c4d6-kube-api-access-4lj5c\") pod \"node-resolver-7v56j\" (UID: \"7c48d729-e644-4376-b836-4a516c44c4d6\") " pod="openshift-dns/node-resolver-7v56j" Apr 24 16:45:31.072159 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.072143 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6x7q\" (UniqueName: \"kubernetes.io/projected/7a2b69c8-6958-4cea-abaf-09cf2b9873e9-kube-api-access-s6x7q\") pod \"iptables-alerter-5h96f\" (UID: \"7a2b69c8-6958-4cea-abaf-09cf2b9873e9\") " pod="openshift-network-operator/iptables-alerter-5h96f" Apr 24 16:45:31.072462 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.072443 2578 operation_generator.go:615] "MountVolume.SetUp succeeded 
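[Editor's note] The projected.go errors above typically mean the kubelet's local object cache has not yet registered the two ConfigMaps the kube-api-access volume projects, which is common while a node is still starting; it does not by itself prove the objects are missing from the API server. Below is a hypothetical diagnostic sketch (not part of this log or of the kubelet) that checks the API side with client-go; the kubeconfig location is an assumption.

```go
// checkprojected.go - hypothetical diagnostic, not kubelet code: verify that
// the ConfigMaps a projected kube-api-access volume needs actually exist in
// the pod's namespace (names and namespace taken from the log above).
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig at the default home location is usable.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ns := "openshift-network-diagnostics"
	for _, name := range []string{"kube-root-ca.crt", "openshift-service-ca.crt"} {
		if _, err := cs.CoreV1().ConfigMaps(ns).Get(context.TODO(), name, metav1.GetOptions{}); err != nil {
			fmt.Printf("%s/%s: %v\n", ns, name, err)
			continue
		}
		fmt.Printf("%s/%s: present\n", ns, name)
	}
}
```

If both ConfigMaps exist on the API side, the "not registered" errors are a kubelet-startup ordering artifact and clear once the kubelet's watches catch up, which is consistent with the bounded retries that follow.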
for volume \"kube-api-access-n7php\" (UniqueName: \"kubernetes.io/projected/047d5bff-4225-4e75-9651-615adfb54be2-kube-api-access-n7php\") pod \"ovnkube-node-9d4p6\" (UID: \"047d5bff-4225-4e75-9651-615adfb54be2\") " pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.072508 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.072475 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7lmr\" (UniqueName: \"kubernetes.io/projected/9b309f61-8972-4f0c-b7e8-cfcea2909bf3-kube-api-access-z7lmr\") pod \"node-ca-lhmfh\" (UID: \"9b309f61-8972-4f0c-b7e8-cfcea2909bf3\") " pod="openshift-image-registry/node-ca-lhmfh" Apr 24 16:45:31.104466 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.104445 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-jk7f4" Apr 24 16:45:31.111386 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.111366 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9jr6j" Apr 24 16:45:31.111523 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:31.111494 2578 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e89d705_97ba_4bce_a2d2_d806b5547f4f.slice/crio-9aeb526b4043337e4892af53f417a8d1cf0e826f85ba0c7685d3d06440af27d3 WatchSource:0}: Error finding container 9aeb526b4043337e4892af53f417a8d1cf0e826f85ba0c7685d3d06440af27d3: Status 404 returned error can't find the container with id 9aeb526b4043337e4892af53f417a8d1cf0e826f85ba0c7685d3d06440af27d3 Apr 24 16:45:31.116863 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:31.116842 2578 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5573b3f6_18e3_4f90_bcb4_46fbb336eae5.slice/crio-5beb6e792783a04f995355dde12558df995f7f8b71add5ededf71b374243de5a WatchSource:0}: Error finding container 5beb6e792783a04f995355dde12558df995f7f8b71add5ededf71b374243de5a: Status 404 returned error can't find the container with id 5beb6e792783a04f995355dde12558df995f7f8b71add5ededf71b374243de5a Apr 24 16:45:31.119823 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.119748 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-7v56j" Apr 24 16:45:31.123794 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.123778 2578 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-flc22" Apr 24 16:45:31.125567 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:31.125551 2578 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c48d729_e644_4376_b836_4a516c44c4d6.slice/crio-7913d351af7eefa60ca9483da62440707cea0b474ecf3b956048fd54af4ca6e8 WatchSource:0}: Error finding container 7913d351af7eefa60ca9483da62440707cea0b474ecf3b956048fd54af4ca6e8: Status 404 returned error can't find the container with id 7913d351af7eefa60ca9483da62440707cea0b474ecf3b956048fd54af4ca6e8 Apr 24 16:45:31.130186 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:31.130168 2578 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf25169f2_3731_4f98_a3ff_cea42487c5e1.slice/crio-89b3c1f31443c222b29bf552a328cb3f2bb0c0537c8d1588a9eef79e195d850b WatchSource:0}: Error finding container 89b3c1f31443c222b29bf552a328cb3f2bb0c0537c8d1588a9eef79e195d850b: Status 404 returned error can't find the container with id 89b3c1f31443c222b29bf552a328cb3f2bb0c0537c8d1588a9eef79e195d850b Apr 24 16:45:31.130991 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.130973 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-fsp54" Apr 24 16:45:31.135794 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.135777 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" Apr 24 16:45:31.137129 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:31.137112 2578 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf10d12d_291a_41fd_8854_3c5ffc4322a3.slice/crio-75b140244905464a5d3cb2aea4036c8e00ce0117424dcb81c69869b4342703c4 WatchSource:0}: Error finding container 75b140244905464a5d3cb2aea4036c8e00ce0117424dcb81c69869b4342703c4: Status 404 returned error can't find the container with id 75b140244905464a5d3cb2aea4036c8e00ce0117424dcb81c69869b4342703c4 Apr 24 16:45:31.141249 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.141232 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-lhmfh" Apr 24 16:45:31.142520 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:31.142501 2578 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4baee54d_0178_40b3_b0ce_ba751c0fbd26.slice/crio-717d8e327ac2d6b0a1c880882c49bbd0f7c8e58fdc576279842be7292950fd28 WatchSource:0}: Error finding container 717d8e327ac2d6b0a1c880882c49bbd0f7c8e58fdc576279842be7292950fd28: Status 404 returned error can't find the container with id 717d8e327ac2d6b0a1c880882c49bbd0f7c8e58fdc576279842be7292950fd28 Apr 24 16:45:31.148034 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.148005 2578 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-nvmzv" Apr 24 16:45:31.149385 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:31.148522 2578 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b309f61_8972_4f0c_b7e8_cfcea2909bf3.slice/crio-22b36935e2257d709eb86735aacfd4f153742e322b6243f44774a2706963ae21 WatchSource:0}: Error finding container 22b36935e2257d709eb86735aacfd4f153742e322b6243f44774a2706963ae21: Status 404 returned error can't find the container with id 22b36935e2257d709eb86735aacfd4f153742e322b6243f44774a2706963ae21 Apr 24 16:45:31.154286 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.154267 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5h96f" Apr 24 16:45:31.159224 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.159189 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:45:31.165769 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:45:31.165742 2578 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod047d5bff_4225_4e75_9651_615adfb54be2.slice/crio-a23d9ac9516ecf9e86828b08206277db3626a618085d197b9a57d704f7e6f78c WatchSource:0}: Error finding container a23d9ac9516ecf9e86828b08206277db3626a618085d197b9a57d704f7e6f78c: Status 404 returned error can't find the container with id a23d9ac9516ecf9e86828b08206277db3626a618085d197b9a57d704f7e6f78c Apr 24 16:45:31.287074 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.287052 2578 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Apr 24 16:45:31.567186 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.567154 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dda8f1f0-9635-43d2-9f82-9831f8800481-metrics-certs\") pod \"network-metrics-daemon-8jmlx\" (UID: \"dda8f1f0-9635-43d2-9f82-9831f8800481\") " pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:45:31.567338 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:31.567321 2578 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 24 16:45:31.567404 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:31.567389 2578 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dda8f1f0-9635-43d2-9f82-9831f8800481-metrics-certs podName:dda8f1f0-9635-43d2-9f82-9831f8800481 nodeName:}" failed. No retries permitted until 2026-04-24 16:45:32.567369543 +0000 UTC m=+16.163735827 (durationBeforeRetry 1s). 
Apr 24 16:45:31.567404 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:31.567389 2578 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dda8f1f0-9635-43d2-9f82-9831f8800481-metrics-certs podName:dda8f1f0-9635-43d2-9f82-9831f8800481 nodeName:}" failed. No retries permitted until 2026-04-24 16:45:32.567369543 +0000 UTC m=+16.163735827 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dda8f1f0-9635-43d2-9f82-9831f8800481-metrics-certs") pod "network-metrics-daemon-8jmlx" (UID: "dda8f1f0-9635-43d2-9f82-9831f8800481") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 24 16:45:31.668547 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.668209 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5m5bc\" (UniqueName: \"kubernetes.io/projected/c8f211dc-e214-4e02-b487-47c0952e8984-kube-api-access-5m5bc\") pod \"network-check-target-rcps7\" (UID: \"c8f211dc-e214-4e02-b487-47c0952e8984\") " pod="openshift-network-diagnostics/network-check-target-rcps7"
Apr 24 16:45:31.668547 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:31.668358 2578 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 24 16:45:31.668547 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:31.668376 2578 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 24 16:45:31.668547 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:31.668389 2578 projected.go:194] Error preparing data for projected volume kube-api-access-5m5bc for pod openshift-network-diagnostics/network-check-target-rcps7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 24 16:45:31.668547 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:31.668445 2578 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c8f211dc-e214-4e02-b487-47c0952e8984-kube-api-access-5m5bc podName:c8f211dc-e214-4e02-b487-47c0952e8984 nodeName:}" failed. No retries permitted until 2026-04-24 16:45:32.668425835 +0000 UTC m=+16.264792117 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-5m5bc" (UniqueName: "kubernetes.io/projected/c8f211dc-e214-4e02-b487-47c0952e8984-kube-api-access-5m5bc") pod "network-check-target-rcps7" (UID: "c8f211dc-e214-4e02-b487-47c0952e8984") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 24 16:45:31.853999 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.853959 2578 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-23 16:40:30 +0000 UTC" deadline="2027-11-17 07:39:48.477844409 +0000 UTC"
Apr 24 16:45:31.853999 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.853996 2578 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="13718h54m16.623852172s"
Apr 24 16:45:31.964997 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.964917 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" event={"ID":"047d5bff-4225-4e75-9651-615adfb54be2","Type":"ContainerStarted","Data":"a23d9ac9516ecf9e86828b08206277db3626a618085d197b9a57d704f7e6f78c"}
Apr 24 16:45:31.973682 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.973651 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5h96f" event={"ID":"7a2b69c8-6958-4cea-abaf-09cf2b9873e9","Type":"ContainerStarted","Data":"326c46667767180de804bc59114d4529ce70de4a4ef8e3bf2e53668d0a4cb7c7"}
Apr 24 16:45:31.977383 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.977357 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lhmfh" event={"ID":"9b309f61-8972-4f0c-b7e8-cfcea2909bf3","Type":"ContainerStarted","Data":"22b36935e2257d709eb86735aacfd4f153742e322b6243f44774a2706963ae21"}
Apr 24 16:45:31.980615 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.980583 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-7v56j" event={"ID":"7c48d729-e644-4376-b836-4a516c44c4d6","Type":"ContainerStarted","Data":"7913d351af7eefa60ca9483da62440707cea0b474ecf3b956048fd54af4ca6e8"}
Apr 24 16:45:31.982994 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.982918 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-jk7f4" event={"ID":"5e89d705-97ba-4bce-a2d2-d806b5547f4f","Type":"ContainerStarted","Data":"9aeb526b4043337e4892af53f417a8d1cf0e826f85ba0c7685d3d06440af27d3"}
Apr 24 16:45:31.993555 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.992995 2578 generic.go:358] "Generic (PLEG): container finished" podID="684f5b9427f067db6bbddee483185b81" containerID="6fad5a9e73b1b6b7e8b5ca84d48e5f54753c3a45cb802f5a57537a127507b6bc" exitCode=0
Apr 24 16:45:31.993555 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:31.993261 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-204.ec2.internal" event={"ID":"684f5b9427f067db6bbddee483185b81","Type":"ContainerDied","Data":"6fad5a9e73b1b6b7e8b5ca84d48e5f54753c3a45cb802f5a57537a127507b6bc"}
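[Editor's note] The certificate_manager entries pick a rotation deadline well before the 2028-04-23 expiration and sleep until it. A minimal sketch of that jittered-deadline pattern follows, assuming (not proven by this log) the upstream behavior of rotating at a random point roughly 70-90% of the way through the certificate's lifetime; the issue time below is also an assumption.

```go
// rotation_deadline.go - sketch of a jittered certificate-rotation deadline.
// The 70%-90% window and the notBefore value are assumptions for illustration.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline returns a random instant late in the certificate's
// lifetime, so a fleet of nodes does not all rotate at the same moment.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notBefore := time.Date(2026, 4, 24, 16, 40, 30, 0, time.UTC) // assumed issue time
	notAfter := time.Date(2028, 4, 23, 16, 40, 30, 0, time.UTC)  // expiration from the log
	d := rotationDeadline(notBefore, notAfter)
	fmt.Println("rotate at:", d, "then sleep:", d.Sub(notBefore))
}
```

Both deadlines logged here (2027-11-17, and 2027-12-20 after the recomputation below) land at roughly 78% and 83% of a two-year window, consistent with that kind of jitter.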
event={"ID":"faec62ed-4955-41ae-96c6-7fa5fab7f996","Type":"ContainerStarted","Data":"1d89f3510598ec1af7f4ccd95a8bd5e0d040e3a588b39bd85ba678a13ed6b7c1"} Apr 24 16:45:32.005869 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:32.005846 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" event={"ID":"4baee54d-0178-40b3-b0ce-ba751c0fbd26","Type":"ContainerStarted","Data":"717d8e327ac2d6b0a1c880882c49bbd0f7c8e58fdc576279842be7292950fd28"} Apr 24 16:45:32.010350 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:32.010296 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-proxy-ip-10-0-129-204.ec2.internal" podStartSLOduration=2.010281375 podStartE2EDuration="2.010281375s" podCreationTimestamp="2026-04-24 16:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 16:45:31.095260049 +0000 UTC m=+14.691626335" watchObservedRunningTime="2026-04-24 16:45:32.010281375 +0000 UTC m=+15.606647662" Apr 24 16:45:32.011656 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:32.011633 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fsp54" event={"ID":"af10d12d-291a-41fd-8854-3c5ffc4322a3","Type":"ContainerStarted","Data":"75b140244905464a5d3cb2aea4036c8e00ce0117424dcb81c69869b4342703c4"} Apr 24 16:45:32.026959 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:32.026932 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-flc22" event={"ID":"f25169f2-3731-4f98-a3ff-cea42487c5e1","Type":"ContainerStarted","Data":"89b3c1f31443c222b29bf552a328cb3f2bb0c0537c8d1588a9eef79e195d850b"} Apr 24 16:45:32.032718 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:32.032694 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9jr6j" event={"ID":"5573b3f6-18e3-4f90-bcb4-46fbb336eae5","Type":"ContainerStarted","Data":"5beb6e792783a04f995355dde12558df995f7f8b71add5ededf71b374243de5a"} Apr 24 16:45:32.575787 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:32.575721 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dda8f1f0-9635-43d2-9f82-9831f8800481-metrics-certs\") pod \"network-metrics-daemon-8jmlx\" (UID: \"dda8f1f0-9635-43d2-9f82-9831f8800481\") " pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:45:32.575941 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:32.575903 2578 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 24 16:45:32.576003 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:32.575967 2578 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dda8f1f0-9635-43d2-9f82-9831f8800481-metrics-certs podName:dda8f1f0-9635-43d2-9f82-9831f8800481 nodeName:}" failed. No retries permitted until 2026-04-24 16:45:34.575940505 +0000 UTC m=+18.172306773 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dda8f1f0-9635-43d2-9f82-9831f8800481-metrics-certs") pod "network-metrics-daemon-8jmlx" (UID: "dda8f1f0-9635-43d2-9f82-9831f8800481") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 24 16:45:32.676735 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:32.676705 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5m5bc\" (UniqueName: \"kubernetes.io/projected/c8f211dc-e214-4e02-b487-47c0952e8984-kube-api-access-5m5bc\") pod \"network-check-target-rcps7\" (UID: \"c8f211dc-e214-4e02-b487-47c0952e8984\") " pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:45:32.676880 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:32.676856 2578 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 24 16:45:32.676880 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:32.676874 2578 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 24 16:45:32.677072 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:32.676887 2578 projected.go:194] Error preparing data for projected volume kube-api-access-5m5bc for pod openshift-network-diagnostics/network-check-target-rcps7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 24 16:45:32.677072 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:32.676941 2578 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c8f211dc-e214-4e02-b487-47c0952e8984-kube-api-access-5m5bc podName:c8f211dc-e214-4e02-b487-47c0952e8984 nodeName:}" failed. No retries permitted until 2026-04-24 16:45:34.676921237 +0000 UTC m=+18.273287506 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-5m5bc" (UniqueName: "kubernetes.io/projected/c8f211dc-e214-4e02-b487-47c0952e8984-kube-api-access-5m5bc") pod "network-check-target-rcps7" (UID: "c8f211dc-e214-4e02-b487-47c0952e8984") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 24 16:45:32.855228 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:32.855142 2578 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-23 16:40:30 +0000 UTC" deadline="2027-12-20 13:05:49.358842274 +0000 UTC" Apr 24 16:45:32.855228 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:32.855179 2578 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="14516h20m16.503667164s" Apr 24 16:45:32.919149 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:32.919120 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:45:32.920041 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:32.919254 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8jmlx" podUID="dda8f1f0-9635-43d2-9f82-9831f8800481" Apr 24 16:45:32.920041 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:32.919927 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:45:32.920041 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:32.920026 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rcps7" podUID="c8f211dc-e214-4e02-b487-47c0952e8984" Apr 24 16:45:33.053736 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:33.053703 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-204.ec2.internal" event={"ID":"684f5b9427f067db6bbddee483185b81","Type":"ContainerStarted","Data":"4b3e13749edc50745b9f30911b3ce5cffe7807354c5eb4fde05d2f72d1ca655c"} Apr 24 16:45:33.060922 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:33.060768 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-flc22" event={"ID":"f25169f2-3731-4f98-a3ff-cea42487c5e1","Type":"ContainerStarted","Data":"fde7081e1381037ff669f4507424f2dad7f21a509053da24f7a6531e28096f70"} Apr 24 16:45:33.098212 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:33.098151 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-204.ec2.internal" podStartSLOduration=3.098132899 podStartE2EDuration="3.098132899s" podCreationTimestamp="2026-04-24 16:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 16:45:33.071783463 +0000 UTC m=+16.668149744" watchObservedRunningTime="2026-04-24 16:45:33.098132899 +0000 UTC m=+16.694499185" Apr 24 16:45:34.065834 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:34.065655 2578 generic.go:358] "Generic (PLEG): container finished" podID="f25169f2-3731-4f98-a3ff-cea42487c5e1" containerID="fde7081e1381037ff669f4507424f2dad7f21a509053da24f7a6531e28096f70" exitCode=0 Apr 24 16:45:34.066699 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:34.066364 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-flc22" event={"ID":"f25169f2-3731-4f98-a3ff-cea42487c5e1","Type":"ContainerDied","Data":"fde7081e1381037ff669f4507424f2dad7f21a509053da24f7a6531e28096f70"} Apr 24 16:45:34.595362 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:34.595324 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dda8f1f0-9635-43d2-9f82-9831f8800481-metrics-certs\") pod \"network-metrics-daemon-8jmlx\" (UID: \"dda8f1f0-9635-43d2-9f82-9831f8800481\") " pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:45:34.595548 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:34.595464 2578 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 24 16:45:34.595548 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:34.595533 2578 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/dda8f1f0-9635-43d2-9f82-9831f8800481-metrics-certs podName:dda8f1f0-9635-43d2-9f82-9831f8800481 nodeName:}" failed. No retries permitted until 2026-04-24 16:45:38.595513789 +0000 UTC m=+22.191880058 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dda8f1f0-9635-43d2-9f82-9831f8800481-metrics-certs") pod "network-metrics-daemon-8jmlx" (UID: "dda8f1f0-9635-43d2-9f82-9831f8800481") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 24 16:45:34.696693 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:34.696653 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5m5bc\" (UniqueName: \"kubernetes.io/projected/c8f211dc-e214-4e02-b487-47c0952e8984-kube-api-access-5m5bc\") pod \"network-check-target-rcps7\" (UID: \"c8f211dc-e214-4e02-b487-47c0952e8984\") " pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:45:34.696867 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:34.696843 2578 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 24 16:45:34.696867 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:34.696862 2578 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 24 16:45:34.696968 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:34.696874 2578 projected.go:194] Error preparing data for projected volume kube-api-access-5m5bc for pod openshift-network-diagnostics/network-check-target-rcps7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 24 16:45:34.696968 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:34.696930 2578 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c8f211dc-e214-4e02-b487-47c0952e8984-kube-api-access-5m5bc podName:c8f211dc-e214-4e02-b487-47c0952e8984 nodeName:}" failed. No retries permitted until 2026-04-24 16:45:38.696911402 +0000 UTC m=+22.293277681 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-5m5bc" (UniqueName: "kubernetes.io/projected/c8f211dc-e214-4e02-b487-47c0952e8984-kube-api-access-5m5bc") pod "network-check-target-rcps7" (UID: "c8f211dc-e214-4e02-b487-47c0952e8984") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 24 16:45:34.916769 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:34.916735 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:45:34.916964 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:34.916900 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jmlx" podUID="dda8f1f0-9635-43d2-9f82-9831f8800481" Apr 24 16:45:34.916964 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:34.916953 2578 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:45:34.917102 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:34.917051 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rcps7" podUID="c8f211dc-e214-4e02-b487-47c0952e8984" Apr 24 16:45:36.918903 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:36.918865 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:45:36.919352 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:36.918968 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rcps7" podUID="c8f211dc-e214-4e02-b487-47c0952e8984" Apr 24 16:45:36.919352 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:36.919029 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:45:36.919352 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:36.919107 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jmlx" podUID="dda8f1f0-9635-43d2-9f82-9831f8800481" Apr 24 16:45:38.628404 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:38.628372 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dda8f1f0-9635-43d2-9f82-9831f8800481-metrics-certs\") pod \"network-metrics-daemon-8jmlx\" (UID: \"dda8f1f0-9635-43d2-9f82-9831f8800481\") " pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:45:38.628873 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:38.628509 2578 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 24 16:45:38.628873 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:38.628554 2578 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dda8f1f0-9635-43d2-9f82-9831f8800481-metrics-certs podName:dda8f1f0-9635-43d2-9f82-9831f8800481 nodeName:}" failed. No retries permitted until 2026-04-24 16:45:46.62854123 +0000 UTC m=+30.224907492 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dda8f1f0-9635-43d2-9f82-9831f8800481-metrics-certs") pod "network-metrics-daemon-8jmlx" (UID: "dda8f1f0-9635-43d2-9f82-9831f8800481") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 24 16:45:38.729778 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:38.729739 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5m5bc\" (UniqueName: \"kubernetes.io/projected/c8f211dc-e214-4e02-b487-47c0952e8984-kube-api-access-5m5bc\") pod \"network-check-target-rcps7\" (UID: \"c8f211dc-e214-4e02-b487-47c0952e8984\") " pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:45:38.729939 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:38.729909 2578 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 24 16:45:38.729939 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:38.729927 2578 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 24 16:45:38.729939 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:38.729939 2578 projected.go:194] Error preparing data for projected volume kube-api-access-5m5bc for pod openshift-network-diagnostics/network-check-target-rcps7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 24 16:45:38.730063 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:38.730002 2578 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c8f211dc-e214-4e02-b487-47c0952e8984-kube-api-access-5m5bc podName:c8f211dc-e214-4e02-b487-47c0952e8984 nodeName:}" failed. No retries permitted until 2026-04-24 16:45:46.729981665 +0000 UTC m=+30.326347946 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-5m5bc" (UniqueName: "kubernetes.io/projected/c8f211dc-e214-4e02-b487-47c0952e8984-kube-api-access-5m5bc") pod "network-check-target-rcps7" (UID: "c8f211dc-e214-4e02-b487-47c0952e8984") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 24 16:45:38.917401 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:38.917372 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:45:38.917558 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:38.917410 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:45:38.917558 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:38.917504 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
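[Editor's note] Note the durationBeforeRetry progression across the failed MountVolume operations: 500ms, 1s, 2s, 4s, now 8s, and 16s further down; each failure doubles the wait. A minimal sketch of that doubling policy; the cap is an assumption for illustration, not a value read from this log.

```go
// backoff.go - sketch of the exponential retry pattern visible in the
// durationBeforeRetry values above. maxBackoff is an assumed cap.
package main

import (
	"fmt"
	"time"
)

func main() {
	const maxBackoff = 2 * time.Minute // assumption, not from the log
	d := 500 * time.Millisecond
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d failed, next retry in %v\n", attempt, d)
		d *= 2 // double the wait after every failure
		if d > maxBackoff {
			d = maxBackoff
		}
	}
}
```

The doubling keeps a persistently failing volume from hammering the API while still retrying quickly when, as here, the missing objects are expected to appear within seconds of startup.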
pod="openshift-network-diagnostics/network-check-target-rcps7" podUID="c8f211dc-e214-4e02-b487-47c0952e8984" Apr 24 16:45:38.917684 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:38.917638 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jmlx" podUID="dda8f1f0-9635-43d2-9f82-9831f8800481" Apr 24 16:45:40.916957 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:40.916697 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:45:40.917563 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:40.916701 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:45:40.917563 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:40.917027 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rcps7" podUID="c8f211dc-e214-4e02-b487-47c0952e8984" Apr 24 16:45:40.917563 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:40.917099 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8jmlx" podUID="dda8f1f0-9635-43d2-9f82-9831f8800481" Apr 24 16:45:41.079528 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:41.079501 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-flc22" event={"ID":"f25169f2-3731-4f98-a3ff-cea42487c5e1","Type":"ContainerStarted","Data":"e2065d47378257711f56c0872e22420fbd7db6d8b8e77577c7e53489fac97947"} Apr 24 16:45:41.079662 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:41.079536 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-flc22" event={"ID":"f25169f2-3731-4f98-a3ff-cea42487c5e1","Type":"ContainerStarted","Data":"88a566440a36ccfd18371a6f5da137b9fdd167a126b2fd04546518759ff9f760"} Apr 24 16:45:41.081063 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:41.081031 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9jr6j" event={"ID":"5573b3f6-18e3-4f90-bcb4-46fbb336eae5","Type":"ContainerStarted","Data":"f7d67a498e57fabbf2de079175603dc3cd3d3a3b44d740e58b05ceb62733f937"} Apr 24 16:45:41.082354 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:41.082329 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lhmfh" event={"ID":"9b309f61-8972-4f0c-b7e8-cfcea2909bf3","Type":"ContainerStarted","Data":"8595d14b0eca71a961c8119ac8a2afa0d19dc2e53b1ea5bf1b1a0401b7408bac"} Apr 24 16:45:41.083690 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:41.083659 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-7v56j" event={"ID":"7c48d729-e644-4376-b836-4a516c44c4d6","Type":"ContainerStarted","Data":"90d0e890a48e57626af623aa7a50306303c086cd3ddbd56677b6ae011e75b151"} Apr 24 16:45:41.085085 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:41.085061 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-jk7f4" event={"ID":"5e89d705-97ba-4bce-a2d2-d806b5547f4f","Type":"ContainerStarted","Data":"0e1b99cfccca3c5a9a2697dca7574232ffbf65a1a0e7a5b266f472c7880b8c40"} Apr 24 16:45:41.086505 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:41.086478 2578 generic.go:358] "Generic (PLEG): container finished" podID="faec62ed-4955-41ae-96c6-7fa5fab7f996" containerID="350f0e213567928e286c6c48b9206479a003a8b4ab2bcd590e649f69d5e8e560" exitCode=0 Apr 24 16:45:41.086611 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:41.086528 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nvmzv" event={"ID":"faec62ed-4955-41ae-96c6-7fa5fab7f996","Type":"ContainerDied","Data":"350f0e213567928e286c6c48b9206479a003a8b4ab2bcd590e649f69d5e8e560"} Apr 24 16:45:41.087998 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:41.087982 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" event={"ID":"4baee54d-0178-40b3-b0ce-ba751c0fbd26","Type":"ContainerStarted","Data":"77105a5e555ae1a36091c69ab0f01d99fda372f22d98ec1408ec87c1b7603ddc"} Apr 24 16:45:41.105373 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:41.105355 2578 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/konnectivity-agent-jk7f4" Apr 24 16:45:41.105992 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:41.105977 2578 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/konnectivity-agent-jk7f4" Apr 24 16:45:42.091565 ip-10-0-129-204 kubenswrapper[2578]: I0424 
16:45:42.091525 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5h96f" event={"ID":"7a2b69c8-6958-4cea-abaf-09cf2b9873e9","Type":"ContainerStarted","Data":"b2765fda918fd6477a047c8b506b32a2d90c5e14047826a26141f037482f5d60"} Apr 24 16:45:42.916151 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:42.916127 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:45:42.916286 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:42.916128 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:45:42.916286 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:42.916224 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jmlx" podUID="dda8f1f0-9635-43d2-9f82-9831f8800481" Apr 24 16:45:42.916358 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:42.916293 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rcps7" podUID="c8f211dc-e214-4e02-b487-47c0952e8984" Apr 24 16:45:43.093335 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:43.093308 2578 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 16:45:44.916350 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:44.916144 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:45:44.916734 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:44.916155 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:45:44.916734 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:44.916451 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rcps7" podUID="c8f211dc-e214-4e02-b487-47c0952e8984" Apr 24 16:45:44.916734 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:44.916512 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8jmlx" podUID="dda8f1f0-9635-43d2-9f82-9831f8800481" Apr 24 16:45:46.679077 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:46.679033 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dda8f1f0-9635-43d2-9f82-9831f8800481-metrics-certs\") pod \"network-metrics-daemon-8jmlx\" (UID: \"dda8f1f0-9635-43d2-9f82-9831f8800481\") " pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:45:46.679496 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:46.679178 2578 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 24 16:45:46.679496 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:46.679234 2578 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dda8f1f0-9635-43d2-9f82-9831f8800481-metrics-certs podName:dda8f1f0-9635-43d2-9f82-9831f8800481 nodeName:}" failed. No retries permitted until 2026-04-24 16:46:02.679221488 +0000 UTC m=+46.275587751 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dda8f1f0-9635-43d2-9f82-9831f8800481-metrics-certs") pod "network-metrics-daemon-8jmlx" (UID: "dda8f1f0-9635-43d2-9f82-9831f8800481") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 24 16:45:46.779435 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:46.779400 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5m5bc\" (UniqueName: \"kubernetes.io/projected/c8f211dc-e214-4e02-b487-47c0952e8984-kube-api-access-5m5bc\") pod \"network-check-target-rcps7\" (UID: \"c8f211dc-e214-4e02-b487-47c0952e8984\") " pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:45:46.779596 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:46.779575 2578 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 24 16:45:46.779637 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:46.779600 2578 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 24 16:45:46.779637 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:46.779610 2578 projected.go:194] Error preparing data for projected volume kube-api-access-5m5bc for pod openshift-network-diagnostics/network-check-target-rcps7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 24 16:45:46.779700 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:46.779671 2578 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c8f211dc-e214-4e02-b487-47c0952e8984-kube-api-access-5m5bc podName:c8f211dc-e214-4e02-b487-47c0952e8984 nodeName:}" failed. No retries permitted until 2026-04-24 16:46:02.779646373 +0000 UTC m=+46.376012636 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5m5bc" (UniqueName: "kubernetes.io/projected/c8f211dc-e214-4e02-b487-47c0952e8984-kube-api-access-5m5bc") pod "network-check-target-rcps7" (UID: "c8f211dc-e214-4e02-b487-47c0952e8984") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 24 16:45:46.916701 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:46.916667 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:45:46.916848 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:46.916706 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:45:46.916848 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:46.916778 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jmlx" podUID="dda8f1f0-9635-43d2-9f82-9831f8800481" Apr 24 16:45:46.916967 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:46.916851 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rcps7" podUID="c8f211dc-e214-4e02-b487-47c0952e8984" Apr 24 16:45:48.916086 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:48.916055 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:45:48.916454 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:48.916060 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:45:48.916454 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:48.916159 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rcps7" podUID="c8f211dc-e214-4e02-b487-47c0952e8984" Apr 24 16:45:48.916454 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:48.916241 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8jmlx" podUID="dda8f1f0-9635-43d2-9f82-9831f8800481" Apr 24 16:45:49.884582 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:49.884541 2578 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-129-204.ec2.internal?timeout=10s\": context deadline exceeded" interval="200ms" Apr 24 16:45:50.168455 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:50.168379 2578 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-24T16:45:40Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-24T16:45:40Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-24T16:45:40Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-24T16:45:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"ip-10-0-129-204.ec2.internal\": Patch \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/api/v1/nodes/ip-10-0-129-204.ec2.internal/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 24 16:45:50.916894 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:50.916864 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:45:50.917032 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:50.916864 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:45:50.917032 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:50.916970 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jmlx" podUID="dda8f1f0-9635-43d2-9f82-9831f8800481" Apr 24 16:45:50.917138 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:50.917033 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rcps7" podUID="c8f211dc-e214-4e02-b487-47c0952e8984" Apr 24 16:45:52.916877 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:52.916835 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:45:52.916877 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:52.916878 2578 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:45:52.917434 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:52.917005 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jmlx" podUID="dda8f1f0-9635-43d2-9f82-9831f8800481" Apr 24 16:45:52.917434 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:52.917116 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rcps7" podUID="c8f211dc-e214-4e02-b487-47c0952e8984" Apr 24 16:45:54.916421 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:54.916387 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:45:54.916833 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:54.916388 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:45:54.916833 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:54.916503 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rcps7" podUID="c8f211dc-e214-4e02-b487-47c0952e8984" Apr 24 16:45:54.916833 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:54.916561 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8jmlx" podUID="dda8f1f0-9635-43d2-9f82-9831f8800481" Apr 24 16:45:55.641132 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:55.641103 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kube-system/konnectivity-agent-jk7f4" Apr 24 16:45:55.641383 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:55.641234 2578 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 16:45:55.641614 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:55.641563 2578 prober.go:120] "Probe failed" probeType="Readiness" pod="kube-system/konnectivity-agent-jk7f4" podUID="5e89d705-97ba-4bce-a2d2-d806b5547f4f" containerName="konnectivity-agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 16:45:55.642103 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:55.642077 2578 prober.go:120] "Probe failed" probeType="Readiness" pod="kube-system/konnectivity-agent-jk7f4" podUID="5e89d705-97ba-4bce-a2d2-d806b5547f4f" containerName="konnectivity-agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 16:45:56.916278 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:56.916247 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:45:56.916697 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:56.916247 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:45:56.916697 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:56.916342 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jmlx" podUID="dda8f1f0-9635-43d2-9f82-9831f8800481" Apr 24 16:45:56.916697 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:56.916426 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rcps7" podUID="c8f211dc-e214-4e02-b487-47c0952e8984" Apr 24 16:45:58.916335 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:58.916306 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:45:58.916724 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:45:58.916306 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:45:58.916724 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:58.916456 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8jmlx" podUID="dda8f1f0-9635-43d2-9f82-9831f8800481" Apr 24 16:45:58.916724 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:45:58.916535 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rcps7" podUID="c8f211dc-e214-4e02-b487-47c0952e8984" Apr 24 16:46:00.085220 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:00.085180 2578 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-129-204.ec2.internal?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Apr 24 16:46:00.168767 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:00.168735 2578 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-129-204.ec2.internal\": Get \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/api/v1/nodes/ip-10-0-129-204.ec2.internal?timeout=10s\": context deadline exceeded" Apr 24 16:46:00.916566 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:00.916534 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:46:00.916862 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:00.916671 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rcps7" podUID="c8f211dc-e214-4e02-b487-47c0952e8984" Apr 24 16:46:00.916862 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:00.916718 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:46:00.916862 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:00.916823 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8jmlx" podUID="dda8f1f0-9635-43d2-9f82-9831f8800481" Apr 24 16:46:01.553018 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:01.552994 2578 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock" Apr 24 16:46:01.873550 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:01.873461 2578 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock","Timestamp":"2026-04-24T16:46:01.553012657Z","UUID":"faf675f2-42b8-431e-8577-c3d271bcf32a","Handler":null,"Name":"","Endpoint":""} Apr 24 16:46:01.875759 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:01.875726 2578 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: ebs.csi.aws.com endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock versions: 1.0.0 Apr 24 16:46:01.875759 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:01.875757 2578 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: ebs.csi.aws.com at endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock Apr 24 16:46:02.127778 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:02.127706 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9jr6j" event={"ID":"5573b3f6-18e3-4f90-bcb4-46fbb336eae5","Type":"ContainerStarted","Data":"7b5bbaf556e3bcc9a8aa71b2460357a526c745f71dfce71940c20b88d905a02e"} Apr 24 16:46:02.771220 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:02.771175 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dda8f1f0-9635-43d2-9f82-9831f8800481-metrics-certs\") pod \"network-metrics-daemon-8jmlx\" (UID: \"dda8f1f0-9635-43d2-9f82-9831f8800481\") " pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:46:02.771681 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:02.771277 2578 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 24 16:46:02.771681 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:02.771359 2578 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dda8f1f0-9635-43d2-9f82-9831f8800481-metrics-certs podName:dda8f1f0-9635-43d2-9f82-9831f8800481 nodeName:}" failed. No retries permitted until 2026-04-24 16:46:34.771339168 +0000 UTC m=+78.367705457 (durationBeforeRetry 32s). 
Apr 24 16:46:02.771220 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:02.771175 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dda8f1f0-9635-43d2-9f82-9831f8800481-metrics-certs\") pod \"network-metrics-daemon-8jmlx\" (UID: \"dda8f1f0-9635-43d2-9f82-9831f8800481\") " pod="openshift-multus/network-metrics-daemon-8jmlx"
Apr 24 16:46:02.771681 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:02.771277 2578 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 24 16:46:02.771681 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:02.771359 2578 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dda8f1f0-9635-43d2-9f82-9831f8800481-metrics-certs podName:dda8f1f0-9635-43d2-9f82-9831f8800481 nodeName:}" failed. No retries permitted until 2026-04-24 16:46:34.771339168 +0000 UTC m=+78.367705457 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dda8f1f0-9635-43d2-9f82-9831f8800481-metrics-certs") pod "network-metrics-daemon-8jmlx" (UID: "dda8f1f0-9635-43d2-9f82-9831f8800481") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 24 16:46:02.872254 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:02.872223 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5m5bc\" (UniqueName: \"kubernetes.io/projected/c8f211dc-e214-4e02-b487-47c0952e8984-kube-api-access-5m5bc\") pod \"network-check-target-rcps7\" (UID: \"c8f211dc-e214-4e02-b487-47c0952e8984\") " pod="openshift-network-diagnostics/network-check-target-rcps7"
Apr 24 16:46:02.872425 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:02.872403 2578 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 24 16:46:02.872488 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:02.872432 2578 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 24 16:46:02.872488 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:02.872447 2578 projected.go:194] Error preparing data for projected volume kube-api-access-5m5bc for pod openshift-network-diagnostics/network-check-target-rcps7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 24 16:46:02.872587 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:02.872514 2578 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c8f211dc-e214-4e02-b487-47c0952e8984-kube-api-access-5m5bc podName:c8f211dc-e214-4e02-b487-47c0952e8984 nodeName:}" failed. No retries permitted until 2026-04-24 16:46:34.872496081 +0000 UTC m=+78.468862349 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-5m5bc" (UniqueName: "kubernetes.io/projected/c8f211dc-e214-4e02-b487-47c0952e8984-kube-api-access-5m5bc") pod "network-check-target-rcps7" (UID: "c8f211dc-e214-4e02-b487-47c0952e8984") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 24 16:46:02.916630 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:02.916589 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rcps7"
Apr 24 16:46:02.916767 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:02.916590 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx"
Apr 24 16:46:02.916767 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:02.916717 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rcps7" podUID="c8f211dc-e214-4e02-b487-47c0952e8984"
Apr 24 16:46:02.916900 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:02.916830 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jmlx" podUID="dda8f1f0-9635-43d2-9f82-9831f8800481"
Apr 24 16:46:04.916785 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:04.916757 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx"
Apr 24 16:46:04.917122 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:04.916796 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rcps7"
Apr 24 16:46:04.917122 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:04.916885 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jmlx" podUID="dda8f1f0-9635-43d2-9f82-9831f8800481"
Apr 24 16:46:04.917122 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:04.916951 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rcps7" podUID="c8f211dc-e214-4e02-b487-47c0952e8984"
pod="openshift-network-diagnostics/network-check-target-rcps7" podUID="c8f211dc-e214-4e02-b487-47c0952e8984" Apr 24 16:46:05.134487 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:05.134465 2578 generic.go:358] "Generic (PLEG): container finished" podID="faec62ed-4955-41ae-96c6-7fa5fab7f996" containerID="3c747e3a32ab816670a7d3b30b9a50e57827fe9b51c1cf1bd1ad39e6f5ac67a9" exitCode=0 Apr 24 16:46:05.134589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:05.134553 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nvmzv" event={"ID":"faec62ed-4955-41ae-96c6-7fa5fab7f996","Type":"ContainerDied","Data":"3c747e3a32ab816670a7d3b30b9a50e57827fe9b51c1cf1bd1ad39e6f5ac67a9"} Apr 24 16:46:05.136402 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:05.136382 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9jr6j" event={"ID":"5573b3f6-18e3-4f90-bcb4-46fbb336eae5","Type":"ContainerStarted","Data":"293c982da15fcfe10fe97c994bc66229005ab3855d7d75b1cf12fd9b4c19b911"} Apr 24 16:46:06.534027 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:06.533649 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-flc22" podStartSLOduration=35.282360679 podStartE2EDuration="36.533631946s" podCreationTimestamp="2026-04-24 16:45:30 +0000 UTC" firstStartedPulling="2026-04-24 16:45:31.131832266 +0000 UTC m=+14.728198531" lastFinishedPulling="2026-04-24 16:45:32.38310353 +0000 UTC m=+15.979469798" observedRunningTime="2026-04-24 16:46:06.533213526 +0000 UTC m=+50.129579826" watchObservedRunningTime="2026-04-24 16:46:06.533631946 +0000 UTC m=+50.129998234" Apr 24 16:46:06.580214 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:06.580161 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9jr6j" podStartSLOduration=3.139071879 podStartE2EDuration="36.580143015s" podCreationTimestamp="2026-04-24 16:45:30 +0000 UTC" firstStartedPulling="2026-04-24 16:45:31.118343166 +0000 UTC m=+14.714709428" lastFinishedPulling="2026-04-24 16:46:04.559414288 +0000 UTC m=+48.155780564" observedRunningTime="2026-04-24 16:46:06.579586408 +0000 UTC m=+50.175952694" watchObservedRunningTime="2026-04-24 16:46:06.580143015 +0000 UTC m=+50.176509302" Apr 24 16:46:06.630906 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:06.630855 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-gfrd2" podStartSLOduration=27.683862469 podStartE2EDuration="36.630840216s" podCreationTimestamp="2026-04-24 16:45:30 +0000 UTC" firstStartedPulling="2026-04-24 16:45:31.145594149 +0000 UTC m=+14.741960415" lastFinishedPulling="2026-04-24 16:45:40.092571897 +0000 UTC m=+23.688938162" observedRunningTime="2026-04-24 16:46:06.63072733 +0000 UTC m=+50.227093617" watchObservedRunningTime="2026-04-24 16:46:06.630840216 +0000 UTC m=+50.227206533" Apr 24 16:46:06.703116 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:06.703043 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/konnectivity-agent-jk7f4" podStartSLOduration=27.78191573 podStartE2EDuration="36.703028169s" podCreationTimestamp="2026-04-24 16:45:30 +0000 UTC" firstStartedPulling="2026-04-24 16:45:31.114022006 +0000 UTC m=+14.710388280" lastFinishedPulling="2026-04-24 16:45:40.035134441 +0000 UTC m=+23.631500719" observedRunningTime="2026-04-24 
16:46:06.702595969 +0000 UTC m=+50.298962267" watchObservedRunningTime="2026-04-24 16:46:06.703028169 +0000 UTC m=+50.299394454" Apr 24 16:46:06.737528 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:06.737479 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-7v56j" podStartSLOduration=27.829453785 podStartE2EDuration="36.737461304s" podCreationTimestamp="2026-04-24 16:45:30 +0000 UTC" firstStartedPulling="2026-04-24 16:45:31.127046143 +0000 UTC m=+14.723412405" lastFinishedPulling="2026-04-24 16:45:40.035053648 +0000 UTC m=+23.631419924" observedRunningTime="2026-04-24 16:46:06.737289858 +0000 UTC m=+50.333656144" watchObservedRunningTime="2026-04-24 16:46:06.737461304 +0000 UTC m=+50.333827590" Apr 24 16:46:06.760428 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:06.760293 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-lhmfh" podStartSLOduration=27.871386086 podStartE2EDuration="36.760280968s" podCreationTimestamp="2026-04-24 16:45:30 +0000 UTC" firstStartedPulling="2026-04-24 16:45:31.152204544 +0000 UTC m=+14.748570808" lastFinishedPulling="2026-04-24 16:45:40.041099425 +0000 UTC m=+23.637465690" observedRunningTime="2026-04-24 16:46:06.759603559 +0000 UTC m=+50.355969845" watchObservedRunningTime="2026-04-24 16:46:06.760280968 +0000 UTC m=+50.356647255" Apr 24 16:46:06.779627 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:06.779581 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/iptables-alerter-5h96f" podStartSLOduration=27.905275977 podStartE2EDuration="36.779566584s" podCreationTimestamp="2026-04-24 16:45:30 +0000 UTC" firstStartedPulling="2026-04-24 16:45:31.165228123 +0000 UTC m=+14.761594386" lastFinishedPulling="2026-04-24 16:45:40.039518717 +0000 UTC m=+23.635884993" observedRunningTime="2026-04-24 16:46:06.779248059 +0000 UTC m=+50.375614347" watchObservedRunningTime="2026-04-24 16:46:06.779566584 +0000 UTC m=+50.375932870" Apr 24 16:46:06.917678 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:06.917651 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:46:06.917855 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:06.917745 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:46:06.917855 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:06.917803 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rcps7" podUID="c8f211dc-e214-4e02-b487-47c0952e8984" Apr 24 16:46:06.917855 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:06.917847 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
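[Annotation] Each "Observed pod startup duration" entry carries two numbers: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from that. For node-exporter-flc22 above: 36.533631946s minus roughly 1.251s of pulling gives the reported 35.282360679 (the tracker uses the monotonic m=+ offsets, so plain wall-clock arithmetic lands a few nanoseconds off). A small reproduction of the arithmetic with the timestamps from that entry:

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse(time.RFC3339Nano, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the node-exporter-flc22 entry above.
	created := mustParse("2026-04-24T16:45:30Z")
	firstPull := mustParse("2026-04-24T16:45:31.131832266Z")
	lastPull := mustParse("2026-04-24T16:45:32.38310353Z")
	running := mustParse("2026-04-24T16:46:06.533631946Z")

	e2e := running.Sub(created)          // podStartE2EDuration: 36.533631946s
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: pull time excluded

	fmt.Println(e2e) // 36.533631946s
	fmt.Println(slo) // ~35.2823s, matching the logged 35.282360679 up to clock-source differences
}
```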
pod="openshift-multus/network-metrics-daemon-8jmlx" podUID="dda8f1f0-9635-43d2-9f82-9831f8800481" Apr 24 16:46:08.916435 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:08.916404 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:46:08.916890 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:08.916533 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jmlx" podUID="dda8f1f0-9635-43d2-9f82-9831f8800481" Apr 24 16:46:08.916890 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:08.916570 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:46:08.916890 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:08.916715 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rcps7" podUID="c8f211dc-e214-4e02-b487-47c0952e8984" Apr 24 16:46:10.916061 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:10.915979 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:46:10.916498 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:10.915991 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:46:10.916498 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:10.916108 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jmlx" podUID="dda8f1f0-9635-43d2-9f82-9831f8800481" Apr 24 16:46:10.916498 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:10.916180 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rcps7" podUID="c8f211dc-e214-4e02-b487-47c0952e8984" Apr 24 16:46:12.916958 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:12.916915 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:46:12.916958 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:12.916955 2578 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:46:12.917420 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:12.917043 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jmlx" podUID="dda8f1f0-9635-43d2-9f82-9831f8800481" Apr 24 16:46:12.917420 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:12.917195 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rcps7" podUID="c8f211dc-e214-4e02-b487-47c0952e8984" Apr 24 16:46:14.153202 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:14.153033 2578 generic.go:358] "Generic (PLEG): container finished" podID="faec62ed-4955-41ae-96c6-7fa5fab7f996" containerID="ba12da7fabc78e60b52ecbc32b2f2edef483502ad1371741231a82e4ac32a497" exitCode=0 Apr 24 16:46:14.153955 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:14.153103 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nvmzv" event={"ID":"faec62ed-4955-41ae-96c6-7fa5fab7f996","Type":"ContainerDied","Data":"ba12da7fabc78e60b52ecbc32b2f2edef483502ad1371741231a82e4ac32a497"} Apr 24 16:46:14.154689 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:14.154662 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fsp54" event={"ID":"af10d12d-291a-41fd-8854-3c5ffc4322a3","Type":"ContainerStarted","Data":"50850737b0e6e8bd1232863aa735d547cd2a5098c456ae1291989295197b7a1a"} Apr 24 16:46:14.157471 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:14.157449 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" event={"ID":"047d5bff-4225-4e75-9651-615adfb54be2","Type":"ContainerStarted","Data":"d8c119057f034dbc8d5c2c8fe48bc645a0e89eabcd8b42bb842798054d2a0d3b"} Apr 24 16:46:14.157538 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:14.157476 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" event={"ID":"047d5bff-4225-4e75-9651-615adfb54be2","Type":"ContainerStarted","Data":"c8c3a19bef2196d0004ee0bae78db034e2b7c2c26aaf6dc5b3bfe64c447927a5"} Apr 24 16:46:14.157538 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:14.157486 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" event={"ID":"047d5bff-4225-4e75-9651-615adfb54be2","Type":"ContainerStarted","Data":"0e28b796161b154e2472e658a0e69a64687be3f6bcc2f02045437862e491b134"} Apr 24 16:46:14.157538 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:14.157494 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" event={"ID":"047d5bff-4225-4e75-9651-615adfb54be2","Type":"ContainerStarted","Data":"f96465f71cce1ae75927bb699559859d11b6df0188bff92829365e0162cac837"} Apr 24 16:46:14.157538 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:14.157502 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" 
event={"ID":"047d5bff-4225-4e75-9651-615adfb54be2","Type":"ContainerStarted","Data":"2867476c7d05539a46e0e16455cbc913ebe46d76401c3389364d1296f6057a5e"} Apr 24 16:46:14.157538 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:14.157510 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" event={"ID":"047d5bff-4225-4e75-9651-615adfb54be2","Type":"ContainerStarted","Data":"e2459ea536639beb3eae686171e450ce20c8c7849f0d2915ed737b90aa5ea797"} Apr 24 16:46:14.916065 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:14.916034 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:46:14.916218 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:14.916078 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:46:14.916218 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:14.916155 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jmlx" podUID="dda8f1f0-9635-43d2-9f82-9831f8800481" Apr 24 16:46:14.916218 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:14.916211 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rcps7" podUID="c8f211dc-e214-4e02-b487-47c0952e8984" Apr 24 16:46:16.916718 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:16.916557 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:46:16.917153 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:16.916581 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:46:16.917153 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:16.916786 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jmlx" podUID="dda8f1f0-9635-43d2-9f82-9831f8800481" Apr 24 16:46:16.917153 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:16.916910 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-rcps7" podUID="c8f211dc-e214-4e02-b487-47c0952e8984" Apr 24 16:46:17.164922 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:17.164889 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" event={"ID":"047d5bff-4225-4e75-9651-615adfb54be2","Type":"ContainerStarted","Data":"96c01eab6ee6c41c018298751a06d59f827d3110361b152ea1f87e5d0b733da8"} Apr 24 16:46:18.916260 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:18.916226 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:46:18.916670 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:18.916226 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:46:18.916670 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:18.916352 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jmlx" podUID="dda8f1f0-9635-43d2-9f82-9831f8800481" Apr 24 16:46:18.916670 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:18.916412 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rcps7" podUID="c8f211dc-e214-4e02-b487-47c0952e8984" Apr 24 16:46:19.172476 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:19.172405 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" event={"ID":"047d5bff-4225-4e75-9651-615adfb54be2","Type":"ContainerStarted","Data":"0317b0d80c591c552e5a9fb16ae96caf4aa32cab16f65a16556bfd9350322063"} Apr 24 16:46:19.172766 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:19.172739 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:46:19.172878 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:19.172770 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:46:19.172878 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:19.172783 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:46:19.189848 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:19.189797 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:46:19.189940 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:19.189893 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:46:20.916711 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:20.916661 2578 util.go:30] "No sandbox for pod can be found. 
Apr 24 16:46:20.916711 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:20.916661 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx"
Apr 24 16:46:20.917104 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:20.916672 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rcps7"
Apr 24 16:46:20.917104 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:20.916781 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jmlx" podUID="dda8f1f0-9635-43d2-9f82-9831f8800481"
Apr 24 16:46:20.917104 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:20.916838 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rcps7" podUID="c8f211dc-e214-4e02-b487-47c0952e8984"
Apr 24 16:46:22.916759 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:22.916729 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx"
Apr 24 16:46:22.917165 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:22.916736 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rcps7"
Apr 24 16:46:22.917165 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:22.916873 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jmlx" podUID="dda8f1f0-9635-43d2-9f82-9831f8800481"
Apr 24 16:46:22.917165 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:22.916896 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rcps7" podUID="c8f211dc-e214-4e02-b487-47c0952e8984"
Apr 24 16:46:24.916382 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:24.916279 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx"
Apr 24 16:46:24.916382 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:24.916281 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rcps7"
Apr 24 16:46:24.916382 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:24.916370 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jmlx" podUID="dda8f1f0-9635-43d2-9f82-9831f8800481"
Apr 24 16:46:24.916864 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:24.916456 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rcps7" podUID="c8f211dc-e214-4e02-b487-47c0952e8984"
Apr 24 16:46:25.642410 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:25.642378 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/konnectivity-agent-jk7f4"
pod="openshift-multus/network-metrics-daemon-8jmlx" podUID="dda8f1f0-9635-43d2-9f82-9831f8800481" Apr 24 16:46:28.916976 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:28.916679 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rcps7" podUID="c8f211dc-e214-4e02-b487-47c0952e8984" Apr 24 16:46:30.916710 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:30.916682 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:46:30.917148 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:30.916837 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:46:30.917148 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:30.916850 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jmlx" podUID="dda8f1f0-9635-43d2-9f82-9831f8800481" Apr 24 16:46:30.917148 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:30.916953 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rcps7" podUID="c8f211dc-e214-4e02-b487-47c0952e8984" Apr 24 16:46:32.197946 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:32.197670 2578 generic.go:358] "Generic (PLEG): container finished" podID="faec62ed-4955-41ae-96c6-7fa5fab7f996" containerID="8cf48474d2ac7cbf7955a2e69ff7f46a54e430db5b1e24557c2ef7861f65d0d5" exitCode=0 Apr 24 16:46:32.197946 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:32.197752 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nvmzv" event={"ID":"faec62ed-4955-41ae-96c6-7fa5fab7f996","Type":"ContainerDied","Data":"8cf48474d2ac7cbf7955a2e69ff7f46a54e430db5b1e24557c2ef7861f65d0d5"} Apr 24 16:46:32.916376 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:32.916344 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:46:32.916538 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:32.916484 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jmlx" podUID="dda8f1f0-9635-43d2-9f82-9831f8800481" Apr 24 16:46:32.916538 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:32.916524 2578 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:46:32.916624 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:32.916598 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rcps7" podUID="c8f211dc-e214-4e02-b487-47c0952e8984" Apr 24 16:46:34.803364 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:34.803327 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dda8f1f0-9635-43d2-9f82-9831f8800481-metrics-certs\") pod \"network-metrics-daemon-8jmlx\" (UID: \"dda8f1f0-9635-43d2-9f82-9831f8800481\") " pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:46:34.803857 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:34.803503 2578 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 24 16:46:34.803857 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:34.803587 2578 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dda8f1f0-9635-43d2-9f82-9831f8800481-metrics-certs podName:dda8f1f0-9635-43d2-9f82-9831f8800481 nodeName:}" failed. No retries permitted until 2026-04-24 16:47:38.803567455 +0000 UTC m=+142.399933724 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dda8f1f0-9635-43d2-9f82-9831f8800481-metrics-certs") pod "network-metrics-daemon-8jmlx" (UID: "dda8f1f0-9635-43d2-9f82-9831f8800481") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 24 16:46:34.904009 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:34.903975 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5m5bc\" (UniqueName: \"kubernetes.io/projected/c8f211dc-e214-4e02-b487-47c0952e8984-kube-api-access-5m5bc\") pod \"network-check-target-rcps7\" (UID: \"c8f211dc-e214-4e02-b487-47c0952e8984\") " pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:46:34.904173 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:34.904094 2578 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 24 16:46:34.904173 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:34.904114 2578 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 24 16:46:34.904173 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:34.904127 2578 projected.go:194] Error preparing data for projected volume kube-api-access-5m5bc for pod openshift-network-diagnostics/network-check-target-rcps7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 24 16:46:34.904288 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:34.904180 2578 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c8f211dc-e214-4e02-b487-47c0952e8984-kube-api-access-5m5bc podName:c8f211dc-e214-4e02-b487-47c0952e8984 nodeName:}" failed. 
No retries permitted until 2026-04-24 16:47:38.904164823 +0000 UTC m=+142.500531086 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-5m5bc" (UniqueName: "kubernetes.io/projected/c8f211dc-e214-4e02-b487-47c0952e8984-kube-api-access-5m5bc") pod "network-check-target-rcps7" (UID: "c8f211dc-e214-4e02-b487-47c0952e8984") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 24 16:46:34.915966 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:34.915941 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:46:34.915966 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:34.915951 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:46:34.916144 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:34.916059 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rcps7" podUID="c8f211dc-e214-4e02-b487-47c0952e8984" Apr 24 16:46:34.916202 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:34.916163 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jmlx" podUID="dda8f1f0-9635-43d2-9f82-9831f8800481" Apr 24 16:46:36.753513 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:36.753471 2578 controller.go:195] "Failed to update lease" err="Put \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-129-204.ec2.internal?timeout=10s\": context deadline exceeded" Apr 24 16:46:36.916951 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:36.916919 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:46:36.917107 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:36.916919 2578 util.go:30] "No sandbox for pod can be found. 
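[Annotation] "object ... not registered" does not necessarily mean the ConfigMap or Secret is missing from the API server; it is the kubelet's node-local secret/configmap cache reporting that it has no registration for the object yet (plausibly because the pods were admitted before the API connection stabilized), so the mounts stay in backoff. If in doubt, the objects can be checked directly against the API; a hypothetical client-go spot check using the names from the failures above:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// The objects the volume plugins report as "not registered" above.
	for _, name := range []string{"kube-root-ca.crt", "openshift-service-ca.crt"} {
		_, err := client.CoreV1().ConfigMaps("openshift-network-diagnostics").Get(ctx, name, metav1.GetOptions{})
		fmt.Printf("configmap %s: err=%v\n", name, err)
	}
	_, err = client.CoreV1().Secrets("openshift-multus").Get(ctx, "metrics-daemon-secret", metav1.GetOptions{})
	fmt.Printf("secret metrics-daemon-secret: err=%v\n", err)
}
```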
Apr 24 16:46:34.915966 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:34.915941 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rcps7"
Apr 24 16:46:34.915966 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:34.915951 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx"
Apr 24 16:46:34.916144 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:34.916059 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-rcps7" podUID="c8f211dc-e214-4e02-b487-47c0952e8984"
Apr 24 16:46:34.916202 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:34.916163 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jmlx" podUID="dda8f1f0-9635-43d2-9f82-9831f8800481"
Apr 24 16:46:36.753513 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:36.753471 2578 controller.go:195] "Failed to update lease" err="Put \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-129-204.ec2.internal?timeout=10s\": context deadline exceeded"
Apr 24 16:46:36.916951 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:36.916919 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx"
Apr 24 16:46:36.917107 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:36.916919 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rcps7"
Apr 24 16:46:36.938507 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:36.938409 2578 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-24T16:46:26Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-24T16:46:26Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-24T16:46:26Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-24T16:46:26Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21aab62140b42b6dc9b5c8143084d89ee3e938eba8811eb0479fc2b6ad6bbd6e\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c303211b1d12d2daa9d18ed470194f6ce86e46174cc35d12b38f06834cb65cb0\\\"],\\\"sizeBytes\\\":1592330346},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a61807a85f6a37f17eb42644bbc038bc04fc489626435da28b0d2c4a30f7e02a\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da4bec2f08680a3155ddcbb96f8594244976dae6fc08fc0f5878c4b0a456b92e\\\"],\\\"sizeBytes\\\":1267137864},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ff861a4f4064f34ed8215c549b58ea833762ff00985f897190743095344c8b2\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b1a64699b0d35f7d206a46217f6b854077ea5e4524b566ded00c64cc85d4c1be\\\"],\\\"sizeBytes\\\":1065600018},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:99acb485c40736a41dca54d0a983d561e9f0cd87b0a3256d1e5ce0e0d45174b6\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1fc1fcb9645517ab568f2e99b25ded04cfb3971b75bf72188b75347d2808c7b\\\"],\\\"sizeBytes\\\":1065006420},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77f7e33000484395db2216d431d7f91158cbb0ddee564b65288f4ec47e3188b8\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:786bf1f34d3636f95860ebe748f9dc62b84102c612a5b21ae6750c52e9eea253\\\"],\\\"sizeBytes\\\":727300480},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f2123db9953f98da0e43c266e6a8a070bf221533b995f87b7e358cc7498ca6d\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:96316433550661db3ef74c1200d3edc0ec9d0b87f2b41589aa7b5e841b6660e3\\\"],\\\"sizeBytes\\\":701151772},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24a4540aecd65dc2af9b2023150dfb2d385169654f781efe70df51c623076d78\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:331caf7efdfbf739b1585570e0004ebd8b5301a6977fbc7b2c64a07475354bc8\\\"],\\\"sizeBytes\\\":534708291},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82dc461ff286831f7476efc8de45fd918b894d4a80d9c285e9a9141fe43b993b\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f7e28d9938f33314269d8079649373b4befb36895b8636c4c2ec3c0fee47c91c\\\"],\\\"sizeBytes\\\":533474192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:abb4d487159fbc9b7148c690cd3a6ee638680f8f879ff6195ca1be5b393705b0\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c98201142213b52a3c1909f45800b5974157672377ecb8c102621ef164337008\\\"],\\\"sizeBytes\\\":514965743},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:294af5c64228434d1ed6ee8ea3ac802e3c999aa847223e3b2efa18425a9fe421\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d97bd10b7c241845d0ed15e34f8d45e82126c1f184316dea148ffabc1cd670a\\\"],\\\"sizeBytes\\\":488332864},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:44136b348d0b6fc6ca07bd61902f4901177c2185045a72b42b2a1d2ad050e0f9\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e5d90e04210b2195777322c3270bbeb4397c72a84b5945ccccbb258ed770fb\\\"],\\\"sizeBytes\\\":480736321},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9b333f47df911dd05a9a10cb8ea988e97d3174490b348b8e46a161d9581d64\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:38b41ae697f031205813679347380d7f258be2a57902ad4494285782a241086b\\\"],\\\"sizeBytes\\\":474198918},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c1b871a1e7148de8d1101e925186df33318adc5adffbaba3f2f13af71b08367\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\\\"],\\\"sizeBytes\\\":468435751},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f7b139fc67972daf070411a2137da81f179d753ddaafa8d3c791165a9564dff\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49ae6b645f147135aff6d0f464fcd64972abf57733f027318ca7376691eece01\\\"],\\\"sizeBytes\\\":426505480},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:165f05fdd7b633269db2465df57b674feec3a050388e931c6a481546e7b63ae9\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:232019e2eb8e13139570277b223ff822086fa83edc73958cbf919d6b57118068\\\"],\\\"sizeBytes\\\":426337527},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8b4514b20ab8125dc4f2ee9661f8363c837031926b40e7c54a36d1efa08456d1\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d87b9fedbc92cc502b5f435d9d5798507256bad49eda2040ac3645623616b5f5\\\"],\\\"sizeBytes\\\":420585449},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91034a2e8fa729a060ad18831a3c6e5de5d2b7b3de437b198ddc24fcb724dcf6\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8f85524150d750c02366f1cff4380fbe657bea321e18b6f2c12c16153bae7e0\\\"],\\\"sizeBytes\\\":412926967},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76a0009edee9ca69023b834b7eff2d2885fc5d8744dc34a058abc09ca6e45518\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8cfe89231412ff3ee8cb6207fa0be33cad0f08e88c9c0f1e9f7e8c6f14d6715\\\"],\\\"sizeBytes\\\":396599503}]}}\" for node \"ip-10-0-129-204.ec2.internal\": Patch \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/api/v1/nodes/ip-10-0-129-204.ec2.internal/status?timeout=10s\": context deadline exceeded"
Apr 24 16:46:39.214178 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:39.214141 2578 generic.go:358] "Generic (PLEG): container finished" podID="faec62ed-4955-41ae-96c6-7fa5fab7f996" containerID="e0f7614471e9dc05328d154b0c6959524153e25aa56b604a6e5f271e6adc5d92" exitCode=0
Apr 24 16:46:39.214570 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:39.214219 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nvmzv" event={"ID":"faec62ed-4955-41ae-96c6-7fa5fab7f996","Type":"ContainerDied","Data":"e0f7614471e9dc05328d154b0c6959524153e25aa56b604a6e5f271e6adc5d92"}
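[Annotation] The failed heartbeat above is a strategic-merge patch against the nodes/status subresource: besides the four refreshed conditions it carries the node's image list, which is what makes the payload large enough to hurt over a flaky API connection. A trimmed client-go sketch of the same call shape; the patch body here is a minimal stand-in for the logged one, not the kubelet's full payload:

```go
package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	cfg.Timeout = 10 * time.Second // matches ?timeout=10s in the failing Patch URL
	client := kubernetes.NewForConfigOrDie(cfg)

	// Minimal version of the heartbeat patch body from the log; the real
	// payload also carries $setElementOrder directives and the image list.
	patch := []byte(`{"status":{"conditions":[{"type":"Ready","lastHeartbeatTime":"2026-04-24T16:46:26Z"}]}}`)

	_, err = client.CoreV1().Nodes().Patch(context.Background(),
		"ip-10-0-129-204.ec2.internal", types.StrategicMergePatchType,
		patch, metav1.PatchOptions{}, "status")
	if err != nil {
		log.Printf("Error updating node status, will retry: %v", err)
	}
}
```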
event={"ID":"faec62ed-4955-41ae-96c6-7fa5fab7f996","Type":"ContainerDied","Data":"e0f7614471e9dc05328d154b0c6959524153e25aa56b604a6e5f271e6adc5d92"} Apr 24 16:46:40.218644 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:40.218607 2578 generic.go:358] "Generic (PLEG): container finished" podID="faec62ed-4955-41ae-96c6-7fa5fab7f996" containerID="444741c13052bc8f4168950b16b06e25554f56a96dd7550fa32348736f28efca" exitCode=0 Apr 24 16:46:40.219027 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:40.218663 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nvmzv" event={"ID":"faec62ed-4955-41ae-96c6-7fa5fab7f996","Type":"ContainerDied","Data":"444741c13052bc8f4168950b16b06e25554f56a96dd7550fa32348736f28efca"} Apr 24 16:46:41.223316 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:41.223109 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nvmzv" event={"ID":"faec62ed-4955-41ae-96c6-7fa5fab7f996","Type":"ContainerStarted","Data":"44c8b6aedf551399be7bb357d3d0930f9d9b1171c05e8ea34e1c40ed320e46a0"} Apr 24 16:46:46.754007 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:46.753965 2578 controller.go:195] "Failed to update lease" err="Put \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-129-204.ec2.internal?timeout=10s\": context deadline exceeded" Apr 24 16:46:46.939276 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:46.939242 2578 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-129-204.ec2.internal\": Get \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/api/v1/nodes/ip-10-0-129-204.ec2.internal?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 24 16:46:51.187735 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:51.187701 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" Apr 24 16:46:52.291911 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.291867 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.291911 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.291887 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-monitoring\"/\"node-exporter-dockercfg-fvkdr\"" type="*v1.Secret" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.291911 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.291906 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.291911 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.291894 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" type="*v1.ConfigMap" err="an error 
on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.291943 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.291955 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.291980 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292001 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292016 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" type="*v1.Secret" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292022 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-mlsxz\"" type="*v1.Secret" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292011 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-sa-dockercfg-8jfrg\"" type="*v1.Secret" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292052 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292062 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-cluster-csi-drivers\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" err="an error on the server (\"unable 
to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292020 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-monitoring\"/\"node-exporter-accelerators-collector-config\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292087 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-monitoring\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292098 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-wwhfn\"" type="*v1.Secret" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292111 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292115 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292120 2578 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292124 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"openshift-service-ca.crt\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292131 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"kube-system\"/\"default-dockercfg-lcsbh\"" type="*v1.Secret" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292158 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has 
prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292164 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"tuned-dockercfg-z9jzk\"" type="*v1.Secret" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292166 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-monitoring\"/\"metrics-client-ca\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292174 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292160 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-monitoring\"/\"node-exporter-kube-rbac-proxy-config\"" type="*v1.Secret" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292185 2578 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292192 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-dockercfg-9h4fl\"" type="*v1.Secret" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292159 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292207 2578 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292161 2578 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 
kubenswrapper[2578]: I0424 16:46:52.292220 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-monitoring\"/\"node-exporter-tls\"" type="*v1.Secret" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292226 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292234 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292227 2578 reflector.go:556] "Warning: watch ended with error" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292236 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-cluster-csi-drivers\"/\"openshift-service-ca.crt\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292129 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292257 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292208 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292206 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-7wgt2\"" type="*v1.Secret" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292241 2578 reflector.go:556] "Warning: watch ended with error" 
reflector="object-\"kube-system\"/\"konnectivity-ca-bundle\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292293 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"kube-system\"/\"konnectivity-agent\"" type="*v1.Secret" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292299 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-crgbx\"" type="*v1.Secret" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:52.292107 2578 controller.go:195] "Failed to update lease" err="Put \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-129-204.ec2.internal?timeout=10s\": http2: client connection lost" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292357 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292300 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-monitoring\"/\"openshift-service-ca.crt\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.292589 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292275 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-multus\"/\"default-dockercfg-bbbr5\"" type="*v1.Secret" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:52.294041 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:52.292311 2578 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/openshift-multus/events\": http2: client connection lost" event="&Event{ObjectMeta:{multus-additional-cni-plugins-nvmzv.18a958ceeaf3fdfb openshift-multus 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-multus,Name:multus-additional-cni-plugins-nvmzv,UID:faec62ed-4955-41ae-96c6-7fa5fab7f996,APIVersion:v1,ResourceVersion:10962,FieldPath:spec.initContainers{bond-cni-plugin},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91034a2e8fa729a060ad18831a3c6e5de5d2b7b3de437b198ddc24fcb724dcf6\" in 8.567s (8.567s including waiting). 
Image size: 412926967 bytes.,Source:EventSource{Component:kubelet,Host:ip-10-0-129-204.ec2.internal,},FirstTimestamp:2026-04-24 16:46:13.702737403 +0000 UTC m=+57.299103671,LastTimestamp:2026-04-24 16:46:13.702737403 +0000 UTC m=+57.299103671,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-129-204.ec2.internal,}" Apr 24 16:46:52.294041 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:52.292271 2578 reflector.go:556] "Warning: watch ended with error" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 24 16:46:56.940476 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:46:56.940440 2578 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-129-204.ec2.internal\": Get \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/api/v1/nodes/ip-10-0-129-204.ec2.internal?timeout=10s\": context deadline exceeded - error from a previous attempt: http2: client connection lost" Apr 24 16:46:59.629107 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.629075 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Apr 24 16:46:59.630340 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.630323 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Apr 24 16:46:59.631334 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.631316 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Apr 24 16:46:59.636057 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.636042 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-7wgt2\"" Apr 24 16:46:59.636110 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.636056 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-mlsxz\"" Apr 24 16:46:59.636146 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.636114 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Apr 24 16:46:59.636227 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.636207 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-tls\"" Apr 24 16:46:59.636274 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.636266 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-wwhfn\"" Apr 24 16:46:59.636697 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.636678 2578 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Apr 24 16:46:59.637186 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.637172 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Apr 24 16:46:59.637230 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.637183 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Apr 24 16:46:59.637336 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.637317 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Apr 24 16:46:59.637399 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.637352 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kube-system\"/\"konnectivity-ca-bundle\"" Apr 24 16:46:59.637466 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.637444 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"tuned-dockercfg-z9jzk\"" Apr 24 16:46:59.637522 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.637456 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"openshift-service-ca.crt\"" Apr 24 16:46:59.637522 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.637514 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Apr 24 16:46:59.637616 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.637547 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-crgbx\"" Apr 24 16:46:59.637616 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.637579 2578 trace.go:236] Trace[550895753]: "Reflector ListAndWatch" name:object-"openshift-network-diagnostics"/"default-dockercfg-v7jl2" (24-Apr-2026 16:46:36.917) (total time: 22720ms): Apr 24 16:46:59.637616 ip-10-0-129-204 kubenswrapper[2578]: Trace[550895753]: ---"Objects listed" error: 22720ms (16:46:59.637) Apr 24 16:46:59.637616 ip-10-0-129-204 kubenswrapper[2578]: Trace[550895753]: [22.720358685s] [22.720358685s] END Apr 24 16:46:59.637616 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.637591 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-v7jl2\"" Apr 24 16:46:59.643232 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.643211 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kube-root-ca.crt\"" Apr 24 16:46:59.643431 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.643417 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"kube-root-ca.crt\"" Apr 24 16:46:59.644029 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.644014 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-kube-rbac-proxy-config\"" Apr 24 16:46:59.644069 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.644060 2578 trace.go:236] Trace[770891815]: "Reflector ListAndWatch" name:object-"openshift-network-diagnostics"/"openshift-service-ca.crt" (24-Apr-2026 16:46:36.917) (total time: 22726ms): Apr 24 16:46:59.644069 ip-10-0-129-204 kubenswrapper[2578]: Trace[770891815]: ---"Objects listed" error: 22726ms (16:46:59.644) Apr 24 16:46:59.644069 ip-10-0-129-204 kubenswrapper[2578]: Trace[770891815]: [22.7266908s] [22.7266908s] END Apr 24 16:46:59.644159 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.644074 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Apr 24 16:46:59.644564 ip-10-0-129-204 
kubenswrapper[2578]: I0424 16:46:59.644550 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"default-dockercfg-lcsbh\"" Apr 24 16:46:59.644634 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.644595 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-bbbr5\"" Apr 24 16:46:59.644634 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.644606 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Apr 24 16:46:59.644733 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.644696 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-dockercfg-fvkdr\"" Apr 24 16:46:59.644849 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.644835 2578 trace.go:236] Trace[1971799848]: "Reflector ListAndWatch" name:object-"openshift-multus"/"metrics-daemon-secret" (24-Apr-2026 16:46:36.917) (total time: 22727ms): Apr 24 16:46:59.644849 ip-10-0-129-204 kubenswrapper[2578]: Trace[1971799848]: ---"Objects listed" error: 22727ms (16:46:59.644) Apr 24 16:46:59.644849 ip-10-0-129-204 kubenswrapper[2578]: Trace[1971799848]: [22.727498243s] [22.727498243s] END Apr 24 16:46:59.644964 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.644851 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Apr 24 16:46:59.649099 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.649081 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"openshift-service-ca.crt\"" Apr 24 16:46:59.656751 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.656736 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Apr 24 16:46:59.657128 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.657114 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Apr 24 16:46:59.657597 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.657584 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Apr 24 16:46:59.658186 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.658172 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Apr 24 16:46:59.658281 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.658203 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"kube-root-ca.crt\"" Apr 24 16:46:59.658408 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.658391 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Apr 24 16:46:59.658651 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.658634 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-dockercfg-9h4fl\"" Apr 24 16:46:59.658736 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.658687 2578 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Apr 24 16:46:59.659345 ip-10-0-129-204 kubenswrapper[2578]: I0424 
16:46:59.659328 2578 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-204.ec2.internal" event="NodeReady" Apr 24 16:46:59.664280 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.664252 2578 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Apr 24 16:46:59.664280 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.664277 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Apr 24 16:46:59.664451 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.664312 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Apr 24 16:46:59.664451 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.664426 2578 trace.go:236] Trace[1305973654]: "Reflector ListAndWatch" name:object-"openshift-network-diagnostics"/"kube-root-ca.crt" (24-Apr-2026 16:46:36.917) (total time: 22747ms): Apr 24 16:46:59.664451 ip-10-0-129-204 kubenswrapper[2578]: Trace[1305973654]: ---"Objects listed" error: 22747ms (16:46:59.664) Apr 24 16:46:59.664451 ip-10-0-129-204 kubenswrapper[2578]: Trace[1305973654]: [22.747078281s] [22.747078281s] END Apr 24 16:46:59.664451 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.664443 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Apr 24 16:46:59.664744 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.664726 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Apr 24 16:46:59.664841 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.664796 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-sa-dockercfg-8jfrg\"" Apr 24 16:46:59.664961 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.664945 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"openshift-service-ca.crt\"" Apr 24 16:46:59.665428 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.665410 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Apr 24 16:46:59.665505 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.665410 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Apr 24 16:46:59.665505 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.665411 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"node-exporter-accelerators-collector-config\"" Apr 24 16:46:59.665607 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.665583 2578 trace.go:236] Trace[1944261913]: "Reflector ListAndWatch" name:object-"openshift-multus"/"metrics-daemon-sa-dockercfg-xv2x8" (24-Apr-2026 16:46:36.917) (total time: 22748ms): Apr 24 16:46:59.665607 ip-10-0-129-204 kubenswrapper[2578]: Trace[1944261913]: ---"Objects listed" error: 22748ms (16:46:59.665) Apr 24 16:46:59.665607 ip-10-0-129-204 kubenswrapper[2578]: Trace[1944261913]: [22.748435564s] [22.748435564s] END Apr 24 16:46:59.665607 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.665585 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"konnectivity-agent\"" Apr 
24 16:46:59.665607 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.665594 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-xv2x8\"" Apr 24 16:46:59.665607 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.665584 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"metrics-client-ca\"" Apr 24 16:46:59.665862 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.665584 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Apr 24 16:46:59.666511 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.666495 2578 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Apr 24 16:46:59.667588 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.667570 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-rcps7","openshift-multus/network-metrics-daemon-8jmlx"] Apr 24 16:46:59.672091 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.672070 2578 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Apr 24 16:46:59.694340 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.694321 2578 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-204.ec2.internal" event="NodeReady" Apr 24 16:46:59.694425 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.694406 2578 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Apr 24 16:46:59.753026 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.752983 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-nvmzv" podStartSLOduration=22.7902288 podStartE2EDuration="1m29.75297024s" podCreationTimestamp="2026-04-24 16:45:30 +0000 UTC" firstStartedPulling="2026-04-24 16:45:31.157125744 +0000 UTC m=+14.753492008" lastFinishedPulling="2026-04-24 16:46:38.119867176 +0000 UTC m=+81.716233448" observedRunningTime="2026-04-24 16:46:59.731365549 +0000 UTC m=+103.327731835" watchObservedRunningTime="2026-04-24 16:46:59.75297024 +0000 UTC m=+103.349336530" Apr 24 16:46:59.753471 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.753458 2578 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-dj4h8"] Apr 24 16:46:59.756332 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.756315 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-dj4h8" Apr 24 16:46:59.756442 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.756427 2578 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-runtime-extractor-hbhwf"] Apr 24 16:46:59.759252 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.759236 2578 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-runtime-extractor-hbhwf" Apr 24 16:46:59.766977 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.766961 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-sa-dockercfg-zvkrd\"" Apr 24 16:46:59.767071 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.767034 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Apr 24 16:46:59.767147 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.767131 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-tls\"" Apr 24 16:46:59.767331 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.767315 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"openshift-service-ca.crt\"" Apr 24 16:46:59.767385 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.767342 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-root-ca.crt\"" Apr 24 16:46:59.767385 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.767365 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-6n2jd\"" Apr 24 16:46:59.767478 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.767395 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Apr 24 16:46:59.767478 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.767446 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-rbac-proxy\"" Apr 24 16:46:59.773892 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.773873 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-dj4h8"] Apr 24 16:46:59.792004 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.791966 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-9d4p6" podStartSLOduration=47.200133524 podStartE2EDuration="1m29.791954676s" podCreationTimestamp="2026-04-24 16:45:30 +0000 UTC" firstStartedPulling="2026-04-24 16:45:31.16804627 +0000 UTC m=+14.764412533" lastFinishedPulling="2026-04-24 16:46:13.759867418 +0000 UTC m=+57.356233685" observedRunningTime="2026-04-24 16:46:59.78828957 +0000 UTC m=+103.384655854" watchObservedRunningTime="2026-04-24 16:46:59.791954676 +0000 UTC m=+103.388320958" Apr 24 16:46:59.792926 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.792909 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-hbhwf"] Apr 24 16:46:59.850725 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.850695 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/bf454f3d-bcaf-4816-b706-91aac8d5a4c1-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-hbhwf\" (UID: \"bf454f3d-bcaf-4816-b706-91aac8d5a4c1\") " pod="openshift-insights/insights-runtime-extractor-hbhwf" Apr 24 16:46:59.850725 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.850726 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/bf454f3d-bcaf-4816-b706-91aac8d5a4c1-data-volume\") pod 
\"insights-runtime-extractor-hbhwf\" (UID: \"bf454f3d-bcaf-4816-b706-91aac8d5a4c1\") " pod="openshift-insights/insights-runtime-extractor-hbhwf" Apr 24 16:46:59.850867 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.850746 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/bf454f3d-bcaf-4816-b706-91aac8d5a4c1-crio-socket\") pod \"insights-runtime-extractor-hbhwf\" (UID: \"bf454f3d-bcaf-4816-b706-91aac8d5a4c1\") " pod="openshift-insights/insights-runtime-extractor-hbhwf" Apr 24 16:46:59.850867 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.850761 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/bf454f3d-bcaf-4816-b706-91aac8d5a4c1-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-hbhwf\" (UID: \"bf454f3d-bcaf-4816-b706-91aac8d5a4c1\") " pod="openshift-insights/insights-runtime-extractor-hbhwf" Apr 24 16:46:59.850867 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.850777 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0b4ca8b-4a38-48a0-a607-1d9984f02dd3-config-volume\") pod \"dns-default-dj4h8\" (UID: \"e0b4ca8b-4a38-48a0-a607-1d9984f02dd3\") " pod="openshift-dns/dns-default-dj4h8" Apr 24 16:46:59.850973 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.850876 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e0b4ca8b-4a38-48a0-a607-1d9984f02dd3-tmp-dir\") pod \"dns-default-dj4h8\" (UID: \"e0b4ca8b-4a38-48a0-a607-1d9984f02dd3\") " pod="openshift-dns/dns-default-dj4h8" Apr 24 16:46:59.850973 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.850907 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e0b4ca8b-4a38-48a0-a607-1d9984f02dd3-metrics-tls\") pod \"dns-default-dj4h8\" (UID: \"e0b4ca8b-4a38-48a0-a607-1d9984f02dd3\") " pod="openshift-dns/dns-default-dj4h8" Apr 24 16:46:59.850973 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.850924 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbxbf\" (UniqueName: \"kubernetes.io/projected/e0b4ca8b-4a38-48a0-a607-1d9984f02dd3-kube-api-access-mbxbf\") pod \"dns-default-dj4h8\" (UID: \"e0b4ca8b-4a38-48a0-a607-1d9984f02dd3\") " pod="openshift-dns/dns-default-dj4h8" Apr 24 16:46:59.850973 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.850946 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9drbz\" (UniqueName: \"kubernetes.io/projected/bf454f3d-bcaf-4816-b706-91aac8d5a4c1-kube-api-access-9drbz\") pod \"insights-runtime-extractor-hbhwf\" (UID: \"bf454f3d-bcaf-4816-b706-91aac8d5a4c1\") " pod="openshift-insights/insights-runtime-extractor-hbhwf" Apr 24 16:46:59.860680 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.860651 2578 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-xcsf7"] Apr 24 16:46:59.863454 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.863439 2578 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xcsf7" Apr 24 16:46:59.866924 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.866909 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-szcnf\"" Apr 24 16:46:59.867013 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.866916 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-fsp54" podStartSLOduration=47.259634813 podStartE2EDuration="1m29.866897498s" podCreationTimestamp="2026-04-24 16:45:30 +0000 UTC" firstStartedPulling="2026-04-24 16:45:31.138855303 +0000 UTC m=+14.735221569" lastFinishedPulling="2026-04-24 16:46:13.746117984 +0000 UTC m=+57.342484254" observedRunningTime="2026-04-24 16:46:59.866193252 +0000 UTC m=+103.462559537" watchObservedRunningTime="2026-04-24 16:46:59.866897498 +0000 UTC m=+103.463263783" Apr 24 16:46:59.867127 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.867107 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Apr 24 16:46:59.867646 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.867629 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Apr 24 16:46:59.869938 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.869923 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Apr 24 16:46:59.880463 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.880416 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xcsf7"] Apr 24 16:46:59.951408 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.951385 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-985x9\" (UniqueName: \"kubernetes.io/projected/2eb66152-aaca-4639-9b66-5bfa5656f3c4-kube-api-access-985x9\") pod \"ingress-canary-xcsf7\" (UID: \"2eb66152-aaca-4639-9b66-5bfa5656f3c4\") " pod="openshift-ingress-canary/ingress-canary-xcsf7" Apr 24 16:46:59.951493 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.951426 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/bf454f3d-bcaf-4816-b706-91aac8d5a4c1-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-hbhwf\" (UID: \"bf454f3d-bcaf-4816-b706-91aac8d5a4c1\") " pod="openshift-insights/insights-runtime-extractor-hbhwf" Apr 24 16:46:59.951493 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.951444 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/bf454f3d-bcaf-4816-b706-91aac8d5a4c1-data-volume\") pod \"insights-runtime-extractor-hbhwf\" (UID: \"bf454f3d-bcaf-4816-b706-91aac8d5a4c1\") " pod="openshift-insights/insights-runtime-extractor-hbhwf" Apr 24 16:46:59.951562 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.951534 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/bf454f3d-bcaf-4816-b706-91aac8d5a4c1-crio-socket\") pod \"insights-runtime-extractor-hbhwf\" (UID: \"bf454f3d-bcaf-4816-b706-91aac8d5a4c1\") " pod="openshift-insights/insights-runtime-extractor-hbhwf" Apr 24 16:46:59.951600 ip-10-0-129-204 kubenswrapper[2578]: I0424 
16:46:59.951567 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/bf454f3d-bcaf-4816-b706-91aac8d5a4c1-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-hbhwf\" (UID: \"bf454f3d-bcaf-4816-b706-91aac8d5a4c1\") " pod="openshift-insights/insights-runtime-extractor-hbhwf" Apr 24 16:46:59.951600 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.951592 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0b4ca8b-4a38-48a0-a607-1d9984f02dd3-config-volume\") pod \"dns-default-dj4h8\" (UID: \"e0b4ca8b-4a38-48a0-a607-1d9984f02dd3\") " pod="openshift-dns/dns-default-dj4h8" Apr 24 16:46:59.951670 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.951634 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/bf454f3d-bcaf-4816-b706-91aac8d5a4c1-crio-socket\") pod \"insights-runtime-extractor-hbhwf\" (UID: \"bf454f3d-bcaf-4816-b706-91aac8d5a4c1\") " pod="openshift-insights/insights-runtime-extractor-hbhwf" Apr 24 16:46:59.951708 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.951682 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/bf454f3d-bcaf-4816-b706-91aac8d5a4c1-data-volume\") pod \"insights-runtime-extractor-hbhwf\" (UID: \"bf454f3d-bcaf-4816-b706-91aac8d5a4c1\") " pod="openshift-insights/insights-runtime-extractor-hbhwf" Apr 24 16:46:59.951746 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.951723 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e0b4ca8b-4a38-48a0-a607-1d9984f02dd3-tmp-dir\") pod \"dns-default-dj4h8\" (UID: \"e0b4ca8b-4a38-48a0-a607-1d9984f02dd3\") " pod="openshift-dns/dns-default-dj4h8" Apr 24 16:46:59.951784 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.951749 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2eb66152-aaca-4639-9b66-5bfa5656f3c4-cert\") pod \"ingress-canary-xcsf7\" (UID: \"2eb66152-aaca-4639-9b66-5bfa5656f3c4\") " pod="openshift-ingress-canary/ingress-canary-xcsf7" Apr 24 16:46:59.951784 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.951777 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e0b4ca8b-4a38-48a0-a607-1d9984f02dd3-metrics-tls\") pod \"dns-default-dj4h8\" (UID: \"e0b4ca8b-4a38-48a0-a607-1d9984f02dd3\") " pod="openshift-dns/dns-default-dj4h8" Apr 24 16:46:59.951908 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.951825 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mbxbf\" (UniqueName: \"kubernetes.io/projected/e0b4ca8b-4a38-48a0-a607-1d9984f02dd3-kube-api-access-mbxbf\") pod \"dns-default-dj4h8\" (UID: \"e0b4ca8b-4a38-48a0-a607-1d9984f02dd3\") " pod="openshift-dns/dns-default-dj4h8" Apr 24 16:46:59.951908 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.951876 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9drbz\" (UniqueName: \"kubernetes.io/projected/bf454f3d-bcaf-4816-b706-91aac8d5a4c1-kube-api-access-9drbz\") pod \"insights-runtime-extractor-hbhwf\" (UID: \"bf454f3d-bcaf-4816-b706-91aac8d5a4c1\") " 
pod="openshift-insights/insights-runtime-extractor-hbhwf" Apr 24 16:46:59.952036 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.952016 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/bf454f3d-bcaf-4816-b706-91aac8d5a4c1-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-hbhwf\" (UID: \"bf454f3d-bcaf-4816-b706-91aac8d5a4c1\") " pod="openshift-insights/insights-runtime-extractor-hbhwf" Apr 24 16:46:59.952094 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.952051 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e0b4ca8b-4a38-48a0-a607-1d9984f02dd3-tmp-dir\") pod \"dns-default-dj4h8\" (UID: \"e0b4ca8b-4a38-48a0-a607-1d9984f02dd3\") " pod="openshift-dns/dns-default-dj4h8" Apr 24 16:46:59.952143 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.952104 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0b4ca8b-4a38-48a0-a607-1d9984f02dd3-config-volume\") pod \"dns-default-dj4h8\" (UID: \"e0b4ca8b-4a38-48a0-a607-1d9984f02dd3\") " pod="openshift-dns/dns-default-dj4h8" Apr 24 16:46:59.955470 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.955449 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/bf454f3d-bcaf-4816-b706-91aac8d5a4c1-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-hbhwf\" (UID: \"bf454f3d-bcaf-4816-b706-91aac8d5a4c1\") " pod="openshift-insights/insights-runtime-extractor-hbhwf" Apr 24 16:46:59.955543 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.955489 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e0b4ca8b-4a38-48a0-a607-1d9984f02dd3-metrics-tls\") pod \"dns-default-dj4h8\" (UID: \"e0b4ca8b-4a38-48a0-a607-1d9984f02dd3\") " pod="openshift-dns/dns-default-dj4h8" Apr 24 16:46:59.968407 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.968389 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbxbf\" (UniqueName: \"kubernetes.io/projected/e0b4ca8b-4a38-48a0-a607-1d9984f02dd3-kube-api-access-mbxbf\") pod \"dns-default-dj4h8\" (UID: \"e0b4ca8b-4a38-48a0-a607-1d9984f02dd3\") " pod="openshift-dns/dns-default-dj4h8" Apr 24 16:46:59.968801 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:46:59.968785 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9drbz\" (UniqueName: \"kubernetes.io/projected/bf454f3d-bcaf-4816-b706-91aac8d5a4c1-kube-api-access-9drbz\") pod \"insights-runtime-extractor-hbhwf\" (UID: \"bf454f3d-bcaf-4816-b706-91aac8d5a4c1\") " pod="openshift-insights/insights-runtime-extractor-hbhwf" Apr 24 16:47:00.052986 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:00.052960 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-985x9\" (UniqueName: \"kubernetes.io/projected/2eb66152-aaca-4639-9b66-5bfa5656f3c4-kube-api-access-985x9\") pod \"ingress-canary-xcsf7\" (UID: \"2eb66152-aaca-4639-9b66-5bfa5656f3c4\") " pod="openshift-ingress-canary/ingress-canary-xcsf7" Apr 24 16:47:00.053167 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:00.053090 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2eb66152-aaca-4639-9b66-5bfa5656f3c4-cert\") pod 
\"ingress-canary-xcsf7\" (UID: \"2eb66152-aaca-4639-9b66-5bfa5656f3c4\") " pod="openshift-ingress-canary/ingress-canary-xcsf7" Apr 24 16:47:00.055224 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:00.055207 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2eb66152-aaca-4639-9b66-5bfa5656f3c4-cert\") pod \"ingress-canary-xcsf7\" (UID: \"2eb66152-aaca-4639-9b66-5bfa5656f3c4\") " pod="openshift-ingress-canary/ingress-canary-xcsf7" Apr 24 16:47:00.064263 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:00.064239 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-985x9\" (UniqueName: \"kubernetes.io/projected/2eb66152-aaca-4639-9b66-5bfa5656f3c4-kube-api-access-985x9\") pod \"ingress-canary-xcsf7\" (UID: \"2eb66152-aaca-4639-9b66-5bfa5656f3c4\") " pod="openshift-ingress-canary/ingress-canary-xcsf7" Apr 24 16:47:00.066011 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:00.065997 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-dj4h8" Apr 24 16:47:00.072522 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:00.072504 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-runtime-extractor-hbhwf" Apr 24 16:47:00.172514 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:00.172356 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xcsf7" Apr 24 16:47:00.214599 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:00.214569 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-hbhwf"] Apr 24 16:47:00.215914 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:00.215892 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-dj4h8"] Apr 24 16:47:00.217775 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:47:00.217750 2578 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0b4ca8b_4a38_48a0_a607_1d9984f02dd3.slice/crio-96daf769249e18dca162bd6d287cb8959a18a76e7943e80ee26bb7a25fb9005b WatchSource:0}: Error finding container 96daf769249e18dca162bd6d287cb8959a18a76e7943e80ee26bb7a25fb9005b: Status 404 returned error can't find the container with id 96daf769249e18dca162bd6d287cb8959a18a76e7943e80ee26bb7a25fb9005b Apr 24 16:47:00.218246 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:47:00.218217 2578 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf454f3d_bcaf_4816_b706_91aac8d5a4c1.slice/crio-18c78c070a730a35f02ab869b6b127e2e4a5db75a01e1b75cd5a006726f46a68 WatchSource:0}: Error finding container 18c78c070a730a35f02ab869b6b127e2e4a5db75a01e1b75cd5a006726f46a68: Status 404 returned error can't find the container with id 18c78c070a730a35f02ab869b6b127e2e4a5db75a01e1b75cd5a006726f46a68 Apr 24 16:47:00.258743 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:00.258701 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-hbhwf" event={"ID":"bf454f3d-bcaf-4816-b706-91aac8d5a4c1","Type":"ContainerStarted","Data":"18c78c070a730a35f02ab869b6b127e2e4a5db75a01e1b75cd5a006726f46a68"} Apr 24 16:47:00.259792 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:00.259755 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dj4h8" 
event={"ID":"e0b4ca8b-4a38-48a0-a607-1d9984f02dd3","Type":"ContainerStarted","Data":"96daf769249e18dca162bd6d287cb8959a18a76e7943e80ee26bb7a25fb9005b"} Apr 24 16:47:00.300285 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:00.300210 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xcsf7"] Apr 24 16:47:00.303061 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:47:00.303038 2578 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2eb66152_aaca_4639_9b66_5bfa5656f3c4.slice/crio-dd10d40fe0dca9e27e156b0d3e0a136ffa445aa2292dae8de2aa1093a210844a WatchSource:0}: Error finding container dd10d40fe0dca9e27e156b0d3e0a136ffa445aa2292dae8de2aa1093a210844a: Status 404 returned error can't find the container with id dd10d40fe0dca9e27e156b0d3e0a136ffa445aa2292dae8de2aa1093a210844a Apr 24 16:47:01.264737 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:01.264650 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-hbhwf" event={"ID":"bf454f3d-bcaf-4816-b706-91aac8d5a4c1","Type":"ContainerStarted","Data":"79e27923c1ae232f7a28fd0fed2d70dde422d6e7cf421691a68ad09dd1a1a909"} Apr 24 16:47:01.264737 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:01.264696 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-hbhwf" event={"ID":"bf454f3d-bcaf-4816-b706-91aac8d5a4c1","Type":"ContainerStarted","Data":"3f9cbcec40d5b688ba6f6442f8251135327f589630f2ca35e9593560dd918111"} Apr 24 16:47:01.265792 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:01.265753 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xcsf7" event={"ID":"2eb66152-aaca-4639-9b66-5bfa5656f3c4","Type":"ContainerStarted","Data":"dd10d40fe0dca9e27e156b0d3e0a136ffa445aa2292dae8de2aa1093a210844a"} Apr 24 16:47:03.271733 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:03.271477 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dj4h8" event={"ID":"e0b4ca8b-4a38-48a0-a607-1d9984f02dd3","Type":"ContainerStarted","Data":"8123667aa52aa2480fdc3a3d9401e01d8055f047f379126aef1e0c84858a7dd7"} Apr 24 16:47:03.272143 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:03.271740 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dj4h8" event={"ID":"e0b4ca8b-4a38-48a0-a607-1d9984f02dd3","Type":"ContainerStarted","Data":"84f771a12fcea63d380a9b21f85694884fd298e35a2b60c3d8a2e35b780640a4"} Apr 24 16:47:03.272143 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:03.271762 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-dj4h8" Apr 24 16:47:03.272690 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:03.272670 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xcsf7" event={"ID":"2eb66152-aaca-4639-9b66-5bfa5656f3c4","Type":"ContainerStarted","Data":"c470fcccb53b95fcbfd4f6ebacc9c20dd4ba67388509648877e2fa15e31114c8"} Apr 24 16:47:03.294090 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:03.294040 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-dj4h8" podStartSLOduration=2.307885694 podStartE2EDuration="4.294027221s" podCreationTimestamp="2026-04-24 16:46:59 +0000 UTC" firstStartedPulling="2026-04-24 16:47:00.220190127 +0000 UTC m=+103.816556394" lastFinishedPulling="2026-04-24 
16:47:02.206331651 +0000 UTC m=+105.802697921" observedRunningTime="2026-04-24 16:47:03.29255538 +0000 UTC m=+106.888921666" watchObservedRunningTime="2026-04-24 16:47:03.294027221 +0000 UTC m=+106.890393548" Apr 24 16:47:03.325463 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:03.325426 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-xcsf7" podStartSLOduration=2.42012855 podStartE2EDuration="4.325413491s" podCreationTimestamp="2026-04-24 16:46:59 +0000 UTC" firstStartedPulling="2026-04-24 16:47:00.305081621 +0000 UTC m=+103.901447885" lastFinishedPulling="2026-04-24 16:47:02.210366548 +0000 UTC m=+105.806732826" observedRunningTime="2026-04-24 16:47:03.325360572 +0000 UTC m=+106.921726857" watchObservedRunningTime="2026-04-24 16:47:03.325413491 +0000 UTC m=+106.921779770" Apr 24 16:47:04.277085 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:04.277040 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-hbhwf" event={"ID":"bf454f3d-bcaf-4816-b706-91aac8d5a4c1","Type":"ContainerStarted","Data":"f0535e937f2af7ee8504752c286932724235657c625620cdd5ca56cb7e43b93c"} Apr 24 16:47:04.300094 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:04.300045 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-runtime-extractor-hbhwf" podStartSLOduration=2.286032687 podStartE2EDuration="5.300033003s" podCreationTimestamp="2026-04-24 16:46:59 +0000 UTC" firstStartedPulling="2026-04-24 16:47:00.288607198 +0000 UTC m=+103.884973461" lastFinishedPulling="2026-04-24 16:47:03.302607515 +0000 UTC m=+106.898973777" observedRunningTime="2026-04-24 16:47:04.299796034 +0000 UTC m=+107.896162342" watchObservedRunningTime="2026-04-24 16:47:04.300033003 +0000 UTC m=+107.896399288" Apr 24 16:47:13.279257 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:13.279219 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-dj4h8" Apr 24 16:47:38.807672 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:38.807622 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dda8f1f0-9635-43d2-9f82-9831f8800481-metrics-certs\") pod \"network-metrics-daemon-8jmlx\" (UID: \"dda8f1f0-9635-43d2-9f82-9831f8800481\") " pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:47:38.810770 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:38.810748 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Apr 24 16:47:38.822006 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:38.821983 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dda8f1f0-9635-43d2-9f82-9831f8800481-metrics-certs\") pod \"network-metrics-daemon-8jmlx\" (UID: \"dda8f1f0-9635-43d2-9f82-9831f8800481\") " pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:47:38.908557 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:38.908521 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5m5bc\" (UniqueName: \"kubernetes.io/projected/c8f211dc-e214-4e02-b487-47c0952e8984-kube-api-access-5m5bc\") pod \"network-check-target-rcps7\" (UID: \"c8f211dc-e214-4e02-b487-47c0952e8984\") " pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:47:38.911664 ip-10-0-129-204 
kubenswrapper[2578]: I0424 16:47:38.911649 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Apr 24 16:47:38.921436 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:38.921417 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Apr 24 16:47:38.930879 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:38.930861 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m5bc\" (UniqueName: \"kubernetes.io/projected/c8f211dc-e214-4e02-b487-47c0952e8984-kube-api-access-5m5bc\") pod \"network-check-target-rcps7\" (UID: \"c8f211dc-e214-4e02-b487-47c0952e8984\") " pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:47:39.030281 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:39.030259 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-xv2x8\"" Apr 24 16:47:39.035386 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:39.035370 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-v7jl2\"" Apr 24 16:47:39.038352 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:39.038338 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jmlx" Apr 24 16:47:39.043891 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:39.043872 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:47:39.166427 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:39.166407 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-8jmlx"] Apr 24 16:47:39.168370 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:47:39.168347 2578 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddda8f1f0_9635_43d2_9f82_9831f8800481.slice/crio-24c495e77847c3dcb8c7d1d654e41527d7bddcd5829e6b1601541f15055a7882 WatchSource:0}: Error finding container 24c495e77847c3dcb8c7d1d654e41527d7bddcd5829e6b1601541f15055a7882: Status 404 returned error can't find the container with id 24c495e77847c3dcb8c7d1d654e41527d7bddcd5829e6b1601541f15055a7882 Apr 24 16:47:39.186042 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:39.186018 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-rcps7"] Apr 24 16:47:39.189819 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:47:39.189781 2578 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8f211dc_e214_4e02_b487_47c0952e8984.slice/crio-f6c7fbe9de3fa14b12a68a78bd2a5c2c1633683f429ffe574afd70830ad6be99 WatchSource:0}: Error finding container f6c7fbe9de3fa14b12a68a78bd2a5c2c1633683f429ffe574afd70830ad6be99: Status 404 returned error can't find the container with id f6c7fbe9de3fa14b12a68a78bd2a5c2c1633683f429ffe574afd70830ad6be99 Apr 24 16:47:39.366027 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:39.365968 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-rcps7" 
event={"ID":"c8f211dc-e214-4e02-b487-47c0952e8984","Type":"ContainerStarted","Data":"f6c7fbe9de3fa14b12a68a78bd2a5c2c1633683f429ffe574afd70830ad6be99"} Apr 24 16:47:39.366779 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:39.366762 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-8jmlx" event={"ID":"dda8f1f0-9635-43d2-9f82-9831f8800481","Type":"ContainerStarted","Data":"24c495e77847c3dcb8c7d1d654e41527d7bddcd5829e6b1601541f15055a7882"} Apr 24 16:47:47.387038 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:47.386986 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-8jmlx" event={"ID":"dda8f1f0-9635-43d2-9f82-9831f8800481","Type":"ContainerStarted","Data":"fd4f94e8dc1ed66d588d8aa0211fc738bd41be08ff7909432918ec6e88b50ced"} Apr 24 16:47:48.392097 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:48.392063 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-8jmlx" event={"ID":"dda8f1f0-9635-43d2-9f82-9831f8800481","Type":"ContainerStarted","Data":"80fdbe7b960a1ea00ff2572bbfa268f235d99db53c5e8862d51fe57f9e326e7b"} Apr 24 16:47:48.410901 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:48.410852 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-8jmlx" podStartSLOduration=130.422988944 podStartE2EDuration="2m18.410839859s" podCreationTimestamp="2026-04-24 16:45:30 +0000 UTC" firstStartedPulling="2026-04-24 16:47:39.169925833 +0000 UTC m=+142.766292100" lastFinishedPulling="2026-04-24 16:47:47.157776749 +0000 UTC m=+150.754143015" observedRunningTime="2026-04-24 16:47:48.410333993 +0000 UTC m=+152.006700279" watchObservedRunningTime="2026-04-24 16:47:48.410839859 +0000 UTC m=+152.007206143" Apr 24 16:47:49.396041 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:49.396003 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-rcps7" event={"ID":"c8f211dc-e214-4e02-b487-47c0952e8984","Type":"ContainerStarted","Data":"5a55ebdff2c57c292dc5239c0ad1c04ae4785518fb79a0bc754beba20bd1d316"} Apr 24 16:47:49.396406 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:49.396149 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:47:49.416605 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:47:49.416562 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-target-rcps7" podStartSLOduration=129.877290684 podStartE2EDuration="2m19.416552383s" podCreationTimestamp="2026-04-24 16:45:30 +0000 UTC" firstStartedPulling="2026-04-24 16:47:39.191466593 +0000 UTC m=+142.787832859" lastFinishedPulling="2026-04-24 16:47:48.730728295 +0000 UTC m=+152.327094558" observedRunningTime="2026-04-24 16:47:49.41569527 +0000 UTC m=+153.012061566" watchObservedRunningTime="2026-04-24 16:47:49.416552383 +0000 UTC m=+153.012918670" Apr 24 16:48:03.974948 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:03.974834 2578 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kube-system/global-pull-secret-syncer-62rlm"] Apr 24 16:48:03.977741 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:03.977725 2578 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-62rlm" Apr 24 16:48:03.980143 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:03.980123 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"original-pull-secret\"" Apr 24 16:48:03.984405 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:03.984385 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-62rlm"] Apr 24 16:48:04.052130 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:04.052099 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/2fb12526-7d12-4304-a9c9-f8975b13ac2b-original-pull-secret\") pod \"global-pull-secret-syncer-62rlm\" (UID: \"2fb12526-7d12-4304-a9c9-f8975b13ac2b\") " pod="kube-system/global-pull-secret-syncer-62rlm" Apr 24 16:48:04.052130 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:04.052134 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/2fb12526-7d12-4304-a9c9-f8975b13ac2b-kubelet-config\") pod \"global-pull-secret-syncer-62rlm\" (UID: \"2fb12526-7d12-4304-a9c9-f8975b13ac2b\") " pod="kube-system/global-pull-secret-syncer-62rlm" Apr 24 16:48:04.052335 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:04.052151 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/2fb12526-7d12-4304-a9c9-f8975b13ac2b-dbus\") pod \"global-pull-secret-syncer-62rlm\" (UID: \"2fb12526-7d12-4304-a9c9-f8975b13ac2b\") " pod="kube-system/global-pull-secret-syncer-62rlm" Apr 24 16:48:04.156454 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:04.153328 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/2fb12526-7d12-4304-a9c9-f8975b13ac2b-original-pull-secret\") pod \"global-pull-secret-syncer-62rlm\" (UID: \"2fb12526-7d12-4304-a9c9-f8975b13ac2b\") " pod="kube-system/global-pull-secret-syncer-62rlm" Apr 24 16:48:04.156454 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:04.153489 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/2fb12526-7d12-4304-a9c9-f8975b13ac2b-kubelet-config\") pod \"global-pull-secret-syncer-62rlm\" (UID: \"2fb12526-7d12-4304-a9c9-f8975b13ac2b\") " pod="kube-system/global-pull-secret-syncer-62rlm" Apr 24 16:48:04.156454 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:04.153534 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/2fb12526-7d12-4304-a9c9-f8975b13ac2b-dbus\") pod \"global-pull-secret-syncer-62rlm\" (UID: \"2fb12526-7d12-4304-a9c9-f8975b13ac2b\") " pod="kube-system/global-pull-secret-syncer-62rlm" Apr 24 16:48:04.156454 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:04.153779 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/2fb12526-7d12-4304-a9c9-f8975b13ac2b-dbus\") pod \"global-pull-secret-syncer-62rlm\" (UID: \"2fb12526-7d12-4304-a9c9-f8975b13ac2b\") " pod="kube-system/global-pull-secret-syncer-62rlm" Apr 24 16:48:04.156454 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:04.153885 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/2fb12526-7d12-4304-a9c9-f8975b13ac2b-kubelet-config\") pod \"global-pull-secret-syncer-62rlm\" (UID: \"2fb12526-7d12-4304-a9c9-f8975b13ac2b\") " pod="kube-system/global-pull-secret-syncer-62rlm" Apr 24 16:48:04.158215 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:04.158193 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/2fb12526-7d12-4304-a9c9-f8975b13ac2b-original-pull-secret\") pod \"global-pull-secret-syncer-62rlm\" (UID: \"2fb12526-7d12-4304-a9c9-f8975b13ac2b\") " pod="kube-system/global-pull-secret-syncer-62rlm" Apr 24 16:48:04.286066 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:04.286009 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-62rlm" Apr 24 16:48:04.394220 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:04.394195 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-62rlm"] Apr 24 16:48:04.397268 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:48:04.397241 2578 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fb12526_7d12_4304_a9c9_f8975b13ac2b.slice/crio-cce990f67000f1e4b76855ce0d50f79053967989a214ad3bd16fcde1e435e88e WatchSource:0}: Error finding container cce990f67000f1e4b76855ce0d50f79053967989a214ad3bd16fcde1e435e88e: Status 404 returned error can't find the container with id cce990f67000f1e4b76855ce0d50f79053967989a214ad3bd16fcde1e435e88e Apr 24 16:48:04.440847 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:04.440820 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-62rlm" event={"ID":"2fb12526-7d12-4304-a9c9-f8975b13ac2b","Type":"ContainerStarted","Data":"cce990f67000f1e4b76855ce0d50f79053967989a214ad3bd16fcde1e435e88e"} Apr 24 16:48:08.453502 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:08.453470 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-62rlm" event={"ID":"2fb12526-7d12-4304-a9c9-f8975b13ac2b","Type":"ContainerStarted","Data":"2dfde39b1235985308268b1309ceccd3f7f027a33c1d25d3b747fe5ed716044e"} Apr 24 16:48:08.472822 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:08.472760 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/global-pull-secret-syncer-62rlm" podStartSLOduration=1.863164347 podStartE2EDuration="5.472745873s" podCreationTimestamp="2026-04-24 16:48:03 +0000 UTC" firstStartedPulling="2026-04-24 16:48:04.3988888 +0000 UTC m=+167.995255063" lastFinishedPulling="2026-04-24 16:48:08.008470325 +0000 UTC m=+171.604836589" observedRunningTime="2026-04-24 16:48:08.472459156 +0000 UTC m=+172.068825444" watchObservedRunningTime="2026-04-24 16:48:08.472745873 +0000 UTC m=+172.069112158" Apr 24 16:48:20.401122 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:20.401090 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-rcps7" Apr 24 16:48:26.794662 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:26.794630 2578 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf"] Apr 24 16:48:26.797625 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:26.797607 2578 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf" Apr 24 16:48:26.802213 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:26.802193 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Apr 24 16:48:26.803428 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:26.803402 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-v8rfj\"" Apr 24 16:48:26.803592 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:26.803427 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Apr 24 16:48:26.811197 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:26.811170 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf"] Apr 24 16:48:26.895872 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:26.895846 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f16ae908-6ec0-41b9-bb9a-b98dc9223b7f-util\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf\" (UID: \"f16ae908-6ec0-41b9-bb9a-b98dc9223b7f\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf" Apr 24 16:48:26.895996 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:26.895879 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f16ae908-6ec0-41b9-bb9a-b98dc9223b7f-bundle\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf\" (UID: \"f16ae908-6ec0-41b9-bb9a-b98dc9223b7f\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf" Apr 24 16:48:26.895996 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:26.895914 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9mqx\" (UniqueName: \"kubernetes.io/projected/f16ae908-6ec0-41b9-bb9a-b98dc9223b7f-kube-api-access-b9mqx\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf\" (UID: \"f16ae908-6ec0-41b9-bb9a-b98dc9223b7f\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf" Apr 24 16:48:26.996975 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:26.996949 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f16ae908-6ec0-41b9-bb9a-b98dc9223b7f-bundle\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf\" (UID: \"f16ae908-6ec0-41b9-bb9a-b98dc9223b7f\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf" Apr 24 16:48:26.997110 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:26.996992 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b9mqx\" (UniqueName: \"kubernetes.io/projected/f16ae908-6ec0-41b9-bb9a-b98dc9223b7f-kube-api-access-b9mqx\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf\" (UID: \"f16ae908-6ec0-41b9-bb9a-b98dc9223b7f\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf" Apr 24 16:48:26.997110 ip-10-0-129-204 kubenswrapper[2578]: I0424 
16:48:26.997023 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f16ae908-6ec0-41b9-bb9a-b98dc9223b7f-util\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf\" (UID: \"f16ae908-6ec0-41b9-bb9a-b98dc9223b7f\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf" Apr 24 16:48:26.997356 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:26.997332 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f16ae908-6ec0-41b9-bb9a-b98dc9223b7f-bundle\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf\" (UID: \"f16ae908-6ec0-41b9-bb9a-b98dc9223b7f\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf" Apr 24 16:48:26.997416 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:26.997397 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f16ae908-6ec0-41b9-bb9a-b98dc9223b7f-util\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf\" (UID: \"f16ae908-6ec0-41b9-bb9a-b98dc9223b7f\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf" Apr 24 16:48:27.007701 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:27.007681 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9mqx\" (UniqueName: \"kubernetes.io/projected/f16ae908-6ec0-41b9-bb9a-b98dc9223b7f-kube-api-access-b9mqx\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf\" (UID: \"f16ae908-6ec0-41b9-bb9a-b98dc9223b7f\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf" Apr 24 16:48:27.106336 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:27.106283 2578 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf" Apr 24 16:48:27.221612 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:27.221584 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf"] Apr 24 16:48:27.225183 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:48:27.225140 2578 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf16ae908_6ec0_41b9_bb9a_b98dc9223b7f.slice/crio-484bae8b3a1cf37ceb35b5da92bab36326716316d7beceded054189662d27179 WatchSource:0}: Error finding container 484bae8b3a1cf37ceb35b5da92bab36326716316d7beceded054189662d27179: Status 404 returned error can't find the container with id 484bae8b3a1cf37ceb35b5da92bab36326716316d7beceded054189662d27179 Apr 24 16:48:27.508756 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:27.508722 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf" event={"ID":"f16ae908-6ec0-41b9-bb9a-b98dc9223b7f","Type":"ContainerStarted","Data":"484bae8b3a1cf37ceb35b5da92bab36326716316d7beceded054189662d27179"} Apr 24 16:48:33.525378 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:33.525340 2578 generic.go:358] "Generic (PLEG): container finished" podID="f16ae908-6ec0-41b9-bb9a-b98dc9223b7f" containerID="6854422fe2cbec909cb30728389d1f86281a598dd4ffbfa5bd737005b7f0484c" exitCode=0 Apr 24 16:48:33.525745 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:33.525386 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf" event={"ID":"f16ae908-6ec0-41b9-bb9a-b98dc9223b7f","Type":"ContainerDied","Data":"6854422fe2cbec909cb30728389d1f86281a598dd4ffbfa5bd737005b7f0484c"} Apr 24 16:48:35.533456 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:35.533428 2578 generic.go:358] "Generic (PLEG): container finished" podID="f16ae908-6ec0-41b9-bb9a-b98dc9223b7f" containerID="7dd9a572f02d61a748cd0d4b632869fcc39e0eaf5dc11cb15f87696e76284faf" exitCode=0 Apr 24 16:48:35.533836 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:35.533499 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf" event={"ID":"f16ae908-6ec0-41b9-bb9a-b98dc9223b7f","Type":"ContainerDied","Data":"7dd9a572f02d61a748cd0d4b632869fcc39e0eaf5dc11cb15f87696e76284faf"} Apr 24 16:48:41.552214 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:41.552187 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf" event={"ID":"f16ae908-6ec0-41b9-bb9a-b98dc9223b7f","Type":"ContainerStarted","Data":"ebea65544607449328565ce5c49cef26806b2d3fb13edc2b13fc0c06b577e763"} Apr 24 16:48:51.582592 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:51.582546 2578 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf_f16ae908-6ec0-41b9-bb9a-b98dc9223b7f/extract/0.log" Apr 24 16:48:51.583243 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:51.583217 2578 generic.go:358] "Generic (PLEG): container finished" podID="f16ae908-6ec0-41b9-bb9a-b98dc9223b7f" containerID="ebea65544607449328565ce5c49cef26806b2d3fb13edc2b13fc0c06b577e763" exitCode=1 Apr 24 16:48:51.583311 
ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:51.583256 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf" event={"ID":"f16ae908-6ec0-41b9-bb9a-b98dc9223b7f","Type":"ContainerDied","Data":"ebea65544607449328565ce5c49cef26806b2d3fb13edc2b13fc0c06b577e763"} Apr 24 16:48:52.708229 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:52.708208 2578 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf_f16ae908-6ec0-41b9-bb9a-b98dc9223b7f/extract/0.log" Apr 24 16:48:52.708840 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:52.708803 2578 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf" Apr 24 16:48:52.781998 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:52.781975 2578 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f16ae908-6ec0-41b9-bb9a-b98dc9223b7f-util\") pod \"f16ae908-6ec0-41b9-bb9a-b98dc9223b7f\" (UID: \"f16ae908-6ec0-41b9-bb9a-b98dc9223b7f\") " Apr 24 16:48:52.782087 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:52.782040 2578 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f16ae908-6ec0-41b9-bb9a-b98dc9223b7f-bundle\") pod \"f16ae908-6ec0-41b9-bb9a-b98dc9223b7f\" (UID: \"f16ae908-6ec0-41b9-bb9a-b98dc9223b7f\") " Apr 24 16:48:52.782087 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:52.782073 2578 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9mqx\" (UniqueName: \"kubernetes.io/projected/f16ae908-6ec0-41b9-bb9a-b98dc9223b7f-kube-api-access-b9mqx\") pod \"f16ae908-6ec0-41b9-bb9a-b98dc9223b7f\" (UID: \"f16ae908-6ec0-41b9-bb9a-b98dc9223b7f\") " Apr 24 16:48:52.782597 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:52.782575 2578 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f16ae908-6ec0-41b9-bb9a-b98dc9223b7f-bundle" (OuterVolumeSpecName: "bundle") pod "f16ae908-6ec0-41b9-bb9a-b98dc9223b7f" (UID: "f16ae908-6ec0-41b9-bb9a-b98dc9223b7f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 24 16:48:52.784161 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:52.784138 2578 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f16ae908-6ec0-41b9-bb9a-b98dc9223b7f-kube-api-access-b9mqx" (OuterVolumeSpecName: "kube-api-access-b9mqx") pod "f16ae908-6ec0-41b9-bb9a-b98dc9223b7f" (UID: "f16ae908-6ec0-41b9-bb9a-b98dc9223b7f"). InnerVolumeSpecName "kube-api-access-b9mqx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 16:48:52.786506 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:52.786487 2578 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f16ae908-6ec0-41b9-bb9a-b98dc9223b7f-util" (OuterVolumeSpecName: "util") pod "f16ae908-6ec0-41b9-bb9a-b98dc9223b7f" (UID: "f16ae908-6ec0-41b9-bb9a-b98dc9223b7f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 24 16:48:52.882539 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:52.882521 2578 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f16ae908-6ec0-41b9-bb9a-b98dc9223b7f-util\") on node \"ip-10-0-129-204.ec2.internal\" DevicePath \"\"" Apr 24 16:48:52.882539 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:52.882540 2578 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f16ae908-6ec0-41b9-bb9a-b98dc9223b7f-bundle\") on node \"ip-10-0-129-204.ec2.internal\" DevicePath \"\"" Apr 24 16:48:52.882650 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:52.882550 2578 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b9mqx\" (UniqueName: \"kubernetes.io/projected/f16ae908-6ec0-41b9-bb9a-b98dc9223b7f-kube-api-access-b9mqx\") on node \"ip-10-0-129-204.ec2.internal\" DevicePath \"\"" Apr 24 16:48:53.590243 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:53.590214 2578 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf_f16ae908-6ec0-41b9-bb9a-b98dc9223b7f/extract/0.log" Apr 24 16:48:53.590825 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:53.590783 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf" event={"ID":"f16ae908-6ec0-41b9-bb9a-b98dc9223b7f","Type":"ContainerDied","Data":"484bae8b3a1cf37ceb35b5da92bab36326716316d7beceded054189662d27179"} Apr 24 16:48:53.590915 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:53.590837 2578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="484bae8b3a1cf37ceb35b5da92bab36326716316d7beceded054189662d27179" Apr 24 16:48:53.590915 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:48:53.590872 2578 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf" Apr 24 16:48:54.551607 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:48:54.551568 2578 controller.go:195] "Failed to update lease" err="Put \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-129-204.ec2.internal?timeout=10s\": context deadline exceeded" Apr 24 16:49:04.552484 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:49:04.552443 2578 controller.go:195] "Failed to update lease" err="Put \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-129-204.ec2.internal?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 24 16:49:04.665644 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:49:04.665539 2578 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-24T16:48:54Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-24T16:48:54Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-24T16:48:54Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-24T16:48:54Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21aab62140b42b6dc9b5c8143084d89ee3e938eba8811eb0479fc2b6ad6bbd6e\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c303211b1d12d2daa9d18ed470194f6ce86e46174cc35d12b38f06834cb65cb0\\\"],\\\"sizeBytes\\\":1592330346},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a61807a85f6a37f17eb42644bbc038bc04fc489626435da28b0d2c4a30f7e02a\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da4bec2f08680a3155ddcbb96f8594244976dae6fc08fc0f5878c4b0a456b92e\\\"],\\\"sizeBytes\\\":1267137864},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ff861a4f4064f34ed8215c549b58ea833762ff00985f897190743095344c8b2\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b1a64699b0d35f7d206a46217f6b854077ea5e4524b566ded00c64cc85d4c1be\\\"],\\\"sizeBytes\\\":1065600018},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:99acb485c40736a41dca54d0a983d561e9f0cd87b0a3256d1e5ce0e0d45174b6\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1fc1fcb9645517ab568f2e99b25ded04cfb3971b75bf72188b75347d2808c7b\\\"],\\\"sizeBytes\\\":1065006420},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4ad1f767f2f48a2db76b34811c21cb04afb68e95ef143d2061869deea627a11a\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a3a48b734b960f0231b8efb31ec3c63e746255e8d9879e908af02332df60533d\\\"],\\\"sizeBytes\\\":977364430},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c4d9f8fd250636b548d533d8f0af8bd5494ad6e5026569cefc634f0283d50df\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:edd7b883364dcfd9a811079ba1b6106d36063c1dce522a7602a646fc54160570\\\"],\\\"sizeBytes\\\":974678236},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4694
46113dc27d84c040c66620f3bbb42aa8aeee7bb3a0a6b6cb374aa5b386ba\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f596c54e96ab5a345df7a8cf1a14c953d39b3b43423c6b3002ba98df2c2fd0a2\\\"],\\\"sizeBytes\\\":884076775},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:87dabed0efcf4f363bbd86487833d817b60cae8e78db0a091305001f3040ea4b\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a86fd02d09596be124562146df9ab5bf33cd7cdfde29f701524b250a0e8beec0\\\"],\\\"sizeBytes\\\":753864795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77f7e33000484395db2216d431d7f91158cbb0ddee564b65288f4ec47e3188b8\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:786bf1f34d3636f95860ebe748f9dc62b84102c612a5b21ae6750c52e9eea253\\\"],\\\"sizeBytes\\\":727300480},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f2123db9953f98da0e43c266e6a8a070bf221533b995f87b7e358cc7498ca6d\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:96316433550661db3ef74c1200d3edc0ec9d0b87f2b41589aa7b5e841b6660e3\\\"],\\\"sizeBytes\\\":701151772},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:321285dca5a2f9b911e21badd1e51ce49841ddc45c5c859b3a29f7982d7376cb\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c530f8874aa89acf6d1834480b89067db882a7a0706e37c8fd9539a4401fdff0\\\"],\\\"sizeBytes\\\":644526840},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24a4540aecd65dc2af9b2023150dfb2d385169654f781efe70df51c623076d78\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:331caf7efdfbf739b1585570e0004ebd8b5301a6977fbc7b2c64a07475354bc8\\\"],\\\"sizeBytes\\\":534708291},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82dc461ff286831f7476efc8de45fd918b894d4a80d9c285e9a9141fe43b993b\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f7e28d9938f33314269d8079649373b4befb36895b8636c4c2ec3c0fee47c91c\\\"],\\\"sizeBytes\\\":533474192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:abb4d487159fbc9b7148c690cd3a6ee638680f8f879ff6195ca1be5b393705b0\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c98201142213b52a3c1909f45800b5974157672377ecb8c102621ef164337008\\\"],\\\"sizeBytes\\\":514965743},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f7bf484ae9370ade47453d2e8dd49774694efed83f8431453db8965f642e63b\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98c903cc8c2ea492f4c9047febaa42525c20c5b414d4de2ce3df5eb65ef899e2\\\"],\\\"sizeBytes\\\":514858876},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:294af5c64228434d1ed6ee8ea3ac802e3c999aa847223e3b2efa18425a9fe421\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d97bd10b7c241845d0ed15e34f8d45e82126c1f184316dea148ffabc1cd670a\\\"],\\\"sizeBytes\\\":488332864},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:693650db31be5a14163035ec50174ac9b8d664d327d538eeb3e0c131e16f88c0\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c862e4b31529b530f930a8d0e2a75b53d2092f392a29d037d0312169e1d4a1ac\\\"],\\\"sizeBytes\\\":480938200},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:44136b348d0b6fc6ca07bd61902f4901177c2185045a72b42b2a1d2ad050e0f9\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e5d90e04210b2195777322c3270bbeb4397c72a84b5945ccccbb258ed770fb\\\"],\\\"sizeBytes\\\":480736321},{\\\"names\\\":[\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:c12cfce95cb7b52d129eec71dec91a2cd7e820895ce1b9a111d68e3e3909aa78\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce654d8c5680faaa440b4a68965a0a29cfc189b82420004440da6762273538b2\\\"],\\\"sizeBytes\\\":480669231},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9b333f47df911dd05a9a10cb8ea988e97d3174490b348b8e46a161d9581d64\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:38b41ae697f031205813679347380d7f258be2a57902ad4494285782a241086b\\\"],\\\"sizeBytes\\\":474198918},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c1b871a1e7148de8d1101e925186df33318adc5adffbaba3f2f13af71b08367\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\\\"],\\\"sizeBytes\\\":468435751},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4d914876eb0cd2cf9c345582cdc1a5cf4803a5850ee766b875b8877b5c776df9\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58fb4327e682995ba0eee3fff6d4f4e228f8c4ff1d08b2822b54dee2db41ebf8\\\"],\\\"sizeBytes\\\":450507899},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f7b139fc67972daf070411a2137da81f179d753ddaafa8d3c791165a9564dff\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49ae6b645f147135aff6d0f464fcd64972abf57733f027318ca7376691eece01\\\"],\\\"sizeBytes\\\":426505480},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:165f05fdd7b633269db2465df57b674feec3a050388e931c6a481546e7b63ae9\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:232019e2eb8e13139570277b223ff822086fa83edc73958cbf919d6b57118068\\\"],\\\"sizeBytes\\\":426337527},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8b4514b20ab8125dc4f2ee9661f8363c837031926b40e7c54a36d1efa08456d1\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d87b9fedbc92cc502b5f435d9d5798507256bad49eda2040ac3645623616b5f5\\\"],\\\"sizeBytes\\\":420585449},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91034a2e8fa729a060ad18831a3c6e5de5d2b7b3de437b198ddc24fcb724dcf6\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8f85524150d750c02366f1cff4380fbe657bea321e18b6f2c12c16153bae7e0\\\"],\\\"sizeBytes\\\":412926967},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:18d788b8deb049d24dfdb101371f6d2211a5e731bacd64b08adb97f66b6c4eb1\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43d7e5fe91598427c1fff01aac179d8add7051f71a53a126648cd68ae5d2435f\\\"],\\\"sizeBytes\\\":408523640},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69239195f3911c73a84a911eed79c9d51d0a896f5f3405f8511f52738740d044\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f3fc18a785145e05d543030637e66298ab943dae18b52aa448f6023bd0d8151\\\"],\\\"sizeBytes\\\":405607150},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76a0009edee9ca69023b834b7eff2d2885fc5d8744dc34a058abc09ca6e45518\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8cfe89231412ff3ee8cb6207fa0be33cad0f08e88c9c0f1e9f7e8c6f14d6715\\\"],\\\"sizeBytes\\\":396599503},{\\\"names\\\":[\\\"registry.redhat.io/custom-metrics-autoscaler/custom-metrics-autoscaler-operator-bundle@sha256:e746b1aafcdcd82a6d2d069478d2870ada48c9f026d3119fc0977b333138c4ba\\\"],\\\"sizeBytes\\\":108540851}]}}\" for node \"ip-10-0-129-204.ec2.internal\": Patch 
\"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/api/v1/nodes/ip-10-0-129-204.ec2.internal/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 24 16:49:07.756048 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:49:07.756010 2578 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"ip-10-0-129-204.ec2.internal\": the object has been modified; please apply your changes to the latest version and try again" Apr 24 16:49:07.774768 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:07.774724 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf" podStartSLOduration=27.531075008 podStartE2EDuration="41.774711973s" podCreationTimestamp="2026-04-24 16:48:26 +0000 UTC" firstStartedPulling="2026-04-24 16:48:27.227020543 +0000 UTC m=+190.823386811" lastFinishedPulling="2026-04-24 16:48:41.470657504 +0000 UTC m=+205.067023776" observedRunningTime="2026-04-24 16:49:07.77425883 +0000 UTC m=+231.370625131" watchObservedRunningTime="2026-04-24 16:49:07.774711973 +0000 UTC m=+231.371078275" Apr 24 16:49:08.812317 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:08.812285 2578 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn"] Apr 24 16:49:08.812681 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:08.812507 2578 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f16ae908-6ec0-41b9-bb9a-b98dc9223b7f" containerName="pull" Apr 24 16:49:08.812681 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:08.812518 2578 state_mem.go:107] "Deleted CPUSet assignment" podUID="f16ae908-6ec0-41b9-bb9a-b98dc9223b7f" containerName="pull" Apr 24 16:49:08.812681 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:08.812528 2578 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f16ae908-6ec0-41b9-bb9a-b98dc9223b7f" containerName="util" Apr 24 16:49:08.812681 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:08.812533 2578 state_mem.go:107] "Deleted CPUSet assignment" podUID="f16ae908-6ec0-41b9-bb9a-b98dc9223b7f" containerName="util" Apr 24 16:49:08.812681 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:08.812544 2578 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f16ae908-6ec0-41b9-bb9a-b98dc9223b7f" containerName="extract" Apr 24 16:49:08.812681 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:08.812549 2578 state_mem.go:107] "Deleted CPUSet assignment" podUID="f16ae908-6ec0-41b9-bb9a-b98dc9223b7f" containerName="extract" Apr 24 16:49:08.812681 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:08.812586 2578 memory_manager.go:356] "RemoveStaleState removing state" podUID="f16ae908-6ec0-41b9-bb9a-b98dc9223b7f" containerName="extract" Apr 24 16:49:08.819213 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:08.819192 2578 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn" Apr 24 16:49:08.823991 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:08.823968 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-v8rfj\"" Apr 24 16:49:08.824132 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:08.824040 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Apr 24 16:49:08.824132 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:08.824079 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Apr 24 16:49:08.833002 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:08.832983 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn"] Apr 24 16:49:08.877025 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:08.877002 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c04dd1f9-be1c-4236-9fef-7909f0890b0d-util\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn\" (UID: \"c04dd1f9-be1c-4236-9fef-7909f0890b0d\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn" Apr 24 16:49:08.877133 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:08.877033 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c04dd1f9-be1c-4236-9fef-7909f0890b0d-bundle\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn\" (UID: \"c04dd1f9-be1c-4236-9fef-7909f0890b0d\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn" Apr 24 16:49:08.877133 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:08.877050 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9bkf\" (UniqueName: \"kubernetes.io/projected/c04dd1f9-be1c-4236-9fef-7909f0890b0d-kube-api-access-w9bkf\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn\" (UID: \"c04dd1f9-be1c-4236-9fef-7909f0890b0d\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn" Apr 24 16:49:08.978171 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:08.978142 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c04dd1f9-be1c-4236-9fef-7909f0890b0d-bundle\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn\" (UID: \"c04dd1f9-be1c-4236-9fef-7909f0890b0d\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn" Apr 24 16:49:08.978171 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:08.978172 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w9bkf\" (UniqueName: \"kubernetes.io/projected/c04dd1f9-be1c-4236-9fef-7909f0890b0d-kube-api-access-w9bkf\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn\" (UID: \"c04dd1f9-be1c-4236-9fef-7909f0890b0d\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn" Apr 24 16:49:08.978363 ip-10-0-129-204 kubenswrapper[2578]: I0424 
16:49:08.978310 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c04dd1f9-be1c-4236-9fef-7909f0890b0d-util\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn\" (UID: \"c04dd1f9-be1c-4236-9fef-7909f0890b0d\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn" Apr 24 16:49:08.978507 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:08.978487 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c04dd1f9-be1c-4236-9fef-7909f0890b0d-bundle\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn\" (UID: \"c04dd1f9-be1c-4236-9fef-7909f0890b0d\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn" Apr 24 16:49:08.978615 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:08.978598 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c04dd1f9-be1c-4236-9fef-7909f0890b0d-util\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn\" (UID: \"c04dd1f9-be1c-4236-9fef-7909f0890b0d\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn" Apr 24 16:49:08.989027 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:08.989007 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9bkf\" (UniqueName: \"kubernetes.io/projected/c04dd1f9-be1c-4236-9fef-7909f0890b0d-kube-api-access-w9bkf\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn\" (UID: \"c04dd1f9-be1c-4236-9fef-7909f0890b0d\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn" Apr 24 16:49:09.128225 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:09.128207 2578 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn" Apr 24 16:49:09.242797 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:09.242774 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn"] Apr 24 16:49:09.245352 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:49:09.245323 2578 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc04dd1f9_be1c_4236_9fef_7909f0890b0d.slice/crio-5da63a1737952971a61f7b8968962b09cb273777d33955b42281edbbe13df6e9 WatchSource:0}: Error finding container 5da63a1737952971a61f7b8968962b09cb273777d33955b42281edbbe13df6e9: Status 404 returned error can't find the container with id 5da63a1737952971a61f7b8968962b09cb273777d33955b42281edbbe13df6e9 Apr 24 16:49:09.634277 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:09.634245 2578 generic.go:358] "Generic (PLEG): container finished" podID="c04dd1f9-be1c-4236-9fef-7909f0890b0d" containerID="5db7491bb1c34cc6da6237745136f56982919f504ede54e206dfe04b55bc881f" exitCode=0 Apr 24 16:49:09.634407 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:09.634300 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn" event={"ID":"c04dd1f9-be1c-4236-9fef-7909f0890b0d","Type":"ContainerDied","Data":"5db7491bb1c34cc6da6237745136f56982919f504ede54e206dfe04b55bc881f"} Apr 24 16:49:09.634407 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:09.634321 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn" event={"ID":"c04dd1f9-be1c-4236-9fef-7909f0890b0d","Type":"ContainerStarted","Data":"5da63a1737952971a61f7b8968962b09cb273777d33955b42281edbbe13df6e9"} Apr 24 16:49:10.638475 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:10.638433 2578 generic.go:358] "Generic (PLEG): container finished" podID="c04dd1f9-be1c-4236-9fef-7909f0890b0d" containerID="e241b433e7ef7a783b4580222080fe139493b5d13fcdba66a614f5a9d5756133" exitCode=0 Apr 24 16:49:10.638475 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:10.638478 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn" event={"ID":"c04dd1f9-be1c-4236-9fef-7909f0890b0d","Type":"ContainerDied","Data":"e241b433e7ef7a783b4580222080fe139493b5d13fcdba66a614f5a9d5756133"} Apr 24 16:49:11.642719 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:11.642677 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn" event={"ID":"c04dd1f9-be1c-4236-9fef-7909f0890b0d","Type":"ContainerStarted","Data":"648bbd91d6566d5df1fd98975690795dfc3948b2fc5739040c355d88be520517"} Apr 24 16:49:21.670945 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:21.670915 2578 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn_c04dd1f9-be1c-4236-9fef-7909f0890b0d/extract/0.log" Apr 24 16:49:21.671619 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:21.671592 2578 generic.go:358] "Generic (PLEG): container finished" podID="c04dd1f9-be1c-4236-9fef-7909f0890b0d" containerID="648bbd91d6566d5df1fd98975690795dfc3948b2fc5739040c355d88be520517" exitCode=1 Apr 24 16:49:21.671676 
ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:21.671653 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn" event={"ID":"c04dd1f9-be1c-4236-9fef-7909f0890b0d","Type":"ContainerDied","Data":"648bbd91d6566d5df1fd98975690795dfc3948b2fc5739040c355d88be520517"} Apr 24 16:49:22.793697 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:22.793677 2578 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn_c04dd1f9-be1c-4236-9fef-7909f0890b0d/extract/0.log" Apr 24 16:49:22.794237 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:22.794220 2578 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn" Apr 24 16:49:22.863599 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:22.863578 2578 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c04dd1f9-be1c-4236-9fef-7909f0890b0d-bundle\") pod \"c04dd1f9-be1c-4236-9fef-7909f0890b0d\" (UID: \"c04dd1f9-be1c-4236-9fef-7909f0890b0d\") " Apr 24 16:49:22.863674 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:22.863607 2578 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c04dd1f9-be1c-4236-9fef-7909f0890b0d-util\") pod \"c04dd1f9-be1c-4236-9fef-7909f0890b0d\" (UID: \"c04dd1f9-be1c-4236-9fef-7909f0890b0d\") " Apr 24 16:49:22.863674 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:22.863649 2578 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9bkf\" (UniqueName: \"kubernetes.io/projected/c04dd1f9-be1c-4236-9fef-7909f0890b0d-kube-api-access-w9bkf\") pod \"c04dd1f9-be1c-4236-9fef-7909f0890b0d\" (UID: \"c04dd1f9-be1c-4236-9fef-7909f0890b0d\") " Apr 24 16:49:22.864156 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:22.864125 2578 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c04dd1f9-be1c-4236-9fef-7909f0890b0d-bundle" (OuterVolumeSpecName: "bundle") pod "c04dd1f9-be1c-4236-9fef-7909f0890b0d" (UID: "c04dd1f9-be1c-4236-9fef-7909f0890b0d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 24 16:49:22.865567 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:22.865546 2578 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c04dd1f9-be1c-4236-9fef-7909f0890b0d-kube-api-access-w9bkf" (OuterVolumeSpecName: "kube-api-access-w9bkf") pod "c04dd1f9-be1c-4236-9fef-7909f0890b0d" (UID: "c04dd1f9-be1c-4236-9fef-7909f0890b0d"). InnerVolumeSpecName "kube-api-access-w9bkf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 16:49:22.870403 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:22.870369 2578 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c04dd1f9-be1c-4236-9fef-7909f0890b0d-util" (OuterVolumeSpecName: "util") pod "c04dd1f9-be1c-4236-9fef-7909f0890b0d" (UID: "c04dd1f9-be1c-4236-9fef-7909f0890b0d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 24 16:49:22.964033 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:22.963985 2578 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c04dd1f9-be1c-4236-9fef-7909f0890b0d-bundle\") on node \"ip-10-0-129-204.ec2.internal\" DevicePath \"\"" Apr 24 16:49:22.964033 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:22.964005 2578 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c04dd1f9-be1c-4236-9fef-7909f0890b0d-util\") on node \"ip-10-0-129-204.ec2.internal\" DevicePath \"\"" Apr 24 16:49:22.964033 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:22.964015 2578 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w9bkf\" (UniqueName: \"kubernetes.io/projected/c04dd1f9-be1c-4236-9fef-7909f0890b0d-kube-api-access-w9bkf\") on node \"ip-10-0-129-204.ec2.internal\" DevicePath \"\"" Apr 24 16:49:23.678930 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:23.678900 2578 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn_c04dd1f9-be1c-4236-9fef-7909f0890b0d/extract/0.log" Apr 24 16:49:23.679670 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:23.679642 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn" event={"ID":"c04dd1f9-be1c-4236-9fef-7909f0890b0d","Type":"ContainerDied","Data":"5da63a1737952971a61f7b8968962b09cb273777d33955b42281edbbe13df6e9"} Apr 24 16:49:23.679765 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:23.679677 2578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5da63a1737952971a61f7b8968962b09cb273777d33955b42281edbbe13df6e9" Apr 24 16:49:23.679765 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:49:23.679723 2578 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn" Apr 24 16:49:28.152631 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:49:28.152533 2578 controller.go:195] "Failed to update lease" err="Put \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-129-204.ec2.internal?timeout=10s\": context deadline exceeded" Apr 24 16:49:56.144447 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:49:56.144404 2578 controller.go:195] "Failed to update lease" err="Put \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-129-204.ec2.internal?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 24 16:49:56.304451 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:49:56.304403 2578 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-24T16:49:46Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-24T16:49:46Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-24T16:49:46Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-24T16:49:46Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"ip-10-0-129-204.ec2.internal\": Patch \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/api/v1/nodes/ip-10-0-129-204.ec2.internal/status?timeout=10s\": context deadline exceeded" Apr 24 16:50:06.145421 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:50:06.145374 2578 controller.go:195] "Failed to update lease" err="Put \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-129-204.ec2.internal?timeout=10s\": context deadline exceeded" Apr 24 16:50:06.305449 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:50:06.305417 2578 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-129-204.ec2.internal\": Get \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/api/v1/nodes/ip-10-0-129-204.ec2.internal?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 24 16:50:06.671042 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:06.670991 2578 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cddlxq"] Apr 24 16:50:06.671322 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:06.671305 2578 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c04dd1f9-be1c-4236-9fef-7909f0890b0d" containerName="util" Apr 24 16:50:06.671365 
Apr 24 16:50:06.671042 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:06.670991 2578 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cddlxq"]
Apr 24 16:50:06.671322 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:06.671305 2578 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c04dd1f9-be1c-4236-9fef-7909f0890b0d" containerName="util"
Apr 24 16:50:06.671365 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:06.671323 2578 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04dd1f9-be1c-4236-9fef-7909f0890b0d" containerName="util"
Apr 24 16:50:06.671365 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:06.671336 2578 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c04dd1f9-be1c-4236-9fef-7909f0890b0d" containerName="extract"
Apr 24 16:50:06.671365 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:06.671342 2578 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04dd1f9-be1c-4236-9fef-7909f0890b0d" containerName="extract"
Apr 24 16:50:06.671365 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:06.671351 2578 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c04dd1f9-be1c-4236-9fef-7909f0890b0d" containerName="pull"
Apr 24 16:50:06.671365 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:06.671357 2578 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04dd1f9-be1c-4236-9fef-7909f0890b0d" containerName="pull"
Apr 24 16:50:06.671514 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:06.671415 2578 memory_manager.go:356] "RemoveStaleState removing state" podUID="c04dd1f9-be1c-4236-9fef-7909f0890b0d" containerName="extract"
Apr 24 16:50:06.674303 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:06.674282 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cddlxq"
Apr 24 16:50:06.675354 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:50:06.675322 2578 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"ip-10-0-129-204.ec2.internal\": the object has been modified; please apply your changes to the latest version and try again"
Apr 24 16:50:06.677687 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:06.677667 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-v8rfj\""
Apr 24 16:50:06.677827 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:06.677668 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Apr 24 16:50:06.677827 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:06.677753 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Apr 24 16:50:06.683951 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:06.683926 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cddlxq"]
Apr 24 16:50:06.825346 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:06.825316 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0091ddbd-dce1-4750-8139-5f9002a33c2d-bundle\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cddlxq\" (UID: \"0091ddbd-dce1-4750-8139-5f9002a33c2d\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cddlxq"
Apr 24 16:50:06.825460 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:06.825370 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps4qj\" (UniqueName: \"kubernetes.io/projected/0091ddbd-dce1-4750-8139-5f9002a33c2d-kube-api-access-ps4qj\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cddlxq\" (UID: \"0091ddbd-dce1-4750-8139-5f9002a33c2d\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cddlxq"
Apr 24 16:50:06.825460 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:06.825418 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0091ddbd-dce1-4750-8139-5f9002a33c2d-util\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cddlxq\" (UID: \"0091ddbd-dce1-4750-8139-5f9002a33c2d\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cddlxq"
Apr 24 16:50:06.926614 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:06.926556 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ps4qj\" (UniqueName: \"kubernetes.io/projected/0091ddbd-dce1-4750-8139-5f9002a33c2d-kube-api-access-ps4qj\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cddlxq\" (UID: \"0091ddbd-dce1-4750-8139-5f9002a33c2d\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cddlxq"
Apr 24 16:50:06.926614 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:06.926582 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0091ddbd-dce1-4750-8139-5f9002a33c2d-util\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cddlxq\" (UID: \"0091ddbd-dce1-4750-8139-5f9002a33c2d\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cddlxq"
Apr 24 16:50:06.926614 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:06.926605 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0091ddbd-dce1-4750-8139-5f9002a33c2d-bundle\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cddlxq\" (UID: \"0091ddbd-dce1-4750-8139-5f9002a33c2d\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cddlxq"
Apr 24 16:50:06.926959 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:06.926940 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0091ddbd-dce1-4750-8139-5f9002a33c2d-util\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cddlxq\" (UID: \"0091ddbd-dce1-4750-8139-5f9002a33c2d\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cddlxq"
Apr 24 16:50:06.927041 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:06.926996 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0091ddbd-dce1-4750-8139-5f9002a33c2d-bundle\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cddlxq\" (UID: \"0091ddbd-dce1-4750-8139-5f9002a33c2d\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cddlxq"
Apr 24 16:50:06.935170 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:06.935149 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps4qj\" (UniqueName: \"kubernetes.io/projected/0091ddbd-dce1-4750-8139-5f9002a33c2d-kube-api-access-ps4qj\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cddlxq\" (UID: \"0091ddbd-dce1-4750-8139-5f9002a33c2d\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cddlxq"
Apr 24 16:50:06.985028 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:06.984997 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cddlxq"
Apr 24 16:50:07.100775 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:07.100748 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cddlxq"]
Apr 24 16:50:07.104042 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:50:07.104018 2578 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0091ddbd_dce1_4750_8139_5f9002a33c2d.slice/crio-5df34b26538cf658014debcfbc3a56c20e9dfed9044fd2e4ea13eb72ec42f0a6 WatchSource:0}: Error finding container 5df34b26538cf658014debcfbc3a56c20e9dfed9044fd2e4ea13eb72ec42f0a6: Status 404 returned error can't find the container with id 5df34b26538cf658014debcfbc3a56c20e9dfed9044fd2e4ea13eb72ec42f0a6
Apr 24 16:50:07.801197 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:07.801163 2578 generic.go:358] "Generic (PLEG): container finished" podID="0091ddbd-dce1-4750-8139-5f9002a33c2d" containerID="a0709a4a9f980ed3c0949962ab2184ac1a2c2573ef1ea337b47704cbc5020b16" exitCode=0
Apr 24 16:50:07.801584 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:07.801255 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cddlxq" event={"ID":"0091ddbd-dce1-4750-8139-5f9002a33c2d","Type":"ContainerDied","Data":"a0709a4a9f980ed3c0949962ab2184ac1a2c2573ef1ea337b47704cbc5020b16"}
Apr 24 16:50:07.801584 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:07.801294 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cddlxq" event={"ID":"0091ddbd-dce1-4750-8139-5f9002a33c2d","Type":"ContainerStarted","Data":"5df34b26538cf658014debcfbc3a56c20e9dfed9044fd2e4ea13eb72ec42f0a6"}
Apr 24 16:50:08.805113 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:08.805080 2578 generic.go:358] "Generic (PLEG): container finished" podID="0091ddbd-dce1-4750-8139-5f9002a33c2d" containerID="f4a2a6abcd4e40e9f2e85682000ceb600bb8189254f1ebdfe635b66a75451a81" exitCode=0
Apr 24 16:50:08.805463 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:08.805118 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cddlxq" event={"ID":"0091ddbd-dce1-4750-8139-5f9002a33c2d","Type":"ContainerDied","Data":"f4a2a6abcd4e40e9f2e85682000ceb600bb8189254f1ebdfe635b66a75451a81"}
Apr 24 16:50:09.809851 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:09.809785 2578 generic.go:358] "Generic (PLEG): container finished" podID="0091ddbd-dce1-4750-8139-5f9002a33c2d" containerID="8b6f17c62a0db783c6dcc025218c55f17c769b0aedd584f22e90eba093e2217d" exitCode=0
Apr 24 16:50:09.810277 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:09.809857 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cddlxq" event={"ID":"0091ddbd-dce1-4750-8139-5f9002a33c2d","Type":"ContainerDied","Data":"8b6f17c62a0db783c6dcc025218c55f17c769b0aedd584f22e90eba093e2217d"}
Apr 24 16:50:10.927552 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:10.927529 2578 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cddlxq"
Apr 24 16:50:11.054261 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:11.054235 2578 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0091ddbd-dce1-4750-8139-5f9002a33c2d-bundle\") pod \"0091ddbd-dce1-4750-8139-5f9002a33c2d\" (UID: \"0091ddbd-dce1-4750-8139-5f9002a33c2d\") "
Apr 24 16:50:11.054368 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:11.054282 2578 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps4qj\" (UniqueName: \"kubernetes.io/projected/0091ddbd-dce1-4750-8139-5f9002a33c2d-kube-api-access-ps4qj\") pod \"0091ddbd-dce1-4750-8139-5f9002a33c2d\" (UID: \"0091ddbd-dce1-4750-8139-5f9002a33c2d\") "
Apr 24 16:50:11.054368 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:11.054297 2578 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0091ddbd-dce1-4750-8139-5f9002a33c2d-util\") pod \"0091ddbd-dce1-4750-8139-5f9002a33c2d\" (UID: \"0091ddbd-dce1-4750-8139-5f9002a33c2d\") "
Apr 24 16:50:11.054861 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:11.054794 2578 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0091ddbd-dce1-4750-8139-5f9002a33c2d-bundle" (OuterVolumeSpecName: "bundle") pod "0091ddbd-dce1-4750-8139-5f9002a33c2d" (UID: "0091ddbd-dce1-4750-8139-5f9002a33c2d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Apr 24 16:50:11.056227 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:11.056196 2578 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0091ddbd-dce1-4750-8139-5f9002a33c2d-kube-api-access-ps4qj" (OuterVolumeSpecName: "kube-api-access-ps4qj") pod "0091ddbd-dce1-4750-8139-5f9002a33c2d" (UID: "0091ddbd-dce1-4750-8139-5f9002a33c2d"). InnerVolumeSpecName "kube-api-access-ps4qj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 24 16:50:11.059977 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:11.059958 2578 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0091ddbd-dce1-4750-8139-5f9002a33c2d-util" (OuterVolumeSpecName: "util") pod "0091ddbd-dce1-4750-8139-5f9002a33c2d" (UID: "0091ddbd-dce1-4750-8139-5f9002a33c2d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Apr 24 16:50:11.155350 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:11.155329 2578 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0091ddbd-dce1-4750-8139-5f9002a33c2d-bundle\") on node \"ip-10-0-129-204.ec2.internal\" DevicePath \"\""
Apr 24 16:50:11.155350 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:11.155349 2578 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ps4qj\" (UniqueName: \"kubernetes.io/projected/0091ddbd-dce1-4750-8139-5f9002a33c2d-kube-api-access-ps4qj\") on node \"ip-10-0-129-204.ec2.internal\" DevicePath \"\""
Apr 24 16:50:11.155469 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:11.155359 2578 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0091ddbd-dce1-4750-8139-5f9002a33c2d-util\") on node \"ip-10-0-129-204.ec2.internal\" DevicePath \"\""
Apr 24 16:50:11.817986 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:11.817955 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cddlxq" event={"ID":"0091ddbd-dce1-4750-8139-5f9002a33c2d","Type":"ContainerDied","Data":"5df34b26538cf658014debcfbc3a56c20e9dfed9044fd2e4ea13eb72ec42f0a6"}
Apr 24 16:50:11.817986 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:11.817990 2578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5df34b26538cf658014debcfbc3a56c20e9dfed9044fd2e4ea13eb72ec42f0a6"
Apr 24 16:50:11.818161 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:11.817987 2578 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cddlxq"
Apr 24 16:50:16.801097 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:16.801064 2578 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn_c04dd1f9-be1c-4236-9fef-7909f0890b0d/extract/0.log"
Apr 24 16:50:16.802243 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:16.802213 2578 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf_f16ae908-6ec0-41b9-bb9a-b98dc9223b7f/extract/0.log"
Apr 24 16:50:16.802371 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:16.802293 2578 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c6vpvn_c04dd1f9-be1c-4236-9fef-7909f0890b0d/extract/0.log"
Apr 24 16:50:16.803381 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:16.803359 2578 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cp2vwf_f16ae908-6ec0-41b9-bb9a-b98dc9223b7f/extract/0.log"
Apr 24 16:50:16.819793 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:16.819773 2578 kubelet.go:1628] "Image garbage collection succeeded"
Apr 24 16:50:26.892029 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:50:26.891986 2578 controller.go:195] "Failed to update lease" err="Put \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-129-204.ec2.internal?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 24 16:50:42.515592 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:42.515554 2578 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj"]
Apr 24 16:50:42.516088 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:42.515798 2578 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0091ddbd-dce1-4750-8139-5f9002a33c2d" containerName="pull"
Apr 24 16:50:42.516088 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:42.515828 2578 state_mem.go:107] "Deleted CPUSet assignment" podUID="0091ddbd-dce1-4750-8139-5f9002a33c2d" containerName="pull"
Apr 24 16:50:42.516088 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:42.515837 2578 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0091ddbd-dce1-4750-8139-5f9002a33c2d" containerName="extract"
Apr 24 16:50:42.516088 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:42.515843 2578 state_mem.go:107] "Deleted CPUSet assignment" podUID="0091ddbd-dce1-4750-8139-5f9002a33c2d" containerName="extract"
Apr 24 16:50:42.516088 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:42.515858 2578 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0091ddbd-dce1-4750-8139-5f9002a33c2d" containerName="util"
Apr 24 16:50:42.516088 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:42.515863 2578 state_mem.go:107] "Deleted CPUSet assignment" podUID="0091ddbd-dce1-4750-8139-5f9002a33c2d" containerName="util"
Apr 24 16:50:42.516088 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:42.515900 2578 memory_manager.go:356] "RemoveStaleState removing state" podUID="0091ddbd-dce1-4750-8139-5f9002a33c2d" containerName="extract"
Apr 24 16:50:42.518565 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:42.518549 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj"
Apr 24 16:50:42.521045 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:42.521019 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"kedaorg-certs\""
Apr 24 16:50:42.521181 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:42.521062 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"kube-root-ca.crt\""
Apr 24 16:50:42.521181 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:42.521117 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"custom-metrics-autoscaler-operator-dockercfg-h88pk\""
Apr 24 16:50:42.522233 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:42.522207 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"openshift-service-ca.crt\""
Apr 24 16:50:42.528293 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:42.528272 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj"]
Apr 24 16:50:42.646382 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:42.646356 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/secret/86ead66c-d1c6-4b04-858c-9738a6b251b7-certificates\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj\" (UID: \"86ead66c-d1c6-4b04-858c-9738a6b251b7\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj"
Apr 24 16:50:42.646502 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:42.646386 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfbbl\" (UniqueName: \"kubernetes.io/projected/86ead66c-d1c6-4b04-858c-9738a6b251b7-kube-api-access-bfbbl\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj\" (UID: \"86ead66c-d1c6-4b04-858c-9738a6b251b7\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj"
Apr 24 16:50:42.747027 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:42.746997 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/secret/86ead66c-d1c6-4b04-858c-9738a6b251b7-certificates\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj\" (UID: \"86ead66c-d1c6-4b04-858c-9738a6b251b7\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj"
Apr 24 16:50:42.747139 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:42.747032 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bfbbl\" (UniqueName: \"kubernetes.io/projected/86ead66c-d1c6-4b04-858c-9738a6b251b7-kube-api-access-bfbbl\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj\" (UID: \"86ead66c-d1c6-4b04-858c-9738a6b251b7\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj"
Apr 24 16:50:42.749957 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:42.749924 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/secret/86ead66c-d1c6-4b04-858c-9738a6b251b7-certificates\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj\" (UID: \"86ead66c-d1c6-4b04-858c-9738a6b251b7\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj"
Apr 24 16:50:42.755905 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:42.755882 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfbbl\" (UniqueName: \"kubernetes.io/projected/86ead66c-d1c6-4b04-858c-9738a6b251b7-kube-api-access-bfbbl\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj\" (UID: \"86ead66c-d1c6-4b04-858c-9738a6b251b7\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj"
Apr 24 16:50:42.829479 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:42.829429 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj"
Apr 24 16:50:42.947614 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:42.947482 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj"]
Apr 24 16:50:42.950181 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:50:42.950152 2578 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86ead66c_d1c6_4b04_858c_9738a6b251b7.slice/crio-d5e8fbccaf3953936d6849c8590bd24686c6149c37d0cf8fdd46a4dfdf269d4c WatchSource:0}: Error finding container d5e8fbccaf3953936d6849c8590bd24686c6149c37d0cf8fdd46a4dfdf269d4c: Status 404 returned error can't find the container with id d5e8fbccaf3953936d6849c8590bd24686c6149c37d0cf8fdd46a4dfdf269d4c
Apr 24 16:50:42.951739 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:42.951723 2578 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 24 16:50:43.909346 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:43.909245 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj" event={"ID":"86ead66c-d1c6-4b04-858c-9738a6b251b7","Type":"ContainerStarted","Data":"d5e8fbccaf3953936d6849c8590bd24686c6149c37d0cf8fdd46a4dfdf269d4c"}
Apr 24 16:50:46.498790 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:46.498760 2578 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/keda-operator-ffbb595cb-x5bdf"]
Apr 24 16:50:46.501663 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:46.501649 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-operator-ffbb595cb-x5bdf"
Apr 24 16:50:46.504158 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:46.504134 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-operator-certs\""
Apr 24 16:50:46.504158 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:46.504150 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-operator-dockercfg-45hrz\""
Apr 24 16:50:46.504384 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:46.504148 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"keda-ocp-cabundle\""
Apr 24 16:50:46.510233 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:46.510210 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-operator-ffbb595cb-x5bdf"]
Apr 24 16:50:46.571209 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:46.571183 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/beef5116-19de-4a87-9cd5-1504e8568da1-certificates\") pod \"keda-operator-ffbb595cb-x5bdf\" (UID: \"beef5116-19de-4a87-9cd5-1504e8568da1\") " pod="openshift-keda/keda-operator-ffbb595cb-x5bdf"
Apr 24 16:50:46.571300 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:46.571216 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrwvg\" (UniqueName: \"kubernetes.io/projected/beef5116-19de-4a87-9cd5-1504e8568da1-kube-api-access-nrwvg\") pod \"keda-operator-ffbb595cb-x5bdf\" (UID: \"beef5116-19de-4a87-9cd5-1504e8568da1\") " pod="openshift-keda/keda-operator-ffbb595cb-x5bdf"
Apr 24 16:50:46.571300 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:46.571235 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cabundle0\" (UniqueName: \"kubernetes.io/configmap/beef5116-19de-4a87-9cd5-1504e8568da1-cabundle0\") pod \"keda-operator-ffbb595cb-x5bdf\" (UID: \"beef5116-19de-4a87-9cd5-1504e8568da1\") " pod="openshift-keda/keda-operator-ffbb595cb-x5bdf"
Apr 24 16:50:46.672313 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:46.672283 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/beef5116-19de-4a87-9cd5-1504e8568da1-certificates\") pod \"keda-operator-ffbb595cb-x5bdf\" (UID: \"beef5116-19de-4a87-9cd5-1504e8568da1\") " pod="openshift-keda/keda-operator-ffbb595cb-x5bdf"
Apr 24 16:50:46.672431 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:46.672323 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nrwvg\" (UniqueName: \"kubernetes.io/projected/beef5116-19de-4a87-9cd5-1504e8568da1-kube-api-access-nrwvg\") pod \"keda-operator-ffbb595cb-x5bdf\" (UID: \"beef5116-19de-4a87-9cd5-1504e8568da1\") " pod="openshift-keda/keda-operator-ffbb595cb-x5bdf"
Apr 24 16:50:46.672431 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:46.672353 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cabundle0\" (UniqueName: \"kubernetes.io/configmap/beef5116-19de-4a87-9cd5-1504e8568da1-cabundle0\") pod \"keda-operator-ffbb595cb-x5bdf\" (UID: \"beef5116-19de-4a87-9cd5-1504e8568da1\") " pod="openshift-keda/keda-operator-ffbb595cb-x5bdf"
Apr 24 16:50:46.672431 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:50:46.672418 2578 projected.go:264] Couldn't get secret openshift-keda/keda-operator-certs: secret "keda-operator-certs" not found
Apr 24 16:50:46.672431 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:50:46.672434 2578 secret.go:281] references non-existent secret key: ca.crt
Apr 24 16:50:46.672627 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:50:46.672442 2578 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt
Apr 24 16:50:46.672627 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:50:46.672454 2578 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-x5bdf: [secret "keda-operator-certs" not found, references non-existent secret key: ca.crt]
Apr 24 16:50:46.672627 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:50:46.672498 2578 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/beef5116-19de-4a87-9cd5-1504e8568da1-certificates podName:beef5116-19de-4a87-9cd5-1504e8568da1 nodeName:}" failed. No retries permitted until 2026-04-24 16:50:47.172484833 +0000 UTC m=+330.768851096 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/beef5116-19de-4a87-9cd5-1504e8568da1-certificates") pod "keda-operator-ffbb595cb-x5bdf" (UID: "beef5116-19de-4a87-9cd5-1504e8568da1") : [secret "keda-operator-certs" not found, references non-existent secret key: ca.crt]
Apr 24 16:50:46.673156 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:46.673133 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cabundle0\" (UniqueName: \"kubernetes.io/configmap/beef5116-19de-4a87-9cd5-1504e8568da1-cabundle0\") pod \"keda-operator-ffbb595cb-x5bdf\" (UID: \"beef5116-19de-4a87-9cd5-1504e8568da1\") " pod="openshift-keda/keda-operator-ffbb595cb-x5bdf"
Apr 24 16:50:46.684115 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:46.684098 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrwvg\" (UniqueName: \"kubernetes.io/projected/beef5116-19de-4a87-9cd5-1504e8568da1-kube-api-access-nrwvg\") pod \"keda-operator-ffbb595cb-x5bdf\" (UID: \"beef5116-19de-4a87-9cd5-1504e8568da1\") " pod="openshift-keda/keda-operator-ffbb595cb-x5bdf"
Apr 24 16:50:46.811507 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:46.811436 2578 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-2v94n"]
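
This error cluster is the projected certificates volume for keda-operator-ffbb595cb-x5bdf failing to materialize: keda-operator-certs does not exist yet, and kedaorg-certs exists but carries no ca.crt key, so the projection as a whole is rejected and requeued (durationBeforeRetry 500ms). That is an ordinary bootstrap race while the operator is still issuing its certificates. A projected secret source only succeeds once every referenced key is present; roughly the check below, an illustrative sketch rather than the kubelet's implementation.

    // certkeys.go - check that a Secret carries every key a projected
    // volume names, mirroring the condition behind "references
    // non-existent secret key". Key names taken from the errors above;
    // the sample Secret contents are an assumption for illustration.
    package main

    import (
    	"fmt"

    	v1 "k8s.io/api/core/v1"
    )

    func missingKeys(sec *v1.Secret, required []string) []string {
    	var missing []string
    	for _, k := range required {
    		if _, ok := sec.Data[k]; !ok {
    			missing = append(missing, k)
    		}
    	}
    	return missing
    }

    func main() {
    	// A half-populated cert secret: tls.key written, ca.crt/tls.crt not yet.
    	sec := &v1.Secret{Data: map[string][]byte{"tls.key": {}}}
    	if m := missingKeys(sec, []string{"ca.crt", "tls.crt"}); len(m) > 0 {
    		fmt.Println("references non-existent secret key:", m)
    	}
    }
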
Need to start a new one" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-2v94n" Apr 24 16:50:46.817657 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:46.817637 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-metrics-apiserver-certs\"" Apr 24 16:50:46.831359 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:46.831339 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-2v94n"] Apr 24 16:50:46.874047 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:46.874020 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b79gh\" (UniqueName: \"kubernetes.io/projected/fef2f2cf-5919-4462-8357-7522e1c1559d-kube-api-access-b79gh\") pod \"keda-metrics-apiserver-7c9f485588-2v94n\" (UID: \"fef2f2cf-5919-4462-8357-7522e1c1559d\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-2v94n" Apr 24 16:50:46.874147 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:46.874051 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/fef2f2cf-5919-4462-8357-7522e1c1559d-certificates\") pod \"keda-metrics-apiserver-7c9f485588-2v94n\" (UID: \"fef2f2cf-5919-4462-8357-7522e1c1559d\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-2v94n" Apr 24 16:50:46.874147 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:46.874070 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"temp-vol\" (UniqueName: \"kubernetes.io/empty-dir/fef2f2cf-5919-4462-8357-7522e1c1559d-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-2v94n\" (UID: \"fef2f2cf-5919-4462-8357-7522e1c1559d\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-2v94n" Apr 24 16:50:46.919988 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:46.919965 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj" Apr 24 16:50:46.920089 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:46.919990 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj" event={"ID":"86ead66c-d1c6-4b04-858c-9738a6b251b7","Type":"ContainerStarted","Data":"d3a02da74c8456ce543a01bcf2a4c97898a3c852903961775cd23e8a65e14b78"} Apr 24 16:50:46.974632 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:46.974599 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b79gh\" (UniqueName: \"kubernetes.io/projected/fef2f2cf-5919-4462-8357-7522e1c1559d-kube-api-access-b79gh\") pod \"keda-metrics-apiserver-7c9f485588-2v94n\" (UID: \"fef2f2cf-5919-4462-8357-7522e1c1559d\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-2v94n" Apr 24 16:50:46.974768 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:46.974666 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/fef2f2cf-5919-4462-8357-7522e1c1559d-certificates\") pod \"keda-metrics-apiserver-7c9f485588-2v94n\" (UID: \"fef2f2cf-5919-4462-8357-7522e1c1559d\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-2v94n" Apr 24 16:50:46.974924 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:46.974902 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"temp-vol\" (UniqueName: 
\"kubernetes.io/empty-dir/fef2f2cf-5919-4462-8357-7522e1c1559d-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-2v94n\" (UID: \"fef2f2cf-5919-4462-8357-7522e1c1559d\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-2v94n" Apr 24 16:50:46.975237 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:50:46.975097 2578 secret.go:281] references non-existent secret key: tls.crt Apr 24 16:50:46.975237 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:50:46.975118 2578 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt Apr 24 16:50:46.975237 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:50:46.975138 2578 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-2v94n: references non-existent secret key: tls.crt Apr 24 16:50:46.975237 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:50:46.975204 2578 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fef2f2cf-5919-4462-8357-7522e1c1559d-certificates podName:fef2f2cf-5919-4462-8357-7522e1c1559d nodeName:}" failed. No retries permitted until 2026-04-24 16:50:47.475187587 +0000 UTC m=+331.071553850 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/fef2f2cf-5919-4462-8357-7522e1c1559d-certificates") pod "keda-metrics-apiserver-7c9f485588-2v94n" (UID: "fef2f2cf-5919-4462-8357-7522e1c1559d") : references non-existent secret key: tls.crt Apr 24 16:50:46.975475 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:46.975256 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"temp-vol\" (UniqueName: \"kubernetes.io/empty-dir/fef2f2cf-5919-4462-8357-7522e1c1559d-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-2v94n\" (UID: \"fef2f2cf-5919-4462-8357-7522e1c1559d\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-2v94n" Apr 24 16:50:46.976140 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:46.976106 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj" podStartSLOduration=2.037881263 podStartE2EDuration="4.976096668s" podCreationTimestamp="2026-04-24 16:50:42 +0000 UTC" firstStartedPulling="2026-04-24 16:50:42.951857997 +0000 UTC m=+326.548224261" lastFinishedPulling="2026-04-24 16:50:45.890073399 +0000 UTC m=+329.486439666" observedRunningTime="2026-04-24 16:50:46.974392351 +0000 UTC m=+330.570758636" watchObservedRunningTime="2026-04-24 16:50:46.976096668 +0000 UTC m=+330.572462952" Apr 24 16:50:46.994040 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:46.994017 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b79gh\" (UniqueName: \"kubernetes.io/projected/fef2f2cf-5919-4462-8357-7522e1c1559d-kube-api-access-b79gh\") pod \"keda-metrics-apiserver-7c9f485588-2v94n\" (UID: \"fef2f2cf-5919-4462-8357-7522e1c1559d\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-2v94n" Apr 24 16:50:47.092553 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:47.092490 2578 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/keda-admission-cf49989db-bsqfb"] Apr 24 16:50:47.095490 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:47.095471 2578 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-keda/keda-admission-cf49989db-bsqfb" Apr 24 16:50:47.098485 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:47.098465 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-admission-webhooks-certs\"" Apr 24 16:50:47.105961 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:47.105942 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-admission-cf49989db-bsqfb"] Apr 24 16:50:47.176408 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:47.176384 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/9f4f695c-89bf-44e0-8dd5-eea473be3079-certificates\") pod \"keda-admission-cf49989db-bsqfb\" (UID: \"9f4f695c-89bf-44e0-8dd5-eea473be3079\") " pod="openshift-keda/keda-admission-cf49989db-bsqfb" Apr 24 16:50:47.176503 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:47.176418 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/beef5116-19de-4a87-9cd5-1504e8568da1-certificates\") pod \"keda-operator-ffbb595cb-x5bdf\" (UID: \"beef5116-19de-4a87-9cd5-1504e8568da1\") " pod="openshift-keda/keda-operator-ffbb595cb-x5bdf" Apr 24 16:50:47.176503 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:47.176441 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvn52\" (UniqueName: \"kubernetes.io/projected/9f4f695c-89bf-44e0-8dd5-eea473be3079-kube-api-access-fvn52\") pod \"keda-admission-cf49989db-bsqfb\" (UID: \"9f4f695c-89bf-44e0-8dd5-eea473be3079\") " pod="openshift-keda/keda-admission-cf49989db-bsqfb" Apr 24 16:50:47.176575 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:50:47.176520 2578 secret.go:281] references non-existent secret key: ca.crt Apr 24 16:50:47.176575 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:50:47.176537 2578 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt Apr 24 16:50:47.176575 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:50:47.176545 2578 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-x5bdf: references non-existent secret key: ca.crt Apr 24 16:50:47.176666 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:50:47.176587 2578 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/beef5116-19de-4a87-9cd5-1504e8568da1-certificates podName:beef5116-19de-4a87-9cd5-1504e8568da1 nodeName:}" failed. No retries permitted until 2026-04-24 16:50:48.176574985 +0000 UTC m=+331.772941248 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/beef5116-19de-4a87-9cd5-1504e8568da1-certificates") pod "keda-operator-ffbb595cb-x5bdf" (UID: "beef5116-19de-4a87-9cd5-1504e8568da1") : references non-existent secret key: ca.crt Apr 24 16:50:47.276783 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:47.276759 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/9f4f695c-89bf-44e0-8dd5-eea473be3079-certificates\") pod \"keda-admission-cf49989db-bsqfb\" (UID: \"9f4f695c-89bf-44e0-8dd5-eea473be3079\") " pod="openshift-keda/keda-admission-cf49989db-bsqfb" Apr 24 16:50:47.276937 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:47.276798 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fvn52\" (UniqueName: \"kubernetes.io/projected/9f4f695c-89bf-44e0-8dd5-eea473be3079-kube-api-access-fvn52\") pod \"keda-admission-cf49989db-bsqfb\" (UID: \"9f4f695c-89bf-44e0-8dd5-eea473be3079\") " pod="openshift-keda/keda-admission-cf49989db-bsqfb" Apr 24 16:50:47.279205 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:47.279184 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/9f4f695c-89bf-44e0-8dd5-eea473be3079-certificates\") pod \"keda-admission-cf49989db-bsqfb\" (UID: \"9f4f695c-89bf-44e0-8dd5-eea473be3079\") " pod="openshift-keda/keda-admission-cf49989db-bsqfb" Apr 24 16:50:47.292955 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:47.292936 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvn52\" (UniqueName: \"kubernetes.io/projected/9f4f695c-89bf-44e0-8dd5-eea473be3079-kube-api-access-fvn52\") pod \"keda-admission-cf49989db-bsqfb\" (UID: \"9f4f695c-89bf-44e0-8dd5-eea473be3079\") " pod="openshift-keda/keda-admission-cf49989db-bsqfb" Apr 24 16:50:47.406024 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:47.406005 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-admission-cf49989db-bsqfb" Apr 24 16:50:47.478526 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:47.478494 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/fef2f2cf-5919-4462-8357-7522e1c1559d-certificates\") pod \"keda-metrics-apiserver-7c9f485588-2v94n\" (UID: \"fef2f2cf-5919-4462-8357-7522e1c1559d\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-2v94n" Apr 24 16:50:47.478664 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:50:47.478648 2578 secret.go:281] references non-existent secret key: tls.crt Apr 24 16:50:47.478701 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:50:47.478667 2578 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt Apr 24 16:50:47.478701 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:50:47.478692 2578 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-2v94n: references non-existent secret key: tls.crt Apr 24 16:50:47.478785 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:50:47.478745 2578 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fef2f2cf-5919-4462-8357-7522e1c1559d-certificates podName:fef2f2cf-5919-4462-8357-7522e1c1559d nodeName:}" failed. 
No retries permitted until 2026-04-24 16:50:48.47873125 +0000 UTC m=+332.075097513 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/fef2f2cf-5919-4462-8357-7522e1c1559d-certificates") pod "keda-metrics-apiserver-7c9f485588-2v94n" (UID: "fef2f2cf-5919-4462-8357-7522e1c1559d") : references non-existent secret key: tls.crt Apr 24 16:50:47.529695 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:47.529669 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-admission-cf49989db-bsqfb"] Apr 24 16:50:47.534359 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:50:47.534329 2578 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f4f695c_89bf_44e0_8dd5_eea473be3079.slice/crio-b7935e3ef8dd0d01381cb27a87d2b187752837040c19a21e1b59d91329fa538c WatchSource:0}: Error finding container b7935e3ef8dd0d01381cb27a87d2b187752837040c19a21e1b59d91329fa538c: Status 404 returned error can't find the container with id b7935e3ef8dd0d01381cb27a87d2b187752837040c19a21e1b59d91329fa538c Apr 24 16:50:47.922079 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:47.922049 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-admission-cf49989db-bsqfb" event={"ID":"9f4f695c-89bf-44e0-8dd5-eea473be3079","Type":"ContainerStarted","Data":"b7935e3ef8dd0d01381cb27a87d2b187752837040c19a21e1b59d91329fa538c"} Apr 24 16:50:48.183764 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:48.183675 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/beef5116-19de-4a87-9cd5-1504e8568da1-certificates\") pod \"keda-operator-ffbb595cb-x5bdf\" (UID: \"beef5116-19de-4a87-9cd5-1504e8568da1\") " pod="openshift-keda/keda-operator-ffbb595cb-x5bdf" Apr 24 16:50:48.183923 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:50:48.183801 2578 secret.go:281] references non-existent secret key: ca.crt Apr 24 16:50:48.183923 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:50:48.183839 2578 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt Apr 24 16:50:48.183923 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:50:48.183851 2578 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-x5bdf: references non-existent secret key: ca.crt Apr 24 16:50:48.183923 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:50:48.183909 2578 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/beef5116-19de-4a87-9cd5-1504e8568da1-certificates podName:beef5116-19de-4a87-9cd5-1504e8568da1 nodeName:}" failed. No retries permitted until 2026-04-24 16:50:50.183890062 +0000 UTC m=+333.780256325 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/beef5116-19de-4a87-9cd5-1504e8568da1-certificates") pod "keda-operator-ffbb595cb-x5bdf" (UID: "beef5116-19de-4a87-9cd5-1504e8568da1") : references non-existent secret key: ca.crt Apr 24 16:50:48.485379 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:48.485296 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/fef2f2cf-5919-4462-8357-7522e1c1559d-certificates\") pod \"keda-metrics-apiserver-7c9f485588-2v94n\" (UID: \"fef2f2cf-5919-4462-8357-7522e1c1559d\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-2v94n" Apr 24 16:50:48.485541 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:50:48.485420 2578 secret.go:281] references non-existent secret key: tls.crt Apr 24 16:50:48.485541 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:50:48.485433 2578 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt Apr 24 16:50:48.485541 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:50:48.485451 2578 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-2v94n: references non-existent secret key: tls.crt Apr 24 16:50:48.485541 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:50:48.485513 2578 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fef2f2cf-5919-4462-8357-7522e1c1559d-certificates podName:fef2f2cf-5919-4462-8357-7522e1c1559d nodeName:}" failed. No retries permitted until 2026-04-24 16:50:50.48549882 +0000 UTC m=+334.081865083 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/fef2f2cf-5919-4462-8357-7522e1c1559d-certificates") pod "keda-metrics-apiserver-7c9f485588-2v94n" (UID: "fef2f2cf-5919-4462-8357-7522e1c1559d") : references non-existent secret key: tls.crt Apr 24 16:50:49.929663 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:49.929633 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-admission-cf49989db-bsqfb" event={"ID":"9f4f695c-89bf-44e0-8dd5-eea473be3079","Type":"ContainerStarted","Data":"66247695c097c0125c695c50d724041bc120f41c5acb43adf3aafad6cc18b338"} Apr 24 16:50:49.930070 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:49.929746 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-admission-cf49989db-bsqfb" Apr 24 16:50:49.948146 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:49.948099 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/keda-admission-cf49989db-bsqfb" podStartSLOduration=1.626394839 podStartE2EDuration="2.948086982s" podCreationTimestamp="2026-04-24 16:50:47 +0000 UTC" firstStartedPulling="2026-04-24 16:50:47.535462173 +0000 UTC m=+331.131828436" lastFinishedPulling="2026-04-24 16:50:48.857154313 +0000 UTC m=+332.453520579" observedRunningTime="2026-04-24 16:50:49.946958537 +0000 UTC m=+333.543324822" watchObservedRunningTime="2026-04-24 16:50:49.948086982 +0000 UTC m=+333.544453266" Apr 24 16:50:50.196081 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:50.196002 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/beef5116-19de-4a87-9cd5-1504e8568da1-certificates\") pod \"keda-operator-ffbb595cb-x5bdf\" (UID: 
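
Note the durationBeforeRetry progression across the failed MountVolume attempts above: 500ms at 16:50:46, 1s at 16:50:47, 2s at 16:50:48, tracked independently for each volume. The kubelet's pending-operations queue doubles the backoff after every consecutive failure of the same operation so a missing secret cannot hot-loop the sync path. A minimal sketch of that doubling policy follows; the cap chosen here is an assumption for illustration, not a value taken from this log.

    // mountbackoff.go - exponential backoff as seen in durationBeforeRetry:
    // 500ms -> 1s -> 2s -> ... up to a cap. Illustrative policy sketch,
    // not the kubelet's implementation; the 2m cap is an assumption.
    package main

    import (
    	"fmt"
    	"time"
    )

    type backoff struct {
    	initial, max, next time.Duration
    }

    // fail records another failure and returns how long to wait before retrying.
    func (b *backoff) fail() time.Duration {
    	if b.next == 0 {
    		b.next = b.initial
    	} else {
    		b.next *= 2
    		if b.next > b.max {
    			b.next = b.max
    		}
    	}
    	return b.next
    }

    func main() {
    	b := &backoff{initial: 500 * time.Millisecond, max: 2 * time.Minute}
    	for i := 0; i < 5; i++ {
    		fmt.Println("durationBeforeRetry", b.fail())
    	}
    	// prints 500ms, 1s, 2s, 4s, 8s - the first three match the entries above
    }

Both certificates volumes mount successfully on the next attempt (16:50:50, below), evidently once the missing keys had appeared in kedaorg-certs.
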
\"beef5116-19de-4a87-9cd5-1504e8568da1\") " pod="openshift-keda/keda-operator-ffbb595cb-x5bdf" Apr 24 16:50:50.198329 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:50.198303 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/beef5116-19de-4a87-9cd5-1504e8568da1-certificates\") pod \"keda-operator-ffbb595cb-x5bdf\" (UID: \"beef5116-19de-4a87-9cd5-1504e8568da1\") " pod="openshift-keda/keda-operator-ffbb595cb-x5bdf" Apr 24 16:50:50.411494 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:50.411465 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-operator-ffbb595cb-x5bdf" Apr 24 16:50:50.498482 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:50.498416 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/fef2f2cf-5919-4462-8357-7522e1c1559d-certificates\") pod \"keda-metrics-apiserver-7c9f485588-2v94n\" (UID: \"fef2f2cf-5919-4462-8357-7522e1c1559d\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-2v94n" Apr 24 16:50:50.500738 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:50.500718 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/fef2f2cf-5919-4462-8357-7522e1c1559d-certificates\") pod \"keda-metrics-apiserver-7c9f485588-2v94n\" (UID: \"fef2f2cf-5919-4462-8357-7522e1c1559d\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-2v94n" Apr 24 16:50:50.536328 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:50.536310 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-operator-ffbb595cb-x5bdf"] Apr 24 16:50:50.538378 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:50:50.538358 2578 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbeef5116_19de_4a87_9cd5_1504e8568da1.slice/crio-3b621d568c914fc56dc0d09f813eb93ef22e5006724000f4fdee05d4d4a19a80 WatchSource:0}: Error finding container 3b621d568c914fc56dc0d09f813eb93ef22e5006724000f4fdee05d4d4a19a80: Status 404 returned error can't find the container with id 3b621d568c914fc56dc0d09f813eb93ef22e5006724000f4fdee05d4d4a19a80 Apr 24 16:50:50.724187 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:50.724131 2578 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-2v94n" Apr 24 16:50:50.851017 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:50.850912 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-2v94n"] Apr 24 16:50:50.853379 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:50:50.853349 2578 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfef2f2cf_5919_4462_8357_7522e1c1559d.slice/crio-540a059f969563b8aee91327a6d1aa4fd7d3d39917a2a837d61cfcbc9d3a674f WatchSource:0}: Error finding container 540a059f969563b8aee91327a6d1aa4fd7d3d39917a2a837d61cfcbc9d3a674f: Status 404 returned error can't find the container with id 540a059f969563b8aee91327a6d1aa4fd7d3d39917a2a837d61cfcbc9d3a674f Apr 24 16:50:50.933735 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:50.933704 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-2v94n" event={"ID":"fef2f2cf-5919-4462-8357-7522e1c1559d","Type":"ContainerStarted","Data":"540a059f969563b8aee91327a6d1aa4fd7d3d39917a2a837d61cfcbc9d3a674f"} Apr 24 16:50:50.934639 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:50:50.934618 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-operator-ffbb595cb-x5bdf" event={"ID":"beef5116-19de-4a87-9cd5-1504e8568da1","Type":"ContainerStarted","Data":"3b621d568c914fc56dc0d09f813eb93ef22e5006724000f4fdee05d4d4a19a80"} Apr 24 16:51:07.924569 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:51:07.924542 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj" Apr 24 16:51:07.976745 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:51:07.976715 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-2v94n" event={"ID":"fef2f2cf-5919-4462-8357-7522e1c1559d","Type":"ContainerStarted","Data":"13c725b13e5119f1b8c9c8f159c05f68547d1151c2361ace3fde3065f217ccc6"} Apr 24 16:51:07.976883 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:51:07.976857 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-2v94n" Apr 24 16:51:08.001413 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:51:07.998511 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-2v94n" podStartSLOduration=4.973583773 podStartE2EDuration="21.998491593s" podCreationTimestamp="2026-04-24 16:50:46 +0000 UTC" firstStartedPulling="2026-04-24 16:50:50.854735819 +0000 UTC m=+334.451102086" lastFinishedPulling="2026-04-24 16:51:07.879643633 +0000 UTC m=+351.476009906" observedRunningTime="2026-04-24 16:51:07.99589856 +0000 UTC m=+351.592264845" watchObservedRunningTime="2026-04-24 16:51:07.998491593 +0000 UTC m=+351.594857878" Apr 24 16:51:10.937366 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:51:10.937335 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-admission-cf49989db-bsqfb" Apr 24 16:51:18.984289 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:51:18.984258 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-2v94n" Apr 24 16:51:28.029775 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:51:28.029721 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-keda/keda-operator-ffbb595cb-x5bdf" event={"ID":"beef5116-19de-4a87-9cd5-1504e8568da1","Type":"ContainerStarted","Data":"7b99c67d3a61649c219ab9fa4de63863d1c7c6207dedcafec4b30d355af3940f"} Apr 24 16:51:28.030276 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:51:28.029836 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-operator-ffbb595cb-x5bdf" Apr 24 16:51:28.048926 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:51:28.048858 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/keda-operator-ffbb595cb-x5bdf" podStartSLOduration=5.359300397 podStartE2EDuration="42.048842502s" podCreationTimestamp="2026-04-24 16:50:46 +0000 UTC" firstStartedPulling="2026-04-24 16:50:50.539792063 +0000 UTC m=+334.136158329" lastFinishedPulling="2026-04-24 16:51:27.22933417 +0000 UTC m=+370.825700434" observedRunningTime="2026-04-24 16:51:28.048403266 +0000 UTC m=+371.644769551" watchObservedRunningTime="2026-04-24 16:51:28.048842502 +0000 UTC m=+371.645208789" Apr 24 16:51:49.033600 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:51:49.033568 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-operator-ffbb595cb-x5bdf" Apr 24 16:51:50.088862 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:51:50.088832 2578 generic.go:358] "Generic (PLEG): container finished" podID="beef5116-19de-4a87-9cd5-1504e8568da1" containerID="7b99c67d3a61649c219ab9fa4de63863d1c7c6207dedcafec4b30d355af3940f" exitCode=1 Apr 24 16:51:50.089304 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:51:50.088922 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-operator-ffbb595cb-x5bdf" event={"ID":"beef5116-19de-4a87-9cd5-1504e8568da1","Type":"ContainerDied","Data":"7b99c67d3a61649c219ab9fa4de63863d1c7c6207dedcafec4b30d355af3940f"} Apr 24 16:51:50.089348 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:51:50.089337 2578 scope.go:117] "RemoveContainer" containerID="7b99c67d3a61649c219ab9fa4de63863d1c7c6207dedcafec4b30d355af3940f" Apr 24 16:51:50.411734 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:51:50.411707 2578 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-keda/keda-operator-ffbb595cb-x5bdf" Apr 24 16:51:51.093274 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:51:51.093243 2578 generic.go:358] "Generic (PLEG): container finished" podID="86ead66c-d1c6-4b04-858c-9738a6b251b7" containerID="d3a02da74c8456ce543a01bcf2a4c97898a3c852903961775cd23e8a65e14b78" exitCode=1 Apr 24 16:51:51.093594 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:51:51.093300 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj" event={"ID":"86ead66c-d1c6-4b04-858c-9738a6b251b7","Type":"ContainerDied","Data":"d3a02da74c8456ce543a01bcf2a4c97898a3c852903961775cd23e8a65e14b78"} Apr 24 16:51:51.093637 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:51:51.093613 2578 scope.go:117] "RemoveContainer" containerID="d3a02da74c8456ce543a01bcf2a4c97898a3c852903961775cd23e8a65e14b78" Apr 24 16:51:52.830591 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:51:52.830549 2578 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj" Apr 24 16:51:54.102527 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:51:54.102447 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj" event={"ID":"86ead66c-d1c6-4b04-858c-9738a6b251b7","Type":"ContainerStarted","Data":"6843efd3309f3b00b3ca95e4e96ea33730eaa77bd023696b5154639d5c2353cc"} Apr 24 16:51:54.102871 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:51:54.102566 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj" Apr 24 16:51:59.032753 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:51:59.032725 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-operator-ffbb595cb-x5bdf" Apr 24 16:52:00.118651 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:00.118574 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-operator-ffbb595cb-x5bdf" event={"ID":"beef5116-19de-4a87-9cd5-1504e8568da1","Type":"ContainerStarted","Data":"cf3f3e2bd824f5b407c054d1c0d5791ba06a5c689ea540d547080ebb936ad2bc"} Apr 24 16:52:00.118996 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:00.118708 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-operator-ffbb595cb-x5bdf" Apr 24 16:52:15.106710 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:15.106676 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj" Apr 24 16:52:21.123480 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:21.123443 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-operator-ffbb595cb-x5bdf" Apr 24 16:52:26.251497 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:52:26.251402 2578 controller.go:195] "Failed to update lease" err="Put \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-129-204.ec2.internal?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 24 16:52:28.457056 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.457014 2578 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/llmisvc-controller-manager-68cc5db7c4-qqqjg"] Apr 24 16:52:28.460125 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.460087 2578 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/kserve-controller-manager-7f7fb4c66f-q6r6g"] Apr 24 16:52:28.460297 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.460261 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/llmisvc-controller-manager-68cc5db7c4-qqqjg" Apr 24 16:52:28.463330 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.463302 2578 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/seaweedfs-86cc847c5c-5ht5z"] Apr 24 16:52:28.463515 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.463491 2578 util.go:30] "No sandbox for pod can be found. 
Apr 24 16:52:28.463515 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.463491 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-7f7fb4c66f-q6r6g"
Apr 24 16:52:28.464015 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.463982 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"llmisvc-controller-manager-dockercfg-4g74g\""
Apr 24 16:52:28.464177 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.464056 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"llmisvc-webhook-server-cert\""
Apr 24 16:52:28.464177 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.463989 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"kube-root-ca.crt\""
Apr 24 16:52:28.464177 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.464128 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"openshift-service-ca.crt\""
Apr 24 16:52:28.465772 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.465753 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"kserve-webhook-server-cert\""
Apr 24 16:52:28.466469 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.466440 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-7f7fb4c66f-q6r6g"]
Apr 24 16:52:28.466469 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.466467 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/llmisvc-controller-manager-68cc5db7c4-qqqjg"]
Apr 24 16:52:28.466575 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.466480 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/seaweedfs-86cc847c5c-5ht5z"]
Apr 24 16:52:28.466575 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.466559 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/seaweedfs-86cc847c5c-5ht5z"
Apr 24 16:52:28.467051 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.467030 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"kserve-controller-manager-dockercfg-kx4hm\""
Apr 24 16:52:28.468949 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.468927 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"mlpipeline-s3-artifact\""
Apr 24 16:52:28.469089 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.469004 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"default-dockercfg-d8k6p\""
Apr 24 16:52:28.521962 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.521939 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9caf8bd1-fec7-41b9-a6f4-b88775c03dab-cert\") pod \"llmisvc-controller-manager-68cc5db7c4-qqqjg\" (UID: \"9caf8bd1-fec7-41b9-a6f4-b88775c03dab\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-qqqjg"
Apr 24 16:52:28.522054 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.521975 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8xpd\" (UniqueName: \"kubernetes.io/projected/9caf8bd1-fec7-41b9-a6f4-b88775c03dab-kube-api-access-g8xpd\") pod \"llmisvc-controller-manager-68cc5db7c4-qqqjg\" (UID: \"9caf8bd1-fec7-41b9-a6f4-b88775c03dab\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-qqqjg"
Apr 24 16:52:28.622712 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.622688 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9caf8bd1-fec7-41b9-a6f4-b88775c03dab-cert\") pod \"llmisvc-controller-manager-68cc5db7c4-qqqjg\" (UID: \"9caf8bd1-fec7-41b9-a6f4-b88775c03dab\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-qqqjg"
Apr 24 16:52:28.622801 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.622717 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5c3037e0-7240-4f20-b277-66b756c6f9f7-cert\") pod \"kserve-controller-manager-7f7fb4c66f-q6r6g\" (UID: \"5c3037e0-7240-4f20-b277-66b756c6f9f7\") " pod="kserve/kserve-controller-manager-7f7fb4c66f-q6r6g"
Apr 24 16:52:28.622801 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.622738 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/935a464b-8bb1-491c-871d-704a4406c97b-data\") pod \"seaweedfs-86cc847c5c-5ht5z\" (UID: \"935a464b-8bb1-491c-871d-704a4406c97b\") " pod="kserve/seaweedfs-86cc847c5c-5ht5z"
Apr 24 16:52:28.622801 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.622755 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44fjl\" (UniqueName: \"kubernetes.io/projected/935a464b-8bb1-491c-871d-704a4406c97b-kube-api-access-44fjl\") pod \"seaweedfs-86cc847c5c-5ht5z\" (UID: \"935a464b-8bb1-491c-871d-704a4406c97b\") " pod="kserve/seaweedfs-86cc847c5c-5ht5z"
Apr 24 16:52:28.622801 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.622778 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g8xpd\" (UniqueName: \"kubernetes.io/projected/9caf8bd1-fec7-41b9-a6f4-b88775c03dab-kube-api-access-g8xpd\") pod \"llmisvc-controller-manager-68cc5db7c4-qqqjg\" (UID: \"9caf8bd1-fec7-41b9-a6f4-b88775c03dab\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-qqqjg"
Apr 24 16:52:28.622956 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.622832 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g5rs\" (UniqueName: \"kubernetes.io/projected/5c3037e0-7240-4f20-b277-66b756c6f9f7-kube-api-access-6g5rs\") pod \"kserve-controller-manager-7f7fb4c66f-q6r6g\" (UID: \"5c3037e0-7240-4f20-b277-66b756c6f9f7\") " pod="kserve/kserve-controller-manager-7f7fb4c66f-q6r6g"
Apr 24 16:52:28.624953 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.624934 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9caf8bd1-fec7-41b9-a6f4-b88775c03dab-cert\") pod \"llmisvc-controller-manager-68cc5db7c4-qqqjg\" (UID: \"9caf8bd1-fec7-41b9-a6f4-b88775c03dab\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-qqqjg"
Apr 24 16:52:28.635643 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.635625 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8xpd\" (UniqueName: \"kubernetes.io/projected/9caf8bd1-fec7-41b9-a6f4-b88775c03dab-kube-api-access-g8xpd\") pod \"llmisvc-controller-manager-68cc5db7c4-qqqjg\" (UID: \"9caf8bd1-fec7-41b9-a6f4-b88775c03dab\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-qqqjg"
Apr 24 16:52:28.723700 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.723651 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5c3037e0-7240-4f20-b277-66b756c6f9f7-cert\") pod \"kserve-controller-manager-7f7fb4c66f-q6r6g\" (UID: \"5c3037e0-7240-4f20-b277-66b756c6f9f7\") " pod="kserve/kserve-controller-manager-7f7fb4c66f-q6r6g"
Apr 24 16:52:28.723700 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.723681 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/935a464b-8bb1-491c-871d-704a4406c97b-data\") pod \"seaweedfs-86cc847c5c-5ht5z\" (UID: \"935a464b-8bb1-491c-871d-704a4406c97b\") " pod="kserve/seaweedfs-86cc847c5c-5ht5z"
Apr 24 16:52:28.723700 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.723698 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-44fjl\" (UniqueName: \"kubernetes.io/projected/935a464b-8bb1-491c-871d-704a4406c97b-kube-api-access-44fjl\") pod \"seaweedfs-86cc847c5c-5ht5z\" (UID: \"935a464b-8bb1-491c-871d-704a4406c97b\") " pod="kserve/seaweedfs-86cc847c5c-5ht5z"
Apr 24 16:52:28.723930 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.723844 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6g5rs\" (UniqueName: \"kubernetes.io/projected/5c3037e0-7240-4f20-b277-66b756c6f9f7-kube-api-access-6g5rs\") pod \"kserve-controller-manager-7f7fb4c66f-q6r6g\" (UID: \"5c3037e0-7240-4f20-b277-66b756c6f9f7\") " pod="kserve/kserve-controller-manager-7f7fb4c66f-q6r6g"
Apr 24 16:52:28.724101 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.724083 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/935a464b-8bb1-491c-871d-704a4406c97b-data\") pod \"seaweedfs-86cc847c5c-5ht5z\" (UID: \"935a464b-8bb1-491c-871d-704a4406c97b\") " pod="kserve/seaweedfs-86cc847c5c-5ht5z"
Apr 24 16:52:28.725647 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.725628 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5c3037e0-7240-4f20-b277-66b756c6f9f7-cert\") pod \"kserve-controller-manager-7f7fb4c66f-q6r6g\" (UID: \"5c3037e0-7240-4f20-b277-66b756c6f9f7\") " pod="kserve/kserve-controller-manager-7f7fb4c66f-q6r6g"
Apr 24 16:52:28.733438 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.733418 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g5rs\" (UniqueName: \"kubernetes.io/projected/5c3037e0-7240-4f20-b277-66b756c6f9f7-kube-api-access-6g5rs\") pod \"kserve-controller-manager-7f7fb4c66f-q6r6g\" (UID: \"5c3037e0-7240-4f20-b277-66b756c6f9f7\") " pod="kserve/kserve-controller-manager-7f7fb4c66f-q6r6g"
Apr 24 16:52:28.733748 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.733731 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-44fjl\" (UniqueName: \"kubernetes.io/projected/935a464b-8bb1-491c-871d-704a4406c97b-kube-api-access-44fjl\") pod \"seaweedfs-86cc847c5c-5ht5z\" (UID: \"935a464b-8bb1-491c-871d-704a4406c97b\") " pod="kserve/seaweedfs-86cc847c5c-5ht5z"
Apr 24 16:52:28.773198 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.773175 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/llmisvc-controller-manager-68cc5db7c4-qqqjg"
Apr 24 16:52:28.779900 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.779884 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-7f7fb4c66f-q6r6g"
Apr 24 16:52:28.785484 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:28.785465 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/seaweedfs-86cc847c5c-5ht5z"
Apr 24 16:52:29.152207 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:29.152177 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/llmisvc-controller-manager-68cc5db7c4-qqqjg"]
Apr 24 16:52:29.156052 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:52:29.156020 2578 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod9caf8bd1_fec7_41b9_a6f4_b88775c03dab.slice/crio-6950564119ef83ce7227c85cf7c31bacade815486e83b1c1944e0d07fba534e8 WatchSource:0}: Error finding container 6950564119ef83ce7227c85cf7c31bacade815486e83b1c1944e0d07fba534e8: Status 404 returned error can't find the container with id 6950564119ef83ce7227c85cf7c31bacade815486e83b1c1944e0d07fba534e8
Apr 24 16:52:29.195131 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:29.195097 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/llmisvc-controller-manager-68cc5db7c4-qqqjg" event={"ID":"9caf8bd1-fec7-41b9-a6f4-b88775c03dab","Type":"ContainerStarted","Data":"6950564119ef83ce7227c85cf7c31bacade815486e83b1c1944e0d07fba534e8"}
Apr 24 16:52:29.385962 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:29.385935 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/seaweedfs-86cc847c5c-5ht5z"]
Apr 24 16:52:29.387356 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:29.387336 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-7f7fb4c66f-q6r6g"]
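The volume records above follow a fixed per-volume sequence: operationExecutor.VerifyControllerAttachedVolume, then operationExecutor.MountVolume, then "MountVolume.SetUp succeeded". A toy Go rendering of that desired-state/actual-state loop, with made-up types and pod names taken from the log (this is a sketch of the visible pattern, not kubelet source):

```go
package main

import "fmt"

// volume pairs a volume name with its pod, mirroring the log's attributes.
type volume struct{ name, pod string }

func main() {
	desired := []volume{
		{"cert", "llmisvc-controller-manager-68cc5db7c4-qqqjg"},
		{"kube-api-access-g8xpd", "llmisvc-controller-manager-68cc5db7c4-qqqjg"},
		{"data", "seaweedfs-86cc847c5c-5ht5z"},
	}
	mounted := map[volume]bool{} // stand-in for the actual state of the world

	for _, v := range desired {
		if mounted[v] {
			continue // already reconciled
		}
		fmt.Printf("operationExecutor.VerifyControllerAttachedVolume started for volume %q pod %q\n", v.name, v.pod)
		fmt.Printf("operationExecutor.MountVolume started for volume %q pod %q\n", v.name, v.pod)
		mounted[v] = true
		fmt.Printf("MountVolume.SetUp succeeded for volume %q pod %q\n", v.name, v.pod)
	}
}
```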
Apr 24 16:52:29.426998 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:52:29.426972 2578 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod935a464b_8bb1_491c_871d_704a4406c97b.slice/crio-8347c364bcbbc2138d9ee567bb11333c054800cdbaa88b7346f15bdae3a6d44e WatchSource:0}: Error finding container 8347c364bcbbc2138d9ee567bb11333c054800cdbaa88b7346f15bdae3a6d44e: Status 404 returned error can't find the container with id 8347c364bcbbc2138d9ee567bb11333c054800cdbaa88b7346f15bdae3a6d44e
Apr 24 16:52:29.427647 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:52:29.427538 2578 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c3037e0_7240_4f20_b277_66b756c6f9f7.slice/crio-77145122b693a17558581c8cd4df73f1ac0e484029f78ffc5957f660632abe42 WatchSource:0}: Error finding container 77145122b693a17558581c8cd4df73f1ac0e484029f78ffc5957f660632abe42: Status 404 returned error can't find the container with id 77145122b693a17558581c8cd4df73f1ac0e484029f78ffc5957f660632abe42
Apr 24 16:52:30.201154 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:30.201112 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/seaweedfs-86cc847c5c-5ht5z" event={"ID":"935a464b-8bb1-491c-871d-704a4406c97b","Type":"ContainerStarted","Data":"8347c364bcbbc2138d9ee567bb11333c054800cdbaa88b7346f15bdae3a6d44e"}
Apr 24 16:52:30.202459 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:30.202428 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-7f7fb4c66f-q6r6g" event={"ID":"5c3037e0-7240-4f20-b277-66b756c6f9f7","Type":"ContainerStarted","Data":"77145122b693a17558581c8cd4df73f1ac0e484029f78ffc5957f660632abe42"}
Apr 24 16:52:34.218379 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:34.218344 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/llmisvc-controller-manager-68cc5db7c4-qqqjg" event={"ID":"9caf8bd1-fec7-41b9-a6f4-b88775c03dab","Type":"ContainerStarted","Data":"f1988df12d4eee9022b37095cbb87892313c86e1621e43fe4d4cfd9e7697592a"}
Apr 24 16:52:34.218785 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:34.218401 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/llmisvc-controller-manager-68cc5db7c4-qqqjg"
Apr 24 16:52:34.219689 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:34.219667 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/seaweedfs-86cc847c5c-5ht5z" event={"ID":"935a464b-8bb1-491c-871d-704a4406c97b","Type":"ContainerStarted","Data":"71731b0311f9f3cd3ec25a40de1af1a9df5e49042242ac02622be1839b7d4c74"}
Apr 24 16:52:34.219780 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:34.219759 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/seaweedfs-86cc847c5c-5ht5z"
Apr 24 16:52:34.220921 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:34.220902 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-7f7fb4c66f-q6r6g" event={"ID":"5c3037e0-7240-4f20-b277-66b756c6f9f7","Type":"ContainerStarted","Data":"90f1c595af4683f702614b0b5b8bb7c920b1038755a1234d5d88335b2920f4a5"}
Apr 24 16:52:34.221037 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:34.221023 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/kserve-controller-manager-7f7fb4c66f-q6r6g"
Apr 24 16:52:34.237275 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:34.237238 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/llmisvc-controller-manager-68cc5db7c4-qqqjg" podStartSLOduration=7.740267802 podStartE2EDuration="12.237227403s" podCreationTimestamp="2026-04-24 16:52:22 +0000 UTC" firstStartedPulling="2026-04-24 16:52:29.157278678 +0000 UTC m=+432.753644941" lastFinishedPulling="2026-04-24 16:52:33.654238275 +0000 UTC m=+437.250604542" observedRunningTime="2026-04-24 16:52:34.236640909 +0000 UTC m=+437.833007193" watchObservedRunningTime="2026-04-24 16:52:34.237227403 +0000 UTC m=+437.833593688"
Apr 24 16:52:34.252792 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:34.252749 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/kserve-controller-manager-7f7fb4c66f-q6r6g" podStartSLOduration=7.990447232 podStartE2EDuration="12.252735816s" podCreationTimestamp="2026-04-24 16:52:22 +0000 UTC" firstStartedPulling="2026-04-24 16:52:29.428777951 +0000 UTC m=+433.025144213" lastFinishedPulling="2026-04-24 16:52:33.691066519 +0000 UTC m=+437.287432797" observedRunningTime="2026-04-24 16:52:34.252315889 +0000 UTC m=+437.848682173" watchObservedRunningTime="2026-04-24 16:52:34.252735816 +0000 UTC m=+437.849102103"
Apr 24 16:52:34.275056 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:34.275015 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/seaweedfs-86cc847c5c-5ht5z" podStartSLOduration=7.954919465 podStartE2EDuration="12.274996221s" podCreationTimestamp="2026-04-24 16:52:22 +0000 UTC" firstStartedPulling="2026-04-24 16:52:29.428377655 +0000 UTC m=+433.024743921" lastFinishedPulling="2026-04-24 16:52:33.748454411 +0000 UTC m=+437.344820677" observedRunningTime="2026-04-24 16:52:34.274500985 +0000 UTC m=+437.870867269" watchObservedRunningTime="2026-04-24 16:52:34.274996221 +0000 UTC m=+437.871362527"
Apr 24 16:52:40.226006 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:40.225973 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/seaweedfs-86cc847c5c-5ht5z"
Apr 24 16:52:50.267029 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:50.266962 2578 generic.go:358] "Generic (PLEG): container finished" podID="5c3037e0-7240-4f20-b277-66b756c6f9f7" containerID="90f1c595af4683f702614b0b5b8bb7c920b1038755a1234d5d88335b2920f4a5" exitCode=1
Apr 24 16:52:50.267318 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:50.267034 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-7f7fb4c66f-q6r6g" event={"ID":"5c3037e0-7240-4f20-b277-66b756c6f9f7","Type":"ContainerDied","Data":"90f1c595af4683f702614b0b5b8bb7c920b1038755a1234d5d88335b2920f4a5"}
Apr 24 16:52:50.267429 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:50.267413 2578 scope.go:117] "RemoveContainer" containerID="90f1c595af4683f702614b0b5b8bb7c920b1038755a1234d5d88335b2920f4a5"
Apr 24 16:52:50.268833 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:50.268794 2578 generic.go:358] "Generic (PLEG): container finished" podID="beef5116-19de-4a87-9cd5-1504e8568da1" containerID="cf3f3e2bd824f5b407c054d1c0d5791ba06a5c689ea540d547080ebb936ad2bc" exitCode=1
Apr 24 16:52:50.268958 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:50.268846 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-operator-ffbb595cb-x5bdf" event={"ID":"beef5116-19de-4a87-9cd5-1504e8568da1","Type":"ContainerDied","Data":"cf3f3e2bd824f5b407c054d1c0d5791ba06a5c689ea540d547080ebb936ad2bc"}
Apr 24 16:52:50.268958 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:50.268883 2578 scope.go:117] "RemoveContainer" containerID="7b99c67d3a61649c219ab9fa4de63863d1c7c6207dedcafec4b30d355af3940f"
Apr 24 16:52:50.269214 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:50.269194 2578 scope.go:117] "RemoveContainer" containerID="cf3f3e2bd824f5b407c054d1c0d5791ba06a5c689ea540d547080ebb936ad2bc"
Apr 24 16:52:50.269427 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:52:50.269406 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keda-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=keda-operator pod=keda-operator-ffbb595cb-x5bdf_openshift-keda(beef5116-19de-4a87-9cd5-1504e8568da1)\"" pod="openshift-keda/keda-operator-ffbb595cb-x5bdf" podUID="beef5116-19de-4a87-9cd5-1504e8568da1"
Apr 24 16:52:50.411936 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:50.411911 2578 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-keda/keda-operator-ffbb595cb-x5bdf"
Apr 24 16:52:51.121440 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:51.121419 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-operator-ffbb595cb-x5bdf"
Apr 24 16:52:51.272785 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:51.272761 2578 scope.go:117] "RemoveContainer" containerID="cf3f3e2bd824f5b407c054d1c0d5791ba06a5c689ea540d547080ebb936ad2bc"
Apr 24 16:52:51.273192 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:52:51.273006 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keda-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=keda-operator pod=keda-operator-ffbb595cb-x5bdf_openshift-keda(beef5116-19de-4a87-9cd5-1504e8568da1)\"" pod="openshift-keda/keda-operator-ffbb595cb-x5bdf" podUID="beef5116-19de-4a87-9cd5-1504e8568da1"
Apr 24 16:52:51.274320 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:51.274297 2578 generic.go:358] "Generic (PLEG): container finished" podID="86ead66c-d1c6-4b04-858c-9738a6b251b7" containerID="6843efd3309f3b00b3ca95e4e96ea33730eaa77bd023696b5154639d5c2353cc" exitCode=1
Apr 24 16:52:51.274428 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:51.274362 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj" event={"ID":"86ead66c-d1c6-4b04-858c-9738a6b251b7","Type":"ContainerDied","Data":"6843efd3309f3b00b3ca95e4e96ea33730eaa77bd023696b5154639d5c2353cc"}
Apr 24 16:52:51.274428 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:51.274407 2578 scope.go:117] "RemoveContainer" containerID="d3a02da74c8456ce543a01bcf2a4c97898a3c852903961775cd23e8a65e14b78"
Apr 24 16:52:51.274694 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:51.274673 2578 scope.go:117] "RemoveContainer" containerID="6843efd3309f3b00b3ca95e4e96ea33730eaa77bd023696b5154639d5c2353cc"
Apr 24 16:52:51.274878 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:52:51.274857 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"custom-metrics-autoscaler-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=custom-metrics-autoscaler-operator pod=custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj_openshift-keda(86ead66c-d1c6-4b04-858c-9738a6b251b7)\"" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj" podUID="86ead66c-d1c6-4b04-858c-9738a6b251b7"
Apr 24 16:52:51.275829 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:51.275791 2578 generic.go:358] "Generic (PLEG): container finished" podID="9caf8bd1-fec7-41b9-a6f4-b88775c03dab" containerID="f1988df12d4eee9022b37095cbb87892313c86e1621e43fe4d4cfd9e7697592a" exitCode=1
Apr 24 16:52:51.275900 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:51.275875 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/llmisvc-controller-manager-68cc5db7c4-qqqjg" event={"ID":"9caf8bd1-fec7-41b9-a6f4-b88775c03dab","Type":"ContainerDied","Data":"f1988df12d4eee9022b37095cbb87892313c86e1621e43fe4d4cfd9e7697592a"}
Apr 24 16:52:51.276212 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:51.276197 2578 scope.go:117] "RemoveContainer" containerID="f1988df12d4eee9022b37095cbb87892313c86e1621e43fe4d4cfd9e7697592a"
Apr 24 16:52:52.280724 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:52.280702 2578 scope.go:117] "RemoveContainer" containerID="cf3f3e2bd824f5b407c054d1c0d5791ba06a5c689ea540d547080ebb936ad2bc"
Apr 24 16:52:52.281087 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:52:52.280877 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keda-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=keda-operator pod=keda-operator-ffbb595cb-x5bdf_openshift-keda(beef5116-19de-4a87-9cd5-1504e8568da1)\"" pod="openshift-keda/keda-operator-ffbb595cb-x5bdf" podUID="beef5116-19de-4a87-9cd5-1504e8568da1"
Apr 24 16:52:52.829964 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:52.829937 2578 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj"
Apr 24 16:52:52.830243 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:52.830228 2578 scope.go:117] "RemoveContainer" containerID="6843efd3309f3b00b3ca95e4e96ea33730eaa77bd023696b5154639d5c2353cc"
Apr 24 16:52:52.830406 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:52:52.830386 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"custom-metrics-autoscaler-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=custom-metrics-autoscaler-operator pod=custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj_openshift-keda(86ead66c-d1c6-4b04-858c-9738a6b251b7)\"" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj" podUID="86ead66c-d1c6-4b04-858c-9738a6b251b7"
Apr 24 16:52:55.105601 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:55.105567 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj"
Apr 24 16:52:55.106031 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:55.105879 2578 scope.go:117] "RemoveContainer" containerID="6843efd3309f3b00b3ca95e4e96ea33730eaa77bd023696b5154639d5c2353cc"
Apr 24 16:52:55.106076 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:52:55.106039 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"custom-metrics-autoscaler-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=custom-metrics-autoscaler-operator pod=custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj_openshift-keda(86ead66c-d1c6-4b04-858c-9738a6b251b7)\"" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj" podUID="86ead66c-d1c6-4b04-858c-9738a6b251b7"
Apr 24 16:52:57.034648 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:52:57.034504 2578 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-24T16:52:47Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-24T16:52:47Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-24T16:52:47Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-24T16:52:47Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21aab62140b42b6dc9b5c8143084d89ee3e938eba8811eb0479fc2b6ad6bbd6e\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c303211b1d12d2daa9d18ed470194f6ce86e46174cc35d12b38f06834cb65cb0\\\"],\\\"sizeBytes\\\":1592330346},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a61807a85f6a37f17eb42644bbc038bc04fc489626435da28b0d2c4a30f7e02a\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da4bec2f08680a3155ddcbb96f8594244976dae6fc08fc0f5878c4b0a456b92e\\\"],\\\"sizeBytes\\\":1267137864},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ff861a4f4064f34ed8215c549b58ea833762ff00985f897190743095344c8b2\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b1a64699b0d35f7d206a46217f6b854077ea5e4524b566ded00c64cc85d4c1be\\\"],\\\"sizeBytes\\\":1065600018},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:99acb485c40736a41dca54d0a983d561e9f0cd87b0a3256d1e5ce0e0d45174b6\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1fc1fcb9645517ab568f2e99b25ded04cfb3971b75bf72188b75347d2808c7b\\\"],\\\"sizeBytes\\\":1065006420},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4ad1f767f2f48a2db76b34811c21cb04afb68e95ef143d2061869deea627a11a\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a3a48b734b960f0231b8efb31ec3c63e746255e8d9879e908af02332df60533d\\\"],\\\"sizeBytes\\\":977364430},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c4d9f8fd250636b548d533d8f0af8bd5494ad6e5026569cefc634f0283d50df\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:edd7b883364dcfd9a811079ba1b6106d36063c1dce522a7602a646fc54160570\\\"],\\\"sizeBytes\\\":974678236},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:469446113dc27d84c040c66620f3bbb42aa8aeee7bb3a0a6b6cb374aa5b386ba\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f596c54e96ab5a345df7a8cf1a14c953d39b3b43423c6b3002ba98df2c2fd0a2\\\"],\\\"sizeBytes\\\":884076775},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:87dabed0efcf4f363bbd86487833d817b60cae8e78db0a091305001f3040ea4b\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a86fd02d09596be124562146df9ab5bf33cd7cdfde29f701524b250a0e8beec0\\\"],\\\"sizeBytes\\\":753864795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77f7e33000484395db2216d431d7f91158cbb0ddee564b65288f4ec47e3188b8\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:786bf1f34d3636f95860ebe748f9dc62b84102c612a5b21ae6750c52e9eea253\\\"],\\\"sizeBytes\\\":727300480},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f2123db9953f98da0e43c266e6a8a070bf221533b995f87b7e358cc7498ca6d\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:96316433550661db3ef74c1200d3edc0ec9d0b87f2b41589aa7b5e841b66
60e3\\\"],\\\"sizeBytes\\\":701151772},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:321285dca5a2f9b911e21badd1e51ce49841ddc45c5c859b3a29f7982d7376cb\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c530f8874aa89acf6d1834480b89067db882a7a0706e37c8fd9539a4401fdff0\\\"],\\\"sizeBytes\\\":644526840},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24a4540aecd65dc2af9b2023150dfb2d385169654f781efe70df51c623076d78\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:331caf7efdfbf739b1585570e0004ebd8b5301a6977fbc7b2c64a07475354bc8\\\"],\\\"sizeBytes\\\":534708291},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82dc461ff286831f7476efc8de45fd918b894d4a80d9c285e9a9141fe43b993b\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f7e28d9938f33314269d8079649373b4befb36895b8636c4c2ec3c0fee47c91c\\\"],\\\"sizeBytes\\\":533474192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:abb4d487159fbc9b7148c690cd3a6ee638680f8f879ff6195ca1be5b393705b0\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c98201142213b52a3c1909f45800b5974157672377ecb8c102621ef164337008\\\"],\\\"sizeBytes\\\":514965743},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f7bf484ae9370ade47453d2e8dd49774694efed83f8431453db8965f642e63b\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98c903cc8c2ea492f4c9047febaa42525c20c5b414d4de2ce3df5eb65ef899e2\\\"],\\\"sizeBytes\\\":514858876},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:294af5c64228434d1ed6ee8ea3ac802e3c999aa847223e3b2efa18425a9fe421\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d97bd10b7c241845d0ed15e34f8d45e82126c1f184316dea148ffabc1cd670a\\\"],\\\"sizeBytes\\\":488332864},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:693650db31be5a14163035ec50174ac9b8d664d327d538eeb3e0c131e16f88c0\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c862e4b31529b530f930a8d0e2a75b53d2092f392a29d037d0312169e1d4a1ac\\\"],\\\"sizeBytes\\\":480938200},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:44136b348d0b6fc6ca07bd61902f4901177c2185045a72b42b2a1d2ad050e0f9\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e5d90e04210b2195777322c3270bbeb4397c72a84b5945ccccbb258ed770fb\\\"],\\\"sizeBytes\\\":480736321},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c12cfce95cb7b52d129eec71dec91a2cd7e820895ce1b9a111d68e3e3909aa78\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce654d8c5680faaa440b4a68965a0a29cfc189b82420004440da6762273538b2\\\"],\\\"sizeBytes\\\":480669231},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9b333f47df911dd05a9a10cb8ea988e97d3174490b348b8e46a161d9581d64\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:38b41ae697f031205813679347380d7f258be2a57902ad4494285782a241086b\\\"],\\\"sizeBytes\\\":474198918},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c1b871a1e7148de8d1101e925186df33318adc5adffbaba3f2f13af71b08367\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\\\"],\\\"sizeBytes\\\":468435751},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4d914876eb0cd2cf9c345582cdc1a5cf4803a5850ee766b875b8877b5c776df9\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58fb4327e682995
ba0eee3fff6d4f4e228f8c4ff1d08b2822b54dee2db41ebf8\\\"],\\\"sizeBytes\\\":450507899},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f7b139fc67972daf070411a2137da81f179d753ddaafa8d3c791165a9564dff\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49ae6b645f147135aff6d0f464fcd64972abf57733f027318ca7376691eece01\\\"],\\\"sizeBytes\\\":426505480},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:165f05fdd7b633269db2465df57b674feec3a050388e931c6a481546e7b63ae9\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:232019e2eb8e13139570277b223ff822086fa83edc73958cbf919d6b57118068\\\"],\\\"sizeBytes\\\":426337527},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8b4514b20ab8125dc4f2ee9661f8363c837031926b40e7c54a36d1efa08456d1\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d87b9fedbc92cc502b5f435d9d5798507256bad49eda2040ac3645623616b5f5\\\"],\\\"sizeBytes\\\":420585449},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91034a2e8fa729a060ad18831a3c6e5de5d2b7b3de437b198ddc24fcb724dcf6\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8f85524150d750c02366f1cff4380fbe657bea321e18b6f2c12c16153bae7e0\\\"],\\\"sizeBytes\\\":412926967},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:18d788b8deb049d24dfdb101371f6d2211a5e731bacd64b08adb97f66b6c4eb1\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43d7e5fe91598427c1fff01aac179d8add7051f71a53a126648cd68ae5d2435f\\\"],\\\"sizeBytes\\\":408523640},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69239195f3911c73a84a911eed79c9d51d0a896f5f3405f8511f52738740d044\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f3fc18a785145e05d543030637e66298ab943dae18b52aa448f6023bd0d8151\\\"],\\\"sizeBytes\\\":405607150},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76a0009edee9ca69023b834b7eff2d2885fc5d8744dc34a058abc09ca6e45518\\\",\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8cfe89231412ff3ee8cb6207fa0be33cad0f08e88c9c0f1e9f7e8c6f14d6715\\\"],\\\"sizeBytes\\\":396599503},{\\\"names\\\":[\\\"registry.redhat.io/custom-metrics-autoscaler/custom-metrics-autoscaler-rhel9-operator@sha256:1a99333dd543488726051028e58eea4eaf5585a5993264faffbb7ccc151fc83e\\\",\\\"registry.redhat.io/custom-metrics-autoscaler/custom-metrics-autoscaler-rhel9-operator@sha256:d358d98c0cda5100147ad67a34b3ee19709cb0be33040eb89d17e57ee46b8542\\\"],\\\"sizeBytes\\\":384641915},{\\\"names\\\":[\\\"registry.redhat.io/custom-metrics-autoscaler/custom-metrics-autoscaler-rhel9@sha256:a02f9d2ff968196d532f9ca1858ec1ea3ca81726f111df22cf28bb6d7818f2ca\\\",\\\"registry.redhat.io/custom-metrics-autoscaler/custom-metrics-autoscaler-rhel9@sha256:de10b1c57fcb81289bf9e0093767f2ee93dd60f5680a71f8c87d941045d9d4ed\\\"],\\\"sizeBytes\\\":352937380},{\\\"names\\\":[\\\"registry.redhat.io/custom-metrics-autoscaler/custom-metrics-autoscaler-adapter-rhel9@sha256:6a2c4a1a0ec29fc1756c3b03a572333711a665cc0152e02e97ced94af3adef0d\\\",\\\"registry.redhat.io/custom-metrics-autoscaler/custom-metrics-autoscaler-adapter-rhel9@sha256:7f9e53e5a6aa6670ab29f6e0001326f75393a17efb515bf174d4c9515c152758\\\"],\\\"sizeBytes\\\":284673605},{\\\"names\\\":[\\\"quay.io/opendatahub/kserve-controller@sha256:8d9ceff674e9837d292fc58848f1b85264c8a22fd8bcc2277b524259e0614218\\\",\\\"quay.io/opendatahub/kserve-controller@sha256:8dc144dff750ffb3c025ec5d2e9e647d7c91556faee09a6833133ade4c98695e\\\"],\\\"
sizeBytes\\\":239824345},{\\\"names\\\":[\\\"docker.io/chrislusf/seaweedfs@sha256:10fa7df90911dd83439f4d3d792a1c5c6c630121cb2094ba209f42d4b0ca975d\\\",\\\"docker.io/chrislusf/seaweedfs@sha256:a27e9c432f1dfaafb5bb3f5b39065c0df7a15423a3894025c714b3f06f998aeb\\\"],\\\"sizeBytes\\\":195922878},{\\\"names\\\":[\\\"registry.redhat.io/custom-metrics-autoscaler/custom-metrics-autoscaler-admission-webhooks-rhel9@sha256:53a8ffbe94da6658c66bcf8d85e2e113f7bd85cff3f42a258e5ce6662ec2cb1d\\\",\\\"registry.redhat.io/custom-metrics-autoscaler/custom-metrics-autoscaler-admission-webhooks-rhel9@sha256:f91bb1511831f42c24ed54a08e6109f50c56961d1aa62a02766355cb2a77e7f6\\\"],\\\"sizeBytes\\\":171341307},{\\\"names\\\":[\\\"ghcr.io/opendatahub-io/kserve/odh-kserve-llmisvc-controller@sha256:5569472d5499401bb4e422f9001a8c08eb6e06f4efe12f1e815a48478d5f46a9\\\",\\\"ghcr.io/opendatahub-io/kserve/odh-kserve-llmisvc-controller@sha256:d6a7658df2f6b5c7653fada85011d6cad148e5eb90493ea7186f60d9f9c62893\\\",\\\"ghcr.io/opendatahub-io/kserve/odh-kserve-llmisvc-controller:release-v0.17\\\"],\\\"sizeBytes\\\":133391167},{\\\"names\\\":[\\\"registry.redhat.io/custom-metrics-autoscaler/custom-metrics-autoscaler-operator-bundle@sha256:e746b1aafcdcd82a6d2d069478d2870ada48c9f026d3119fc0977b333138c4ba\\\"],\\\"sizeBytes\\\":108540851}]}}\" for node \"ip-10-0-129-204.ec2.internal\": Patch \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/api/v1/nodes/ip-10-0-129-204.ec2.internal/status?timeout=10s\": context deadline exceeded"
Apr 24 16:52:58.773683 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:58.773640 2578 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kserve/llmisvc-controller-manager-68cc5db7c4-qqqjg"
Apr 24 16:52:58.780877 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:52:58.780854 2578 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kserve/kserve-controller-manager-7f7fb4c66f-q6r6g"
Apr 24 16:52:59.063771 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:52:59.063703 2578 controller.go:195] "Failed to update lease" err="Put \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-129-204.ec2.internal?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 24 16:53:00.304701 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:00.304670 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-7f7fb4c66f-q6r6g" event={"ID":"5c3037e0-7240-4f20-b277-66b756c6f9f7","Type":"ContainerStarted","Data":"e2807b53ae9de70b3445c8833ef2ab63e9eeda1ed427365aad149a5dded42aaa"}
Apr 24 16:53:00.305029 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:00.304753 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/kserve-controller-manager-7f7fb4c66f-q6r6g"
Apr 24 16:53:01.309242 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:01.309206 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/llmisvc-controller-manager-68cc5db7c4-qqqjg" event={"ID":"9caf8bd1-fec7-41b9-a6f4-b88775c03dab","Type":"ContainerStarted","Data":"75469d675a65b96215d70e5997bdcdb8e5dc4b92cb98acacca2be91f72f1a5a0"}
Apr 24 16:53:01.309660 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:01.309280 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/llmisvc-controller-manager-68cc5db7c4-qqqjg"
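The long "failed to patch status" records in this stretch are the kubelet's node-status heartbeat: a strategic-merge patch sent to /api/v1/nodes/<name>/status, where the $setElementOrder/conditions directive pins the order of the merged conditions list. A Go sketch that only reconstructs the skeleton of that patch body, with values copied from the log (no API call is made):

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Skeleton of the strategic-merge patch visible in the failed request above.
	patch := map[string]any{
		"status": map[string]any{
			"$setElementOrder/conditions": []map[string]string{
				{"type": "MemoryPressure"}, {"type": "DiskPressure"},
				{"type": "PIDPressure"}, {"type": "Ready"},
			},
			"conditions": []map[string]string{
				{"lastHeartbeatTime": "2026-04-24T16:52:47Z", "type": "MemoryPressure"},
				{"lastHeartbeatTime": "2026-04-24T16:52:47Z", "type": "DiskPressure"},
				{"lastHeartbeatTime": "2026-04-24T16:52:47Z", "type": "PIDPressure"},
				{"lastHeartbeatTime": "2026-04-24T16:52:47Z", "type": "Ready"},
			},
		},
	}
	b, _ := json.MarshalIndent(patch, "", "  ")
	fmt.Println(string(b))
}
```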
Apr 24 16:53:06.917255 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:06.917227 2578 scope.go:117] "RemoveContainer" containerID="cf3f3e2bd824f5b407c054d1c0d5791ba06a5c689ea540d547080ebb936ad2bc"
Apr 24 16:53:08.329373 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:08.329342 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-operator-ffbb595cb-x5bdf" event={"ID":"beef5116-19de-4a87-9cd5-1504e8568da1","Type":"ContainerStarted","Data":"eaf910d93103faccc08ae169389e564d55eead41c115efff36e792db5d1c9407"}
Apr 24 16:53:08.329758 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:08.329542 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-operator-ffbb595cb-x5bdf"
Apr 24 16:53:08.916827 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:08.916769 2578 scope.go:117] "RemoveContainer" containerID="6843efd3309f3b00b3ca95e4e96ea33730eaa77bd023696b5154639d5c2353cc"
Apr 24 16:53:27.130323 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:53:27.130272 2578 controller.go:195] "Failed to update lease" err="Put \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-129-204.ec2.internal?timeout=10s\": context deadline exceeded"
Apr 24 16:53:29.333494 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:29.333461 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-operator-ffbb595cb-x5bdf"
Apr 24 16:53:30.450450 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:53:30.447373 2578 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"ip-10-0-129-204.ec2.internal\": the object has been modified; please apply your changes to the latest version and try again"
Apr 24 16:53:31.314296 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:31.314262 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/kserve-controller-manager-7f7fb4c66f-q6r6g"
Apr 24 16:53:32.313673 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:32.313648 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/llmisvc-controller-manager-68cc5db7c4-qqqjg"
Apr 24 16:53:36.414351 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:36.414323 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj" event={"ID":"86ead66c-d1c6-4b04-858c-9738a6b251b7","Type":"ContainerStarted","Data":"a20c0dae1f6fc44bc07ca2ee6c57849e2e8337647b995b8833b0301e42c7b9f7"}
Apr 24 16:53:36.414683 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:36.414526 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj"
Apr 24 16:53:50.412570 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:50.412534 2578 patch_prober.go:28] interesting pod/keda-operator-ffbb595cb-x5bdf container/keda-operator namespace/openshift-keda: Liveness probe status=failure output="Get \"http://10.134.0.13:8081/healthz\": dial tcp 10.134.0.13:8081: connect: connection refused" start-of-body=
Apr 24 16:53:50.412897 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:50.412605 2578 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-keda/keda-operator-ffbb595cb-x5bdf" podUID="beef5116-19de-4a87-9cd5-1504e8568da1" containerName="keda-operator" probeResult="failure" output="Get \"http://10.134.0.13:8081/healthz\": dial tcp 10.134.0.13:8081: connect: connection refused"
Apr 24 16:53:50.460072 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:50.460044 2578 generic.go:358] "Generic (PLEG): container finished" podID="5c3037e0-7240-4f20-b277-66b756c6f9f7" containerID="e2807b53ae9de70b3445c8833ef2ab63e9eeda1ed427365aad149a5dded42aaa" exitCode=1
Apr 24 16:53:50.460183 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:50.460126 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-7f7fb4c66f-q6r6g" event={"ID":"5c3037e0-7240-4f20-b277-66b756c6f9f7","Type":"ContainerDied","Data":"e2807b53ae9de70b3445c8833ef2ab63e9eeda1ed427365aad149a5dded42aaa"}
Apr 24 16:53:50.460250 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:50.460183 2578 scope.go:117] "RemoveContainer" containerID="90f1c595af4683f702614b0b5b8bb7c920b1038755a1234d5d88335b2920f4a5"
Apr 24 16:53:50.460566 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:50.460543 2578 scope.go:117] "RemoveContainer" containerID="e2807b53ae9de70b3445c8833ef2ab63e9eeda1ed427365aad149a5dded42aaa"
Apr 24 16:53:50.460794 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:53:50.460774 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=kserve-controller-manager-7f7fb4c66f-q6r6g_kserve(5c3037e0-7240-4f20-b277-66b756c6f9f7)\"" pod="kserve/kserve-controller-manager-7f7fb4c66f-q6r6g" podUID="5c3037e0-7240-4f20-b277-66b756c6f9f7"
Apr 24 16:53:50.462093 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:50.462068 2578 generic.go:358] "Generic (PLEG): container finished" podID="beef5116-19de-4a87-9cd5-1504e8568da1" containerID="eaf910d93103faccc08ae169389e564d55eead41c115efff36e792db5d1c9407" exitCode=1
Apr 24 16:53:50.462192 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:50.462145 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-operator-ffbb595cb-x5bdf" event={"ID":"beef5116-19de-4a87-9cd5-1504e8568da1","Type":"ContainerDied","Data":"eaf910d93103faccc08ae169389e564d55eead41c115efff36e792db5d1c9407"}
Apr 24 16:53:50.462428 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:50.462415 2578 scope.go:117] "RemoveContainer" containerID="eaf910d93103faccc08ae169389e564d55eead41c115efff36e792db5d1c9407"
Apr 24 16:53:50.462584 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:53:50.462568 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keda-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=keda-operator pod=keda-operator-ffbb595cb-x5bdf_openshift-keda(beef5116-19de-4a87-9cd5-1504e8568da1)\"" pod="openshift-keda/keda-operator-ffbb595cb-x5bdf" podUID="beef5116-19de-4a87-9cd5-1504e8568da1"
Apr 24 16:53:50.470307 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:50.470286 2578 scope.go:117] "RemoveContainer" containerID="cf3f3e2bd824f5b407c054d1c0d5791ba06a5c689ea540d547080ebb936ad2bc"
Apr 24 16:53:50.677292 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:53:50.677211 2578 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-24T16:53:40Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-24T16:53:40Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-24T16:53:40Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-24T16:53:40Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"ip-10-0-129-204.ec2.internal\": Patch \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/api/v1/nodes/ip-10-0-129-204.ec2.internal/status?timeout=10s\": context deadline exceeded"
Apr 24 16:53:50.829061 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:53:50.829024 2578 controller.go:195] "Failed to update lease" err="Put \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-129-204.ec2.internal?timeout=10s\": context deadline exceeded"
Apr 24 16:53:51.310517 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:51.310491 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/kserve-controller-manager-7f7fb4c66f-q6r6g"
Apr 24 16:53:51.466306 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:51.466272 2578 generic.go:358] "Generic (PLEG): container finished" podID="9caf8bd1-fec7-41b9-a6f4-b88775c03dab" containerID="75469d675a65b96215d70e5997bdcdb8e5dc4b92cb98acacca2be91f72f1a5a0" exitCode=1
Apr 24 16:53:51.466690 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:51.466345 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/llmisvc-controller-manager-68cc5db7c4-qqqjg" event={"ID":"9caf8bd1-fec7-41b9-a6f4-b88775c03dab","Type":"ContainerDied","Data":"75469d675a65b96215d70e5997bdcdb8e5dc4b92cb98acacca2be91f72f1a5a0"}
Apr 24 16:53:51.466690 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:51.466394 2578 scope.go:117] "RemoveContainer" containerID="f1988df12d4eee9022b37095cbb87892313c86e1621e43fe4d4cfd9e7697592a"
Apr 24 16:53:51.466830 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:51.466694 2578 scope.go:117] "RemoveContainer" containerID="75469d675a65b96215d70e5997bdcdb8e5dc4b92cb98acacca2be91f72f1a5a0"
Apr 24 16:53:51.466953 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:53:51.466909 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=llmisvc-controller-manager-68cc5db7c4-qqqjg_kserve(9caf8bd1-fec7-41b9-a6f4-b88775c03dab)\"" pod="kserve/llmisvc-controller-manager-68cc5db7c4-qqqjg" podUID="9caf8bd1-fec7-41b9-a6f4-b88775c03dab"
Apr 24 16:53:51.468285 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:51.468261 2578 scope.go:117] "RemoveContainer" containerID="e2807b53ae9de70b3445c8833ef2ab63e9eeda1ed427365aad149a5dded42aaa"
Apr 24 16:53:51.468451 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:53:51.468433 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=kserve-controller-manager-7f7fb4c66f-q6r6g_kserve(5c3037e0-7240-4f20-b277-66b756c6f9f7)\"" pod="kserve/kserve-controller-manager-7f7fb4c66f-q6r6g" podUID="5c3037e0-7240-4f20-b277-66b756c6f9f7"
Apr 24 16:53:52.312626 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:52.312603 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/llmisvc-controller-manager-68cc5db7c4-qqqjg"
Apr 24 16:53:52.473625 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:52.473601 2578 scope.go:117] "RemoveContainer" containerID="75469d675a65b96215d70e5997bdcdb8e5dc4b92cb98acacca2be91f72f1a5a0"
Apr 24 16:53:52.473933 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:53:52.473748 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=llmisvc-controller-manager-68cc5db7c4-qqqjg_kserve(9caf8bd1-fec7-41b9-a6f4-b88775c03dab)\"" pod="kserve/llmisvc-controller-manager-68cc5db7c4-qqqjg" podUID="9caf8bd1-fec7-41b9-a6f4-b88775c03dab"
Apr 24 16:53:57.419991 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:57.419901 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj"
Apr 24 16:53:58.773517 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:58.773482 2578 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kserve/llmisvc-controller-manager-68cc5db7c4-qqqjg"
Apr 24 16:53:58.773938 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:58.773881 2578 scope.go:117] "RemoveContainer" containerID="75469d675a65b96215d70e5997bdcdb8e5dc4b92cb98acacca2be91f72f1a5a0"
Apr 24 16:53:58.774072 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:53:58.774052 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=llmisvc-controller-manager-68cc5db7c4-qqqjg_kserve(9caf8bd1-fec7-41b9-a6f4-b88775c03dab)\"" pod="kserve/llmisvc-controller-manager-68cc5db7c4-qqqjg" podUID="9caf8bd1-fec7-41b9-a6f4-b88775c03dab"
Apr 24 16:53:58.780040 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:58.780022 2578 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kserve/kserve-controller-manager-7f7fb4c66f-q6r6g"
Apr 24 16:53:58.780355 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:58.780340 2578 scope.go:117] "RemoveContainer" containerID="e2807b53ae9de70b3445c8833ef2ab63e9eeda1ed427365aad149a5dded42aaa"
Apr 24 16:53:58.780503 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:53:58.780484 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=kserve-controller-manager-7f7fb4c66f-q6r6g_kserve(5c3037e0-7240-4f20-b277-66b756c6f9f7)\"" pod="kserve/kserve-controller-manager-7f7fb4c66f-q6r6g" podUID="5c3037e0-7240-4f20-b277-66b756c6f9f7"
Apr 24 16:53:59.332771 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:59.332730 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-operator-ffbb595cb-x5bdf"
Apr 24 16:53:59.333163 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:53:59.333142 2578 scope.go:117] "RemoveContainer"
containerID="eaf910d93103faccc08ae169389e564d55eead41c115efff36e792db5d1c9407" Apr 24 16:53:59.333368 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:53:59.333344 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keda-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=keda-operator pod=keda-operator-ffbb595cb-x5bdf_openshift-keda(beef5116-19de-4a87-9cd5-1504e8568da1)\"" pod="openshift-keda/keda-operator-ffbb595cb-x5bdf" podUID="beef5116-19de-4a87-9cd5-1504e8568da1" Apr 24 16:54:00.412207 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:00.412172 2578 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-keda/keda-operator-ffbb595cb-x5bdf" Apr 24 16:54:00.412551 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:00.412487 2578 scope.go:117] "RemoveContainer" containerID="eaf910d93103faccc08ae169389e564d55eead41c115efff36e792db5d1c9407" Apr 24 16:54:00.412658 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:54:00.412642 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keda-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=keda-operator pod=keda-operator-ffbb595cb-x5bdf_openshift-keda(beef5116-19de-4a87-9cd5-1504e8568da1)\"" pod="openshift-keda/keda-operator-ffbb595cb-x5bdf" podUID="beef5116-19de-4a87-9cd5-1504e8568da1" Apr 24 16:54:00.678052 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:54:00.677964 2578 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-129-204.ec2.internal\": Get \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/api/v1/nodes/ip-10-0-129-204.ec2.internal?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 24 16:54:00.829376 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:54:00.829342 2578 controller.go:195] "Failed to update lease" err="Put \"https://ae673ebd1d2d94a77979cf1bf4f044d4-66d4575ed5e01ae7.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-129-204.ec2.internal?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 24 16:54:08.298457 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:08.298414 2578 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cn24q/must-gather-rpxn5"] Apr 24 16:54:08.301821 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:08.301767 2578 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cn24q/must-gather-rpxn5" Apr 24 16:54:08.302065 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:08.302042 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cn24q/must-gather-rpxn5"] Apr 24 16:54:08.302476 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:54:08.302455 2578 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"ip-10-0-129-204.ec2.internal\": the object has been modified; please apply your changes to the latest version and try again" Apr 24 16:54:08.304565 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:08.304544 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-cn24q\"/\"kube-root-ca.crt\"" Apr 24 16:54:08.304685 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:08.304553 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-cn24q\"/\"openshift-service-ca.crt\"" Apr 24 16:54:08.304685 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:08.304594 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-cn24q\"/\"default-dockercfg-67hh5\"" Apr 24 16:54:08.365395 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:08.365373 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6krl\" (UniqueName: \"kubernetes.io/projected/6e0caaa1-70aa-4b22-871f-904052a4e6a8-kube-api-access-d6krl\") pod \"must-gather-rpxn5\" (UID: \"6e0caaa1-70aa-4b22-871f-904052a4e6a8\") " pod="openshift-must-gather-cn24q/must-gather-rpxn5" Apr 24 16:54:08.365525 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:08.365412 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e0caaa1-70aa-4b22-871f-904052a4e6a8-must-gather-output\") pod \"must-gather-rpxn5\" (UID: \"6e0caaa1-70aa-4b22-871f-904052a4e6a8\") " pod="openshift-must-gather-cn24q/must-gather-rpxn5" Apr 24 16:54:08.466330 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:08.466303 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d6krl\" (UniqueName: \"kubernetes.io/projected/6e0caaa1-70aa-4b22-871f-904052a4e6a8-kube-api-access-d6krl\") pod \"must-gather-rpxn5\" (UID: \"6e0caaa1-70aa-4b22-871f-904052a4e6a8\") " pod="openshift-must-gather-cn24q/must-gather-rpxn5" Apr 24 16:54:08.466491 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:08.466339 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e0caaa1-70aa-4b22-871f-904052a4e6a8-must-gather-output\") pod \"must-gather-rpxn5\" (UID: \"6e0caaa1-70aa-4b22-871f-904052a4e6a8\") " pod="openshift-must-gather-cn24q/must-gather-rpxn5" Apr 24 16:54:08.466631 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:08.466615 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e0caaa1-70aa-4b22-871f-904052a4e6a8-must-gather-output\") pod \"must-gather-rpxn5\" (UID: \"6e0caaa1-70aa-4b22-871f-904052a4e6a8\") " pod="openshift-must-gather-cn24q/must-gather-rpxn5" Apr 24 16:54:08.478785 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:08.478761 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6krl\" (UniqueName: 
\"kubernetes.io/projected/6e0caaa1-70aa-4b22-871f-904052a4e6a8-kube-api-access-d6krl\") pod \"must-gather-rpxn5\" (UID: \"6e0caaa1-70aa-4b22-871f-904052a4e6a8\") " pod="openshift-must-gather-cn24q/must-gather-rpxn5" Apr 24 16:54:08.625826 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:08.625779 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cn24q/must-gather-rpxn5" Apr 24 16:54:08.743064 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:08.743035 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cn24q/must-gather-rpxn5"] Apr 24 16:54:08.746351 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:54:08.746323 2578 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e0caaa1_70aa_4b22_871f_904052a4e6a8.slice/crio-5bc281d79d453b9e5b2f5b25e55580263debd461945d55528b864242dea21502 WatchSource:0}: Error finding container 5bc281d79d453b9e5b2f5b25e55580263debd461945d55528b864242dea21502: Status 404 returned error can't find the container with id 5bc281d79d453b9e5b2f5b25e55580263debd461945d55528b864242dea21502 Apr 24 16:54:09.521236 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:09.521199 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cn24q/must-gather-rpxn5" event={"ID":"6e0caaa1-70aa-4b22-871f-904052a4e6a8","Type":"ContainerStarted","Data":"5bc281d79d453b9e5b2f5b25e55580263debd461945d55528b864242dea21502"} Apr 24 16:54:10.916578 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:10.916551 2578 scope.go:117] "RemoveContainer" containerID="75469d675a65b96215d70e5997bdcdb8e5dc4b92cb98acacca2be91f72f1a5a0" Apr 24 16:54:11.917014 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:11.916983 2578 scope.go:117] "RemoveContainer" containerID="e2807b53ae9de70b3445c8833ef2ab63e9eeda1ed427365aad149a5dded42aaa" Apr 24 16:54:11.917376 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:11.917074 2578 scope.go:117] "RemoveContainer" containerID="eaf910d93103faccc08ae169389e564d55eead41c115efff36e792db5d1c9407" Apr 24 16:54:13.535732 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:13.535551 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/llmisvc-controller-manager-68cc5db7c4-qqqjg" event={"ID":"9caf8bd1-fec7-41b9-a6f4-b88775c03dab","Type":"ContainerStarted","Data":"1a1f2e92ea507fe53f698eec8bebe804895dcde338837ab975e3a45d603843da"} Apr 24 16:54:13.536178 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:13.535825 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/llmisvc-controller-manager-68cc5db7c4-qqqjg" Apr 24 16:54:13.537855 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:13.537826 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-7f7fb4c66f-q6r6g" event={"ID":"5c3037e0-7240-4f20-b277-66b756c6f9f7","Type":"ContainerStarted","Data":"958abd5c90b796f6a4afa7adf43acd92659b2942a9b3f514a7e502d1e6673cce"} Apr 24 16:54:13.538360 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:13.538321 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/kserve-controller-manager-7f7fb4c66f-q6r6g" Apr 24 16:54:14.543488 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:14.543453 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-operator-ffbb595cb-x5bdf" 
event={"ID":"beef5116-19de-4a87-9cd5-1504e8568da1","Type":"ContainerStarted","Data":"02214bdc99f39e2b95e79ba8b86c4f380410721bac98a7182a37eaebc24ab327"} Apr 24 16:54:14.543981 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:14.543949 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-operator-ffbb595cb-x5bdf" Apr 24 16:54:17.554493 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:17.554447 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cn24q/must-gather-rpxn5" event={"ID":"6e0caaa1-70aa-4b22-871f-904052a4e6a8","Type":"ContainerStarted","Data":"54d7893f6e3c45673e300e1e8a9d42900d550a04efceb3d8ff7d260b1167dd8b"} Apr 24 16:54:17.554493 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:17.554493 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cn24q/must-gather-rpxn5" event={"ID":"6e0caaa1-70aa-4b22-871f-904052a4e6a8","Type":"ContainerStarted","Data":"56152e2c00b31f284415716ee42f77b752fdf4a48c7310390f52f3d062fe0146"} Apr 24 16:54:17.573283 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:17.573237 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-cn24q/must-gather-rpxn5" podStartSLOduration=5.596838418 podStartE2EDuration="13.573223225s" podCreationTimestamp="2026-04-24 16:54:04 +0000 UTC" firstStartedPulling="2026-04-24 16:54:08.747974686 +0000 UTC m=+532.344340950" lastFinishedPulling="2026-04-24 16:54:16.724359492 +0000 UTC m=+540.320725757" observedRunningTime="2026-04-24 16:54:17.572360032 +0000 UTC m=+541.168726319" watchObservedRunningTime="2026-04-24 16:54:17.573223225 +0000 UTC m=+541.169589511" Apr 24 16:54:27.584907 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:27.584804 2578 generic.go:358] "Generic (PLEG): container finished" podID="6e0caaa1-70aa-4b22-871f-904052a4e6a8" containerID="56152e2c00b31f284415716ee42f77b752fdf4a48c7310390f52f3d062fe0146" exitCode=0 Apr 24 16:54:27.584907 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:27.584861 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cn24q/must-gather-rpxn5" event={"ID":"6e0caaa1-70aa-4b22-871f-904052a4e6a8","Type":"ContainerDied","Data":"56152e2c00b31f284415716ee42f77b752fdf4a48c7310390f52f3d062fe0146"} Apr 24 16:54:27.585401 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:27.585121 2578 scope.go:117] "RemoveContainer" containerID="56152e2c00b31f284415716ee42f77b752fdf4a48c7310390f52f3d062fe0146" Apr 24 16:54:28.178843 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:28.178802 2578 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cn24q_must-gather-rpxn5_6e0caaa1-70aa-4b22-871f-904052a4e6a8/gather/0.log" Apr 24 16:54:31.446142 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:31.446111 2578 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_global-pull-secret-syncer-62rlm_2fb12526-7d12-4304-a9c9-f8975b13ac2b/global-pull-secret-syncer/0.log" Apr 24 16:54:31.554505 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:31.554479 2578 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_konnectivity-agent-jk7f4_5e89d705-97ba-4bce-a2d2-d806b5547f4f/konnectivity-agent/0.log" Apr 24 16:54:31.631748 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:31.631725 2578 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_kube-apiserver-proxy-ip-10-0-129-204.ec2.internal_dd54af8f80e3b05db4203800e6cae347/haproxy/0.log" Apr 24 16:54:33.532875 ip-10-0-129-204 
kubenswrapper[2578]: I0424 16:54:33.532837 2578 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cn24q/must-gather-rpxn5"] Apr 24 16:54:33.533355 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:33.533167 2578 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-must-gather-cn24q/must-gather-rpxn5" podUID="6e0caaa1-70aa-4b22-871f-904052a4e6a8" containerName="copy" containerID="cri-o://54d7893f6e3c45673e300e1e8a9d42900d550a04efceb3d8ff7d260b1167dd8b" gracePeriod=2 Apr 24 16:54:33.535356 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:33.535324 2578 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cn24q/must-gather-rpxn5"] Apr 24 16:54:33.535478 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:33.535448 2578 status_manager.go:895] "Failed to get status for pod" podUID="6e0caaa1-70aa-4b22-871f-904052a4e6a8" pod="openshift-must-gather-cn24q/must-gather-rpxn5" err="pods \"must-gather-rpxn5\" is forbidden: User \"system:node:ip-10-0-129-204.ec2.internal\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-cn24q\": no relationship found between node 'ip-10-0-129-204.ec2.internal' and this object" Apr 24 16:54:33.757370 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:33.757350 2578 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cn24q_must-gather-rpxn5_6e0caaa1-70aa-4b22-871f-904052a4e6a8/copy/0.log" Apr 24 16:54:33.757709 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:33.757694 2578 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cn24q/must-gather-rpxn5" Apr 24 16:54:33.759886 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:33.759862 2578 status_manager.go:895] "Failed to get status for pod" podUID="6e0caaa1-70aa-4b22-871f-904052a4e6a8" pod="openshift-must-gather-cn24q/must-gather-rpxn5" err="pods \"must-gather-rpxn5\" is forbidden: User \"system:node:ip-10-0-129-204.ec2.internal\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-cn24q\": no relationship found between node 'ip-10-0-129-204.ec2.internal' and this object" Apr 24 16:54:33.851180 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:33.851129 2578 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e0caaa1-70aa-4b22-871f-904052a4e6a8-must-gather-output\") pod \"6e0caaa1-70aa-4b22-871f-904052a4e6a8\" (UID: \"6e0caaa1-70aa-4b22-871f-904052a4e6a8\") " Apr 24 16:54:33.851259 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:33.851180 2578 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6krl\" (UniqueName: \"kubernetes.io/projected/6e0caaa1-70aa-4b22-871f-904052a4e6a8-kube-api-access-d6krl\") pod \"6e0caaa1-70aa-4b22-871f-904052a4e6a8\" (UID: \"6e0caaa1-70aa-4b22-871f-904052a4e6a8\") " Apr 24 16:54:33.851426 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:33.851402 2578 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e0caaa1-70aa-4b22-871f-904052a4e6a8-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "6e0caaa1-70aa-4b22-871f-904052a4e6a8" (UID: "6e0caaa1-70aa-4b22-871f-904052a4e6a8"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 24 16:54:33.853172 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:33.853148 2578 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e0caaa1-70aa-4b22-871f-904052a4e6a8-kube-api-access-d6krl" (OuterVolumeSpecName: "kube-api-access-d6krl") pod "6e0caaa1-70aa-4b22-871f-904052a4e6a8" (UID: "6e0caaa1-70aa-4b22-871f-904052a4e6a8"). InnerVolumeSpecName "kube-api-access-d6krl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 16:54:33.951854 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:33.951830 2578 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e0caaa1-70aa-4b22-871f-904052a4e6a8-must-gather-output\") on node \"ip-10-0-129-204.ec2.internal\" DevicePath \"\"" Apr 24 16:54:33.951854 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:33.951853 2578 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d6krl\" (UniqueName: \"kubernetes.io/projected/6e0caaa1-70aa-4b22-871f-904052a4e6a8-kube-api-access-d6krl\") on node \"ip-10-0-129-204.ec2.internal\" DevicePath \"\"" Apr 24 16:54:34.604613 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:34.604587 2578 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cn24q_must-gather-rpxn5_6e0caaa1-70aa-4b22-871f-904052a4e6a8/copy/0.log" Apr 24 16:54:34.605010 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:34.604904 2578 generic.go:358] "Generic (PLEG): container finished" podID="6e0caaa1-70aa-4b22-871f-904052a4e6a8" containerID="54d7893f6e3c45673e300e1e8a9d42900d550a04efceb3d8ff7d260b1167dd8b" exitCode=143 Apr 24 16:54:34.605010 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:34.604958 2578 scope.go:117] "RemoveContainer" containerID="54d7893f6e3c45673e300e1e8a9d42900d550a04efceb3d8ff7d260b1167dd8b" Apr 24 16:54:34.605103 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:34.604961 2578 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cn24q/must-gather-rpxn5" Apr 24 16:54:34.607213 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:34.607190 2578 status_manager.go:895] "Failed to get status for pod" podUID="6e0caaa1-70aa-4b22-871f-904052a4e6a8" pod="openshift-must-gather-cn24q/must-gather-rpxn5" err="pods \"must-gather-rpxn5\" is forbidden: User \"system:node:ip-10-0-129-204.ec2.internal\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-cn24q\": no relationship found between node 'ip-10-0-129-204.ec2.internal' and this object" Apr 24 16:54:34.612383 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:34.612369 2578 scope.go:117] "RemoveContainer" containerID="56152e2c00b31f284415716ee42f77b752fdf4a48c7310390f52f3d062fe0146" Apr 24 16:54:34.614715 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:34.614694 2578 status_manager.go:895] "Failed to get status for pod" podUID="6e0caaa1-70aa-4b22-871f-904052a4e6a8" pod="openshift-must-gather-cn24q/must-gather-rpxn5" err="pods \"must-gather-rpxn5\" is forbidden: User \"system:node:ip-10-0-129-204.ec2.internal\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-cn24q\": no relationship found between node 'ip-10-0-129-204.ec2.internal' and this object" Apr 24 16:54:34.623980 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:34.623962 2578 scope.go:117] "RemoveContainer" containerID="54d7893f6e3c45673e300e1e8a9d42900d550a04efceb3d8ff7d260b1167dd8b" Apr 24 16:54:34.624212 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:54:34.624193 2578 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54d7893f6e3c45673e300e1e8a9d42900d550a04efceb3d8ff7d260b1167dd8b\": container with ID starting with 54d7893f6e3c45673e300e1e8a9d42900d550a04efceb3d8ff7d260b1167dd8b not found: ID does not exist" containerID="54d7893f6e3c45673e300e1e8a9d42900d550a04efceb3d8ff7d260b1167dd8b" Apr 24 16:54:34.624258 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:34.624219 2578 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54d7893f6e3c45673e300e1e8a9d42900d550a04efceb3d8ff7d260b1167dd8b"} err="failed to get container status \"54d7893f6e3c45673e300e1e8a9d42900d550a04efceb3d8ff7d260b1167dd8b\": rpc error: code = NotFound desc = could not find container \"54d7893f6e3c45673e300e1e8a9d42900d550a04efceb3d8ff7d260b1167dd8b\": container with ID starting with 54d7893f6e3c45673e300e1e8a9d42900d550a04efceb3d8ff7d260b1167dd8b not found: ID does not exist" Apr 24 16:54:34.624258 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:34.624248 2578 scope.go:117] "RemoveContainer" containerID="56152e2c00b31f284415716ee42f77b752fdf4a48c7310390f52f3d062fe0146" Apr 24 16:54:34.624441 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:54:34.624418 2578 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56152e2c00b31f284415716ee42f77b752fdf4a48c7310390f52f3d062fe0146\": container with ID starting with 56152e2c00b31f284415716ee42f77b752fdf4a48c7310390f52f3d062fe0146 not found: ID does not exist" containerID="56152e2c00b31f284415716ee42f77b752fdf4a48c7310390f52f3d062fe0146" Apr 24 16:54:34.624476 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:34.624445 2578 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56152e2c00b31f284415716ee42f77b752fdf4a48c7310390f52f3d062fe0146"} err="failed to get 
container status \"56152e2c00b31f284415716ee42f77b752fdf4a48c7310390f52f3d062fe0146\": rpc error: code = NotFound desc = could not find container \"56152e2c00b31f284415716ee42f77b752fdf4a48c7310390f52f3d062fe0146\": container with ID starting with 56152e2c00b31f284415716ee42f77b752fdf4a48c7310390f52f3d062fe0146 not found: ID does not exist" Apr 24 16:54:34.920847 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:34.920803 2578 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e0caaa1-70aa-4b22-871f-904052a4e6a8" path="/var/lib/kubelet/pods/6e0caaa1-70aa-4b22-871f-904052a4e6a8/volumes" Apr 24 16:54:35.347298 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:35.347241 2578 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-flc22_f25169f2-3731-4f98-a3ff-cea42487c5e1/node-exporter/0.log" Apr 24 16:54:35.367000 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:35.366976 2578 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-flc22_f25169f2-3731-4f98-a3ff-cea42487c5e1/kube-rbac-proxy/0.log" Apr 24 16:54:35.389977 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:35.389959 2578 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-flc22_f25169f2-3731-4f98-a3ff-cea42487c5e1/init-textfile/0.log" Apr 24 16:54:35.548868 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:35.548849 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-operator-ffbb595cb-x5bdf" Apr 24 16:54:38.592631 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:38.592602 2578 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wz7wm/perf-node-gather-daemonset-gdgjw"] Apr 24 16:54:38.593001 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:38.592884 2578 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6e0caaa1-70aa-4b22-871f-904052a4e6a8" containerName="copy" Apr 24 16:54:38.593001 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:38.592896 2578 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e0caaa1-70aa-4b22-871f-904052a4e6a8" containerName="copy" Apr 24 16:54:38.593001 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:38.592910 2578 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6e0caaa1-70aa-4b22-871f-904052a4e6a8" containerName="gather" Apr 24 16:54:38.593001 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:38.592915 2578 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e0caaa1-70aa-4b22-871f-904052a4e6a8" containerName="gather" Apr 24 16:54:38.593001 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:38.592955 2578 memory_manager.go:356] "RemoveStaleState removing state" podUID="6e0caaa1-70aa-4b22-871f-904052a4e6a8" containerName="gather" Apr 24 16:54:38.593001 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:38.592965 2578 memory_manager.go:356] "RemoveStaleState removing state" podUID="6e0caaa1-70aa-4b22-871f-904052a4e6a8" containerName="copy" Apr 24 16:54:38.597918 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:38.597898 2578 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wz7wm/perf-node-gather-daemonset-gdgjw" Apr 24 16:54:38.600094 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:38.600075 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-wz7wm\"/\"kube-root-ca.crt\"" Apr 24 16:54:38.601231 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:38.601212 2578 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-wz7wm\"/\"openshift-service-ca.crt\"" Apr 24 16:54:38.601231 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:38.601221 2578 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-wz7wm\"/\"default-dockercfg-wj4t8\"" Apr 24 16:54:38.605051 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:38.605025 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wz7wm/perf-node-gather-daemonset-gdgjw"] Apr 24 16:54:38.685478 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:38.685456 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/5f934e8f-7faf-47e1-afaa-1794d9af1206-podres\") pod \"perf-node-gather-daemonset-gdgjw\" (UID: \"5f934e8f-7faf-47e1-afaa-1794d9af1206\") " pod="openshift-must-gather-wz7wm/perf-node-gather-daemonset-gdgjw" Apr 24 16:54:38.685574 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:38.685490 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f934e8f-7faf-47e1-afaa-1794d9af1206-lib-modules\") pod \"perf-node-gather-daemonset-gdgjw\" (UID: \"5f934e8f-7faf-47e1-afaa-1794d9af1206\") " pod="openshift-must-gather-wz7wm/perf-node-gather-daemonset-gdgjw" Apr 24 16:54:38.685574 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:38.685511 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxpmg\" (UniqueName: \"kubernetes.io/projected/5f934e8f-7faf-47e1-afaa-1794d9af1206-kube-api-access-dxpmg\") pod \"perf-node-gather-daemonset-gdgjw\" (UID: \"5f934e8f-7faf-47e1-afaa-1794d9af1206\") " pod="openshift-must-gather-wz7wm/perf-node-gather-daemonset-gdgjw" Apr 24 16:54:38.685574 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:38.685540 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5f934e8f-7faf-47e1-afaa-1794d9af1206-sys\") pod \"perf-node-gather-daemonset-gdgjw\" (UID: \"5f934e8f-7faf-47e1-afaa-1794d9af1206\") " pod="openshift-must-gather-wz7wm/perf-node-gather-daemonset-gdgjw" Apr 24 16:54:38.685574 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:38.685560 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/5f934e8f-7faf-47e1-afaa-1794d9af1206-proc\") pod \"perf-node-gather-daemonset-gdgjw\" (UID: \"5f934e8f-7faf-47e1-afaa-1794d9af1206\") " pod="openshift-must-gather-wz7wm/perf-node-gather-daemonset-gdgjw" Apr 24 16:54:38.786705 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:38.786675 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5f934e8f-7faf-47e1-afaa-1794d9af1206-sys\") pod \"perf-node-gather-daemonset-gdgjw\" (UID: \"5f934e8f-7faf-47e1-afaa-1794d9af1206\") " 
pod="openshift-must-gather-wz7wm/perf-node-gather-daemonset-gdgjw" Apr 24 16:54:38.786786 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:38.786708 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/5f934e8f-7faf-47e1-afaa-1794d9af1206-proc\") pod \"perf-node-gather-daemonset-gdgjw\" (UID: \"5f934e8f-7faf-47e1-afaa-1794d9af1206\") " pod="openshift-must-gather-wz7wm/perf-node-gather-daemonset-gdgjw" Apr 24 16:54:38.786786 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:38.786750 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/5f934e8f-7faf-47e1-afaa-1794d9af1206-podres\") pod \"perf-node-gather-daemonset-gdgjw\" (UID: \"5f934e8f-7faf-47e1-afaa-1794d9af1206\") " pod="openshift-must-gather-wz7wm/perf-node-gather-daemonset-gdgjw" Apr 24 16:54:38.786898 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:38.786790 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f934e8f-7faf-47e1-afaa-1794d9af1206-lib-modules\") pod \"perf-node-gather-daemonset-gdgjw\" (UID: \"5f934e8f-7faf-47e1-afaa-1794d9af1206\") " pod="openshift-must-gather-wz7wm/perf-node-gather-daemonset-gdgjw" Apr 24 16:54:38.786898 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:38.786799 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5f934e8f-7faf-47e1-afaa-1794d9af1206-sys\") pod \"perf-node-gather-daemonset-gdgjw\" (UID: \"5f934e8f-7faf-47e1-afaa-1794d9af1206\") " pod="openshift-must-gather-wz7wm/perf-node-gather-daemonset-gdgjw" Apr 24 16:54:38.786898 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:38.786798 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/5f934e8f-7faf-47e1-afaa-1794d9af1206-proc\") pod \"perf-node-gather-daemonset-gdgjw\" (UID: \"5f934e8f-7faf-47e1-afaa-1794d9af1206\") " pod="openshift-must-gather-wz7wm/perf-node-gather-daemonset-gdgjw" Apr 24 16:54:38.786898 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:38.786839 2578 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dxpmg\" (UniqueName: \"kubernetes.io/projected/5f934e8f-7faf-47e1-afaa-1794d9af1206-kube-api-access-dxpmg\") pod \"perf-node-gather-daemonset-gdgjw\" (UID: \"5f934e8f-7faf-47e1-afaa-1794d9af1206\") " pod="openshift-must-gather-wz7wm/perf-node-gather-daemonset-gdgjw" Apr 24 16:54:38.787025 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:38.786897 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/5f934e8f-7faf-47e1-afaa-1794d9af1206-podres\") pod \"perf-node-gather-daemonset-gdgjw\" (UID: \"5f934e8f-7faf-47e1-afaa-1794d9af1206\") " pod="openshift-must-gather-wz7wm/perf-node-gather-daemonset-gdgjw" Apr 24 16:54:38.787025 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:38.786920 2578 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f934e8f-7faf-47e1-afaa-1794d9af1206-lib-modules\") pod \"perf-node-gather-daemonset-gdgjw\" (UID: \"5f934e8f-7faf-47e1-afaa-1794d9af1206\") " pod="openshift-must-gather-wz7wm/perf-node-gather-daemonset-gdgjw" Apr 24 16:54:38.795519 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:38.795496 2578 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-dxpmg\" (UniqueName: \"kubernetes.io/projected/5f934e8f-7faf-47e1-afaa-1794d9af1206-kube-api-access-dxpmg\") pod \"perf-node-gather-daemonset-gdgjw\" (UID: \"5f934e8f-7faf-47e1-afaa-1794d9af1206\") " pod="openshift-must-gather-wz7wm/perf-node-gather-daemonset-gdgjw" Apr 24 16:54:38.908124 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:38.908105 2578 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wz7wm/perf-node-gather-daemonset-gdgjw" Apr 24 16:54:39.025021 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:39.024991 2578 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wz7wm/perf-node-gather-daemonset-gdgjw"] Apr 24 16:54:39.027465 ip-10-0-129-204 kubenswrapper[2578]: W0424 16:54:39.027441 2578 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5f934e8f_7faf_47e1_afaa_1794d9af1206.slice/crio-5ff2f877e54e12a396be37d53c9db190851cbf659c0ccfef0b3e915cef57abac WatchSource:0}: Error finding container 5ff2f877e54e12a396be37d53c9db190851cbf659c0ccfef0b3e915cef57abac: Status 404 returned error can't find the container with id 5ff2f877e54e12a396be37d53c9db190851cbf659c0ccfef0b3e915cef57abac Apr 24 16:54:39.141194 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:39.141171 2578 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-dj4h8_e0b4ca8b-4a38-48a0-a607-1d9984f02dd3/dns/0.log" Apr 24 16:54:39.161034 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:39.160994 2578 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-dj4h8_e0b4ca8b-4a38-48a0-a607-1d9984f02dd3/kube-rbac-proxy/0.log" Apr 24 16:54:39.252010 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:39.251987 2578 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-7v56j_7c48d729-e644-4376-b836-4a516c44c4d6/dns-node-resolver/0.log" Apr 24 16:54:39.622933 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:39.622897 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wz7wm/perf-node-gather-daemonset-gdgjw" event={"ID":"5f934e8f-7faf-47e1-afaa-1794d9af1206","Type":"ContainerStarted","Data":"f022eae0939a2d7fb1c1d23b3daf9f2f85d6d9f3a3879906f5663afc45fdec69"} Apr 24 16:54:39.622933 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:39.622933 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wz7wm/perf-node-gather-daemonset-gdgjw" event={"ID":"5f934e8f-7faf-47e1-afaa-1794d9af1206","Type":"ContainerStarted","Data":"5ff2f877e54e12a396be37d53c9db190851cbf659c0ccfef0b3e915cef57abac"} Apr 24 16:54:39.623330 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:39.622958 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-must-gather-wz7wm/perf-node-gather-daemonset-gdgjw" Apr 24 16:54:44.546005 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:44.545954 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/llmisvc-controller-manager-68cc5db7c4-qqqjg" Apr 24 16:54:44.549278 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:44.549254 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/kserve-controller-manager-7f7fb4c66f-q6r6g" Apr 24 16:54:45.637032 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:45.637001 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-wz7wm/perf-node-gather-daemonset-gdgjw" 
Apr 24 16:54:49.544031 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:49.543989 2578 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve/kserve-controller-manager-7f7fb4c66f-q6r6g" podUID="5c3037e0-7240-4f20-b277-66b756c6f9f7" containerName="manager" probeResult="failure" output="Get \"http://10.134.0.17:8081/readyz\": dial tcp 10.134.0.17:8081: connect: connection refused"
Apr 24 16:54:49.652469 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:49.652442 2578 generic.go:358] "Generic (PLEG): container finished" podID="86ead66c-d1c6-4b04-858c-9738a6b251b7" containerID="a20c0dae1f6fc44bc07ca2ee6c57849e2e8337647b995b8833b0301e42c7b9f7" exitCode=1
Apr 24 16:54:49.652573 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:49.652511 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj" event={"ID":"86ead66c-d1c6-4b04-858c-9738a6b251b7","Type":"ContainerDied","Data":"a20c0dae1f6fc44bc07ca2ee6c57849e2e8337647b995b8833b0301e42c7b9f7"}
Apr 24 16:54:49.652573 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:49.652551 2578 scope.go:117] "RemoveContainer" containerID="6843efd3309f3b00b3ca95e4e96ea33730eaa77bd023696b5154639d5c2353cc"
Apr 24 16:54:49.652893 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:49.652861 2578 scope.go:117] "RemoveContainer" containerID="a20c0dae1f6fc44bc07ca2ee6c57849e2e8337647b995b8833b0301e42c7b9f7"
Apr 24 16:54:49.653078 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:54:49.653054 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"custom-metrics-autoscaler-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=custom-metrics-autoscaler-operator pod=custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj_openshift-keda(86ead66c-d1c6-4b04-858c-9738a6b251b7)\"" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj" podUID="86ead66c-d1c6-4b04-858c-9738a6b251b7"
Apr 24 16:54:49.654232 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:49.654214 2578 generic.go:358] "Generic (PLEG): container finished" podID="5c3037e0-7240-4f20-b277-66b756c6f9f7" containerID="958abd5c90b796f6a4afa7adf43acd92659b2942a9b3f514a7e502d1e6673cce" exitCode=1
Apr 24 16:54:49.654319 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:49.654274 2578 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-7f7fb4c66f-q6r6g" event={"ID":"5c3037e0-7240-4f20-b277-66b756c6f9f7","Type":"ContainerDied","Data":"958abd5c90b796f6a4afa7adf43acd92659b2942a9b3f514a7e502d1e6673cce"}
Apr 24 16:54:49.654520 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:49.654508 2578 scope.go:117] "RemoveContainer" containerID="958abd5c90b796f6a4afa7adf43acd92659b2942a9b3f514a7e502d1e6673cce"
Apr 24 16:54:49.654658 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:54:49.654643 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=kserve-controller-manager-7f7fb4c66f-q6r6g_kserve(5c3037e0-7240-4f20-b277-66b756c6f9f7)\"" pod="kserve/kserve-controller-manager-7f7fb4c66f-q6r6g" podUID="5c3037e0-7240-4f20-b277-66b756c6f9f7"
Apr 24 16:54:49.662452 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:49.662437 2578 scope.go:117] "RemoveContainer" containerID="e2807b53ae9de70b3445c8833ef2ab63e9eeda1ed427365aad149a5dded42aaa"
Apr 24 16:54:52.830042 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:52.830011 2578 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj"
Apr 24 16:54:52.830444 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:52.830338 2578 scope.go:117] "RemoveContainer" containerID="a20c0dae1f6fc44bc07ca2ee6c57849e2e8337647b995b8833b0301e42c7b9f7"
Apr 24 16:54:52.830574 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:54:52.830553 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"custom-metrics-autoscaler-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=custom-metrics-autoscaler-operator pod=custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj_openshift-keda(86ead66c-d1c6-4b04-858c-9738a6b251b7)\"" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj" podUID="86ead66c-d1c6-4b04-858c-9738a6b251b7"
Apr 24 16:54:52.853049 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:52.853000 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wz7wm/perf-node-gather-daemonset-gdgjw" podStartSLOduration=14.852986357 podStartE2EDuration="14.852986357s" podCreationTimestamp="2026-04-24 16:54:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 16:54:52.85152111 +0000 UTC m=+576.447887396" watchObservedRunningTime="2026-04-24 16:54:52.852986357 +0000 UTC m=+576.449352681"
Apr 24 16:54:53.031323 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:53.031290 2578 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-lhmfh_9b309f61-8972-4f0c-b7e8-cfcea2909bf3/node-ca/0.log"
Apr 24 16:54:54.377347 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:54.377318 2578 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-xcsf7_2eb66152-aaca-4639-9b66-5bfa5656f3c4/serve-healthcheck-canary/0.log"
Apr 24 16:54:54.544082 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:54.544054 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/kserve-controller-manager-7f7fb4c66f-q6r6g"
Apr 24 16:54:54.544471 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:54.544438 2578 scope.go:117] "RemoveContainer" containerID="958abd5c90b796f6a4afa7adf43acd92659b2942a9b3f514a7e502d1e6673cce"
Apr 24 16:54:54.544660 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:54:54.544641 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=kserve-controller-manager-7f7fb4c66f-q6r6g_kserve(5c3037e0-7240-4f20-b277-66b756c6f9f7)\"" pod="kserve/kserve-controller-manager-7f7fb4c66f-q6r6g" podUID="5c3037e0-7240-4f20-b277-66b756c6f9f7"
Apr 24 16:54:55.027717 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:55.027697 2578 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-hbhwf_bf454f3d-bcaf-4816-b706-91aac8d5a4c1/kube-rbac-proxy/0.log"
Apr 24 16:54:55.051488 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:55.051470 2578 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-hbhwf_bf454f3d-bcaf-4816-b706-91aac8d5a4c1/exporter/0.log"
Apr 24 16:54:55.077335 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:55.077310 2578 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-hbhwf_bf454f3d-bcaf-4816-b706-91aac8d5a4c1/extractor/0.log"
Apr 24 16:54:57.042475 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:57.042449 2578 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_kserve-controller-manager-7f7fb4c66f-q6r6g_5c3037e0-7240-4f20-b277-66b756c6f9f7/manager/2.log"
Apr 24 16:54:57.042475 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:57.042463 2578 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_kserve-controller-manager-7f7fb4c66f-q6r6g_5c3037e0-7240-4f20-b277-66b756c6f9f7/manager/2.log"
Apr 24 16:54:57.062354 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:57.062329 2578 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_llmisvc-controller-manager-68cc5db7c4-qqqjg_9caf8bd1-fec7-41b9-a6f4-b88775c03dab/manager/2.log"
Apr 24 16:54:57.062494 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:57.062443 2578 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_llmisvc-controller-manager-68cc5db7c4-qqqjg_9caf8bd1-fec7-41b9-a6f4-b88775c03dab/manager/1.log"
Apr 24 16:54:57.081952 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:57.081930 2578 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_seaweedfs-86cc847c5c-5ht5z_935a464b-8bb1-491c-871d-704a4406c97b/seaweedfs/0.log"
Apr 24 16:54:57.417697 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:57.417673 2578 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj"
Apr 24 16:54:57.418042 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:57.418028 2578 scope.go:117] "RemoveContainer" containerID="a20c0dae1f6fc44bc07ca2ee6c57849e2e8337647b995b8833b0301e42c7b9f7"
Apr 24 16:54:57.418217 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:54:57.418200 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"custom-metrics-autoscaler-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=custom-metrics-autoscaler-operator pod=custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj_openshift-keda(86ead66c-d1c6-4b04-858c-9738a6b251b7)\"" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-vwlsj" podUID="86ead66c-d1c6-4b04-858c-9738a6b251b7"
Apr 24 16:54:58.780892 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:58.780862 2578 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kserve/kserve-controller-manager-7f7fb4c66f-q6r6g"
Apr 24 16:54:58.781355 ip-10-0-129-204 kubenswrapper[2578]: I0424 16:54:58.781201 2578 scope.go:117] "RemoveContainer" containerID="958abd5c90b796f6a4afa7adf43acd92659b2942a9b3f514a7e502d1e6673cce"
Apr 24 16:54:58.781355 ip-10-0-129-204 kubenswrapper[2578]: E0424 16:54:58.781348 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=kserve-controller-manager-7f7fb4c66f-q6r6g_kserve(5c3037e0-7240-4f20-b277-66b756c6f9f7)\"" pod="kserve/kserve-controller-manager-7f7fb4c66f-q6r6g" podUID="5c3037e0-7240-4f20-b277-66b756c6f9f7"