Apr 23 17:52:23.936706 ip-10-0-141-209 systemd[1]: Starting Kubernetes Kubelet...
Apr 23 17:52:24.413357 ip-10-0-141-209 kubenswrapper[2570]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 23 17:52:24.413357 ip-10-0-141-209 kubenswrapper[2570]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Apr 23 17:52:24.413357 ip-10-0-141-209 kubenswrapper[2570]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 23 17:52:24.413357 ip-10-0-141-209 kubenswrapper[2570]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 23 17:52:24.413357 ip-10-0-141-209 kubenswrapper[2570]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 23 17:52:24.416501 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.416437 2570 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 23 17:52:24.424723 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424703 2570 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 23 17:52:24.424723 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424719 2570 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 23 17:52:24.424723 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424724 2570 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 23 17:52:24.424723 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424727 2570 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 23 17:52:24.424879 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424730 2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 23 17:52:24.424879 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424734 2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 23 17:52:24.424879 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424737 2570 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 23 17:52:24.424879 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424741 2570 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 23 17:52:24.424879 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424743 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 23 17:52:24.424879 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424746 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 23 17:52:24.424879 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424749 2570 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 23 17:52:24.424879 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424752 2570 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 23 17:52:24.424879 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424755 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 23 17:52:24.424879 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424758 2570 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 23 17:52:24.424879 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424761 2570 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 23 17:52:24.424879 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424763 2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 23 17:52:24.424879 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424766 2570 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 23 17:52:24.424879 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424768 2570 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 23 17:52:24.424879 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424771 2570 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 23 17:52:24.424879 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424774 2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 23 17:52:24.424879 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424776 2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 23 17:52:24.424879 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424779 2570 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 23 17:52:24.424879 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424781 2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 23 17:52:24.424879 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424784 2570 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 23 17:52:24.425338 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424787 2570 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 23 17:52:24.425338 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424791 2570 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 23 17:52:24.425338 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424798 2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 23 17:52:24.425338 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424801 2570 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 23 17:52:24.425338 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424804 2570 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 23 17:52:24.425338 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424807 2570 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 23 17:52:24.425338 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424809 2570 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 23 17:52:24.425338 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424811 2570 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 23 17:52:24.425338 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424814 2570 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 23 17:52:24.425338 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424817 2570 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 23 17:52:24.425338 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424820 2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 23 17:52:24.425338 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424822 2570 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 23 17:52:24.425338 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424825 2570 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 23 17:52:24.425338 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424828 2570 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 23 17:52:24.425338 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424831 2570 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 23 17:52:24.425338 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424835 2570 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 23 17:52:24.425338 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424837 2570 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 23 17:52:24.425338 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424840 2570 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 23 17:52:24.425338 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424843 2570 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 23 17:52:24.425836 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424846 2570 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 23 17:52:24.425836 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424848 2570 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 23 17:52:24.425836 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424851 2570 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 23 17:52:24.425836 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424855 2570 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 23 17:52:24.425836 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424857 2570 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 23 17:52:24.425836 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424859 2570 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 23 17:52:24.425836 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424862 2570 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 23 17:52:24.425836 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424865 2570 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 23 17:52:24.425836 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424867 2570 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 23 17:52:24.425836 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424870 2570 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 23 17:52:24.425836 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424873 2570 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 23 17:52:24.425836 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424875 2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 23 17:52:24.425836 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424878 2570 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 23 17:52:24.425836 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424881 2570 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 23 17:52:24.425836 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424884 2570 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 23 17:52:24.425836 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424886 2570 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 23 17:52:24.425836 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424889 2570 feature_gate.go:328] unrecognized feature gate: Example2
Apr 23 17:52:24.425836 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424894 2570 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 23 17:52:24.425836 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424897 2570 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 23 17:52:24.426294 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424900 2570 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 23 17:52:24.426294 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424903 2570 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 23 17:52:24.426294 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424906 2570 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 23 17:52:24.426294 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424908 2570 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 23 17:52:24.426294 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424911 2570 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 23 17:52:24.426294 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424913 2570 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 23 17:52:24.426294 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424916 2570 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 23 17:52:24.426294 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424924 2570 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 23 17:52:24.426294 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424927 2570 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 23 17:52:24.426294 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424930 2570 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 23 17:52:24.426294 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424932 2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 23 17:52:24.426294 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424935 2570 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 23 17:52:24.426294 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424937 2570 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 23 17:52:24.426294 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424940 2570 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 23 17:52:24.426294 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424943 2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 23 17:52:24.426294 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424948 2570 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 23 17:52:24.426294 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424954 2570 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 23 17:52:24.426294 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424957 2570 feature_gate.go:328] unrecognized feature gate: Example
Apr 23 17:52:24.426294 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424960 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 23 17:52:24.426294 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424963 2570 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 23 17:52:24.426783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424965 2570 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 23 17:52:24.426783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424968 2570 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 23 17:52:24.426783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424970 2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 23 17:52:24.426783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.424973 2570 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 23 17:52:24.426783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425337 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 23 17:52:24.426783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425341 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 23 17:52:24.426783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425344 2570 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 23 17:52:24.426783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425347 2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 23 17:52:24.426783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425350 2570 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 23 17:52:24.426783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425354 2570 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 23 17:52:24.426783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425356 2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 23 17:52:24.426783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425359 2570 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 23 17:52:24.426783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425362 2570 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 23 17:52:24.426783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425365 2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 23 17:52:24.426783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425368 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 23 17:52:24.426783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425370 2570 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 23 17:52:24.426783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425373 2570 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 23 17:52:24.426783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425377 2570 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 23 17:52:24.426783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425381 2570 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 23 17:52:24.426783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425385 2570 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 23 17:52:24.427263 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425388 2570 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 23 17:52:24.427263 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425391 2570 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 23 17:52:24.427263 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425394 2570 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 23 17:52:24.427263 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425396 2570 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 23 17:52:24.427263 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425399 2570 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 23 17:52:24.427263 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425402 2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 23 17:52:24.427263 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425405 2570 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 23 17:52:24.427263 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425407 2570 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 23 17:52:24.427263 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425410 2570 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 23 17:52:24.427263 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425431 2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 23 17:52:24.427263 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425434 2570 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 23 17:52:24.427263 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425437 2570 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 23 17:52:24.427263 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425440 2570 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 23 17:52:24.427263 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425442 2570 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 23 17:52:24.427263 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425445 2570 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 23 17:52:24.427263 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425447 2570 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 23 17:52:24.427263 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425450 2570 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 23 17:52:24.427263 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425453 2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 23 17:52:24.427263 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425455 2570 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 23 17:52:24.427263 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425458 2570 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 23 17:52:24.427783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425461 2570 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 23 17:52:24.427783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425463 2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 23 17:52:24.427783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425466 2570 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 23 17:52:24.427783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425469 2570 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 23 17:52:24.427783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425471 2570 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 23 17:52:24.427783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425475 2570 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 23 17:52:24.427783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425477 2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 23 17:52:24.427783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425480 2570 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 23 17:52:24.427783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425482 2570 feature_gate.go:328] unrecognized feature gate: Example
Apr 23 17:52:24.427783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425485 2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 23 17:52:24.427783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425487 2570 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 23 17:52:24.427783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425492 2570 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 23 17:52:24.427783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425494 2570 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 23 17:52:24.427783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425497 2570 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 23 17:52:24.427783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425500 2570 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 23 17:52:24.427783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425502 2570 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 23 17:52:24.427783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425505 2570 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 23 17:52:24.427783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425508 2570 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 23 17:52:24.427783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425510 2570 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 23 17:52:24.427783 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425513 2570 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 23 17:52:24.428278 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425515 2570 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 23 17:52:24.428278 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425518 2570 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 23 17:52:24.428278 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425521 2570 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 23 17:52:24.428278 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425523 2570 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 23 17:52:24.428278 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425526 2570 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 23 17:52:24.428278 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425528 2570 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 23 17:52:24.428278 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425531 2570 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 23 17:52:24.428278 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425533 2570 feature_gate.go:328] unrecognized feature gate: Example2
Apr 23 17:52:24.428278 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425536 2570 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 23 17:52:24.428278 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425538 2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 23 17:52:24.428278 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425541 2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 23 17:52:24.428278 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425544 2570 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 23 17:52:24.428278 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425546 2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 23 17:52:24.428278 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425550 2570 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 23 17:52:24.428278 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425554 2570 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 23 17:52:24.428278 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425556 2570 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 23 17:52:24.428278 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425559 2570 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 23 17:52:24.428278 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425562 2570 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 23 17:52:24.428278 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425564 2570 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 23 17:52:24.428278 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425567 2570 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 23 17:52:24.428798 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425569 2570 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 23 17:52:24.428798 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425572 2570 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 23 17:52:24.428798 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425576 2570 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 23 17:52:24.428798 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425579 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 23 17:52:24.428798 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425582 2570 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 23 17:52:24.428798 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425584 2570 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 23 17:52:24.428798 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425587 2570 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 23 17:52:24.428798 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425590 2570 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 23 17:52:24.428798 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425592 2570 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 23 17:52:24.428798 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.425595 2570 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 23 17:52:24.428798 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426304 2570 flags.go:64] FLAG: --address="0.0.0.0"
Apr 23 17:52:24.428798 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426314 2570 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Apr 23 17:52:24.428798 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426319 2570 flags.go:64] FLAG: --anonymous-auth="true"
Apr 23 17:52:24.428798 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426324 2570 flags.go:64] FLAG: --application-metrics-count-limit="100"
Apr 23 17:52:24.428798 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426329 2570 flags.go:64] FLAG: --authentication-token-webhook="false"
Apr 23 17:52:24.428798 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426332 2570 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Apr 23 17:52:24.428798 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426336 2570 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Apr 23 17:52:24.428798 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426341 2570 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Apr 23 17:52:24.428798 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426344 2570 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Apr 23 17:52:24.428798 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426347 2570 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Apr 23 17:52:24.428798 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426350 2570 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Apr 23 17:52:24.429314 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426354 2570 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Apr 23 17:52:24.429314 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426357 2570 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Apr 23 17:52:24.429314 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426360 2570 flags.go:64] FLAG: --cgroup-root=""
Apr 23 17:52:24.429314 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426363 2570 flags.go:64] FLAG: --cgroups-per-qos="true"
Apr 23 17:52:24.429314 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426366 2570 flags.go:64] FLAG: --client-ca-file=""
Apr 23 17:52:24.429314 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426369 2570 flags.go:64] FLAG: --cloud-config=""
Apr 23 17:52:24.429314 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426372 2570 flags.go:64] FLAG: --cloud-provider="external"
Apr 23 17:52:24.429314 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426375 2570 flags.go:64] FLAG: --cluster-dns="[]"
Apr 23 17:52:24.429314 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426381 2570 flags.go:64] FLAG: --cluster-domain=""
Apr 23 17:52:24.429314 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426383 2570 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Apr 23 17:52:24.429314 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426387 2570 flags.go:64] FLAG: --config-dir=""
Apr 23 17:52:24.429314 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426390 2570 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Apr 23 17:52:24.429314 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426394 2570 flags.go:64] FLAG: --container-log-max-files="5"
Apr 23 17:52:24.429314 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426398 2570 flags.go:64] FLAG: --container-log-max-size="10Mi"
Apr 23 17:52:24.429314 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426402 2570 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Apr 23 17:52:24.429314 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426405 2570 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Apr 23 17:52:24.429314 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426408 2570 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Apr 23 17:52:24.429314 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426426 2570 flags.go:64] FLAG: --contention-profiling="false"
Apr 23 17:52:24.429314 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426431 2570 flags.go:64] FLAG: --cpu-cfs-quota="true"
Apr 23 17:52:24.429314 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426434 2570 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Apr 23 17:52:24.429314 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426437 2570 flags.go:64] FLAG: --cpu-manager-policy="none"
Apr 23 17:52:24.429314 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426441 2570 flags.go:64] FLAG: --cpu-manager-policy-options=""
Apr 23 17:52:24.429314 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426445 2570 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Apr 23 17:52:24.429314 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426448 2570 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Apr 23 17:52:24.429314 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426451 2570 flags.go:64] FLAG: --enable-debugging-handlers="true"
Apr 23 17:52:24.429966 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426454 2570 flags.go:64] FLAG: --enable-load-reader="false"
Apr 23 17:52:24.429966 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426458 2570 flags.go:64] FLAG: --enable-server="true"
Apr 23 17:52:24.429966 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426461 2570 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Apr 23 17:52:24.429966 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426465 2570 flags.go:64] FLAG: --event-burst="100"
Apr 23 17:52:24.429966 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426469 2570 flags.go:64] FLAG: --event-qps="50"
Apr 23 17:52:24.429966 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426472 2570 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Apr 23 17:52:24.429966 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426475 2570 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Apr 23 17:52:24.429966 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426478 2570 flags.go:64] FLAG: --eviction-hard=""
Apr 23 17:52:24.429966 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426491 2570 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Apr 23 17:52:24.429966 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426494 2570 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Apr 23 17:52:24.429966 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426497 2570 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Apr 23 17:52:24.429966 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426500 2570 flags.go:64] FLAG: --eviction-soft=""
Apr 23 17:52:24.429966 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426503 2570 flags.go:64] FLAG: --eviction-soft-grace-period=""
Apr 23 17:52:24.429966 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426506 2570 flags.go:64] FLAG: --exit-on-lock-contention="false"
Apr 23 17:52:24.429966 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426509 2570 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Apr 23 17:52:24.429966 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426512 2570 flags.go:64] FLAG: --experimental-mounter-path=""
Apr 23 17:52:24.429966 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426515 2570 flags.go:64] FLAG: --fail-cgroupv1="false"
Apr
23 17:52:24.429966 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426518 2570 flags.go:64] FLAG: --fail-swap-on="true" Apr 23 17:52:24.429966 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426521 2570 flags.go:64] FLAG: --feature-gates="" Apr 23 17:52:24.429966 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426525 2570 flags.go:64] FLAG: --file-check-frequency="20s" Apr 23 17:52:24.429966 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426528 2570 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Apr 23 17:52:24.429966 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426531 2570 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Apr 23 17:52:24.429966 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426535 2570 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Apr 23 17:52:24.429966 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426538 2570 flags.go:64] FLAG: --healthz-port="10248" Apr 23 17:52:24.429966 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426541 2570 flags.go:64] FLAG: --help="false" Apr 23 17:52:24.430565 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426544 2570 flags.go:64] FLAG: --hostname-override="ip-10-0-141-209.ec2.internal" Apr 23 17:52:24.430565 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426547 2570 flags.go:64] FLAG: --housekeeping-interval="10s" Apr 23 17:52:24.430565 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426550 2570 flags.go:64] FLAG: --http-check-frequency="20s" Apr 23 17:52:24.430565 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426553 2570 flags.go:64] FLAG: --image-credential-provider-bin-dir="/usr/libexec/kubelet-image-credential-provider-plugins" Apr 23 17:52:24.430565 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426557 2570 flags.go:64] FLAG: --image-credential-provider-config="/etc/kubernetes/credential-providers/ecr-credential-provider.yaml" Apr 23 17:52:24.430565 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426560 2570 flags.go:64] 
FLAG: --image-gc-high-threshold="85" Apr 23 17:52:24.430565 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426563 2570 flags.go:64] FLAG: --image-gc-low-threshold="80" Apr 23 17:52:24.430565 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426566 2570 flags.go:64] FLAG: --image-service-endpoint="" Apr 23 17:52:24.430565 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426569 2570 flags.go:64] FLAG: --kernel-memcg-notification="false" Apr 23 17:52:24.430565 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426572 2570 flags.go:64] FLAG: --kube-api-burst="100" Apr 23 17:52:24.430565 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426576 2570 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Apr 23 17:52:24.430565 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426579 2570 flags.go:64] FLAG: --kube-api-qps="50" Apr 23 17:52:24.430565 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426582 2570 flags.go:64] FLAG: --kube-reserved="" Apr 23 17:52:24.430565 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426585 2570 flags.go:64] FLAG: --kube-reserved-cgroup="" Apr 23 17:52:24.430565 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426588 2570 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Apr 23 17:52:24.430565 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426591 2570 flags.go:64] FLAG: --kubelet-cgroups="" Apr 23 17:52:24.430565 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426593 2570 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Apr 23 17:52:24.430565 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426596 2570 flags.go:64] FLAG: --lock-file="" Apr 23 17:52:24.430565 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426599 2570 flags.go:64] FLAG: --log-cadvisor-usage="false" Apr 23 17:52:24.430565 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426602 2570 flags.go:64] FLAG: --log-flush-frequency="5s" Apr 23 17:52:24.430565 ip-10-0-141-209 
kubenswrapper[2570]: I0423 17:52:24.426605 2570 flags.go:64] FLAG: --log-json-info-buffer-size="0" Apr 23 17:52:24.430565 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426610 2570 flags.go:64] FLAG: --log-json-split-stream="false" Apr 23 17:52:24.430565 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426613 2570 flags.go:64] FLAG: --log-text-info-buffer-size="0" Apr 23 17:52:24.431163 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426616 2570 flags.go:64] FLAG: --log-text-split-stream="false" Apr 23 17:52:24.431163 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426620 2570 flags.go:64] FLAG: --logging-format="text" Apr 23 17:52:24.431163 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426623 2570 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Apr 23 17:52:24.431163 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426627 2570 flags.go:64] FLAG: --make-iptables-util-chains="true" Apr 23 17:52:24.431163 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426630 2570 flags.go:64] FLAG: --manifest-url="" Apr 23 17:52:24.431163 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426633 2570 flags.go:64] FLAG: --manifest-url-header="" Apr 23 17:52:24.431163 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426637 2570 flags.go:64] FLAG: --max-housekeeping-interval="15s" Apr 23 17:52:24.431163 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426640 2570 flags.go:64] FLAG: --max-open-files="1000000" Apr 23 17:52:24.431163 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426644 2570 flags.go:64] FLAG: --max-pods="110" Apr 23 17:52:24.431163 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426647 2570 flags.go:64] FLAG: --maximum-dead-containers="-1" Apr 23 17:52:24.431163 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426650 2570 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Apr 23 17:52:24.431163 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426653 2570 flags.go:64] FLAG: 
--memory-manager-policy="None" Apr 23 17:52:24.431163 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426656 2570 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Apr 23 17:52:24.431163 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426660 2570 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Apr 23 17:52:24.431163 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426663 2570 flags.go:64] FLAG: --node-ip="0.0.0.0" Apr 23 17:52:24.431163 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426666 2570 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhel" Apr 23 17:52:24.431163 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426673 2570 flags.go:64] FLAG: --node-status-max-images="50" Apr 23 17:52:24.431163 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426676 2570 flags.go:64] FLAG: --node-status-update-frequency="10s" Apr 23 17:52:24.431163 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426679 2570 flags.go:64] FLAG: --oom-score-adj="-999" Apr 23 17:52:24.431163 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426683 2570 flags.go:64] FLAG: --pod-cidr="" Apr 23 17:52:24.431163 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426686 2570 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8cfe89231412ff3ee8cb6207fa0be33cad0f08e88c9c0f1e9f7e8c6f14d6715" Apr 23 17:52:24.431163 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426692 2570 flags.go:64] FLAG: --pod-manifest-path="" Apr 23 17:52:24.431163 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426695 2570 flags.go:64] FLAG: --pod-max-pids="-1" Apr 23 17:52:24.431163 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426698 2570 flags.go:64] FLAG: --pods-per-core="0" Apr 23 17:52:24.431771 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426701 2570 flags.go:64] FLAG: --port="10250" Apr 23 17:52:24.431771 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426705 2570 
flags.go:64] FLAG: --protect-kernel-defaults="false" Apr 23 17:52:24.431771 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426707 2570 flags.go:64] FLAG: --provider-id="aws:///us-east-1a/i-0b9cfb2cf58393b39" Apr 23 17:52:24.431771 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426710 2570 flags.go:64] FLAG: --qos-reserved="" Apr 23 17:52:24.431771 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426714 2570 flags.go:64] FLAG: --read-only-port="10255" Apr 23 17:52:24.431771 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426716 2570 flags.go:64] FLAG: --register-node="true" Apr 23 17:52:24.431771 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426719 2570 flags.go:64] FLAG: --register-schedulable="true" Apr 23 17:52:24.431771 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426722 2570 flags.go:64] FLAG: --register-with-taints="" Apr 23 17:52:24.431771 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426726 2570 flags.go:64] FLAG: --registry-burst="10" Apr 23 17:52:24.431771 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426729 2570 flags.go:64] FLAG: --registry-qps="5" Apr 23 17:52:24.431771 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426734 2570 flags.go:64] FLAG: --reserved-cpus="" Apr 23 17:52:24.431771 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426737 2570 flags.go:64] FLAG: --reserved-memory="" Apr 23 17:52:24.431771 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426740 2570 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Apr 23 17:52:24.431771 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426744 2570 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Apr 23 17:52:24.431771 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426747 2570 flags.go:64] FLAG: --rotate-certificates="false" Apr 23 17:52:24.431771 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426750 2570 flags.go:64] FLAG: --rotate-server-certificates="false" Apr 23 17:52:24.431771 ip-10-0-141-209 kubenswrapper[2570]: I0423 
17:52:24.426752 2570 flags.go:64] FLAG: --runonce="false" Apr 23 17:52:24.431771 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426755 2570 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Apr 23 17:52:24.431771 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426758 2570 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Apr 23 17:52:24.431771 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426761 2570 flags.go:64] FLAG: --seccomp-default="false" Apr 23 17:52:24.431771 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426764 2570 flags.go:64] FLAG: --serialize-image-pulls="true" Apr 23 17:52:24.431771 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426767 2570 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Apr 23 17:52:24.431771 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426770 2570 flags.go:64] FLAG: --storage-driver-db="cadvisor" Apr 23 17:52:24.431771 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426774 2570 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Apr 23 17:52:24.431771 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426777 2570 flags.go:64] FLAG: --storage-driver-password="root" Apr 23 17:52:24.431771 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426780 2570 flags.go:64] FLAG: --storage-driver-secure="false" Apr 23 17:52:24.432441 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426783 2570 flags.go:64] FLAG: --storage-driver-table="stats" Apr 23 17:52:24.432441 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426786 2570 flags.go:64] FLAG: --storage-driver-user="root" Apr 23 17:52:24.432441 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426790 2570 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Apr 23 17:52:24.432441 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426794 2570 flags.go:64] FLAG: --sync-frequency="1m0s" Apr 23 17:52:24.432441 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426797 2570 flags.go:64] FLAG: 
--system-cgroups="" Apr 23 17:52:24.432441 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426800 2570 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Apr 23 17:52:24.432441 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426805 2570 flags.go:64] FLAG: --system-reserved-cgroup="" Apr 23 17:52:24.432441 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426808 2570 flags.go:64] FLAG: --tls-cert-file="" Apr 23 17:52:24.432441 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426811 2570 flags.go:64] FLAG: --tls-cipher-suites="[]" Apr 23 17:52:24.432441 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426815 2570 flags.go:64] FLAG: --tls-min-version="" Apr 23 17:52:24.432441 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426818 2570 flags.go:64] FLAG: --tls-private-key-file="" Apr 23 17:52:24.432441 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426821 2570 flags.go:64] FLAG: --topology-manager-policy="none" Apr 23 17:52:24.432441 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426824 2570 flags.go:64] FLAG: --topology-manager-policy-options="" Apr 23 17:52:24.432441 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426827 2570 flags.go:64] FLAG: --topology-manager-scope="container" Apr 23 17:52:24.432441 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426830 2570 flags.go:64] FLAG: --v="2" Apr 23 17:52:24.432441 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426834 2570 flags.go:64] FLAG: --version="false" Apr 23 17:52:24.432441 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426839 2570 flags.go:64] FLAG: --vmodule="" Apr 23 17:52:24.432441 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426843 2570 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Apr 23 17:52:24.432441 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.426847 2570 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Apr 23 17:52:24.432441 ip-10-0-141-209 kubenswrapper[2570]: W0423 
17:52:24.426946 2570 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 23 17:52:24.432441 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.426950 2570 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 23 17:52:24.432441 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.426953 2570 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 23 17:52:24.432441 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.426956 2570 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 23 17:52:24.432441 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.426959 2570 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 23 17:52:24.433015 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.426962 2570 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 23 17:52:24.433015 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.426972 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 23 17:52:24.433015 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.426976 2570 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 23 17:52:24.433015 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.426979 2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 23 17:52:24.433015 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.426982 2570 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 23 17:52:24.433015 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.426985 2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 23 17:52:24.433015 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.426988 2570 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 23 17:52:24.433015 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.426990 2570 feature_gate.go:328] unrecognized feature gate: UpgradeStatus 
Apr 23 17:52:24.433015 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.426993 2570 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 23 17:52:24.433015 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.426995 2570 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 23 17:52:24.433015 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.426998 2570 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 23 17:52:24.433015 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427009 2570 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 23 17:52:24.433015 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427012 2570 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 23 17:52:24.433015 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427015 2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 23 17:52:24.433015 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427017 2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 23 17:52:24.433015 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427020 2570 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 23 17:52:24.433015 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427022 2570 feature_gate.go:328] unrecognized feature gate: Example Apr 23 17:52:24.433015 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427025 2570 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 23 17:52:24.433015 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427027 2570 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 23 17:52:24.433015 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427030 2570 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 23 17:52:24.433558 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427032 2570 
feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 23 17:52:24.433558 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427035 2570 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 23 17:52:24.433558 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427038 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 23 17:52:24.433558 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427040 2570 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 23 17:52:24.433558 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427045 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 23 17:52:24.433558 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427048 2570 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 23 17:52:24.433558 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427050 2570 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 23 17:52:24.433558 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427053 2570 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 23 17:52:24.433558 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427056 2570 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Apr 23 17:52:24.433558 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427060 2570 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 23 17:52:24.433558 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427062 2570 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 23 17:52:24.433558 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427065 2570 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 23 17:52:24.433558 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427067 2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 23 17:52:24.433558 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427070 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 23 17:52:24.433558 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427073 2570 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 23 17:52:24.433558 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427078 2570 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 23 17:52:24.433558 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427081 2570 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 23 17:52:24.433558 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427084 2570 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 23 17:52:24.433558 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427086 2570 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 23 17:52:24.434062 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427089 2570 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 23 17:52:24.434062 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427091 2570 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 23 17:52:24.434062 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427094 2570 
feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 23 17:52:24.434062 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427097 2570 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 23 17:52:24.434062 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427100 2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 23 17:52:24.434062 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427103 2570 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 23 17:52:24.434062 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427105 2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 23 17:52:24.434062 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427108 2570 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 23 17:52:24.434062 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427110 2570 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 23 17:52:24.434062 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427113 2570 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 23 17:52:24.434062 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427115 2570 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 23 17:52:24.434062 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427117 2570 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 23 17:52:24.434062 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427120 2570 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 23 17:52:24.434062 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427124 2570 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Apr 23 17:52:24.434062 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427128 2570 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 23 17:52:24.434062 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427131 2570 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 23 17:52:24.434062 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427134 2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 23 17:52:24.434062 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427138 2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 23 17:52:24.434062 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427141 2570 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 23 17:52:24.434552 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427143 2570 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 23 17:52:24.434552 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427146 2570 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 23 17:52:24.434552 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427148 2570 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 23 17:52:24.434552 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427151 2570 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 23 17:52:24.434552 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427154 2570 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 23 17:52:24.434552 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427156 2570 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 23 17:52:24.434552 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427159 2570 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 23 17:52:24.434552 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427162 2570 
feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 23 17:52:24.434552 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427165 2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 23 17:52:24.434552 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427169 2570 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 23 17:52:24.434552 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427171 2570 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 23 17:52:24.434552 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427174 2570 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 23 17:52:24.434552 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427176 2570 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 23 17:52:24.434552 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427179 2570 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 23 17:52:24.434552 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427181 2570 feature_gate.go:328] unrecognized feature gate: Example2 Apr 23 17:52:24.434552 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427184 2570 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 23 17:52:24.434552 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427186 2570 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 23 17:52:24.434552 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427189 2570 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 23 17:52:24.434552 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427191 2570 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 23 17:52:24.434552 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427194 2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 23 17:52:24.435052 ip-10-0-141-209 
kubenswrapper[2570]: W0423 17:52:24.427197 2570 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 23 17:52:24.435052 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427199 2570 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 23 17:52:24.435052 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.427202 2570 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 23 17:52:24.435052 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.428231 2570 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 23 17:52:24.435052 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.434996 2570 server.go:530] "Kubelet version" kubeletVersion="v1.33.9" Apr 23 17:52:24.435052 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.435011 2570 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 23 17:52:24.435213 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435057 2570 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 23 17:52:24.435213 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435062 2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 23 17:52:24.435213 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435066 2570 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 23 17:52:24.435213 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435069 2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 23 17:52:24.435213 ip-10-0-141-209 kubenswrapper[2570]: 
W0423 17:52:24.435072 2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 23 17:52:24.435213 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435075 2570 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 23 17:52:24.435213 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435077 2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 23 17:52:24.435213 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435080 2570 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 23 17:52:24.435213 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435083 2570 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 23 17:52:24.435213 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435086 2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 23 17:52:24.435213 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435089 2570 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 23 17:52:24.435213 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435092 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 23 17:52:24.435213 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435094 2570 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 23 17:52:24.435213 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435097 2570 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 23 17:52:24.435213 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435100 2570 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 23 17:52:24.435213 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435103 2570 feature_gate.go:328] unrecognized feature gate: Example Apr 23 17:52:24.435213 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435106 2570 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA 
Apr 23 17:52:24.435213 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435108 2570 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 23 17:52:24.435213 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435111 2570 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 23 17:52:24.435213 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435114 2570 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 23 17:52:24.435721 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435117 2570 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 23 17:52:24.435721 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435120 2570 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 23 17:52:24.435721 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435122 2570 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 23 17:52:24.435721 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435125 2570 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 23 17:52:24.435721 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435127 2570 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 23 17:52:24.435721 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435130 2570 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 23 17:52:24.435721 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435132 2570 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 23 17:52:24.435721 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435136 2570 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 23 17:52:24.435721 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435139 2570 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 23 17:52:24.435721 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435141 
2570 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 23 17:52:24.435721 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435144 2570 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 23 17:52:24.435721 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435147 2570 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 23 17:52:24.435721 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435150 2570 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 23 17:52:24.435721 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435152 2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 23 17:52:24.435721 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435155 2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 23 17:52:24.435721 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435158 2570 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 23 17:52:24.435721 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435160 2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 23 17:52:24.435721 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435163 2570 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 23 17:52:24.435721 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435166 2570 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 23 17:52:24.435721 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435168 2570 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 23 17:52:24.436226 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435171 2570 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 23 17:52:24.436226 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435174 2570 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 23 17:52:24.436226 
ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435176 2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 23 17:52:24.436226 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435179 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 23 17:52:24.436226 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435181 2570 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 23 17:52:24.436226 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435184 2570 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 23 17:52:24.436226 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435186 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 23 17:52:24.436226 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435189 2570 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 23 17:52:24.436226 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435192 2570 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 23 17:52:24.436226 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435194 2570 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 23 17:52:24.436226 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435197 2570 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 23 17:52:24.436226 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435200 2570 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 23 17:52:24.436226 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435203 2570 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 23 17:52:24.436226 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435206 2570 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 23 17:52:24.436226 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435208 2570 feature_gate.go:328] 
unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 23 17:52:24.436226 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435211 2570 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 23 17:52:24.436226 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435214 2570 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 23 17:52:24.436226 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435216 2570 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 23 17:52:24.436226 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435219 2570 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 23 17:52:24.436875 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435223 2570 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 23 17:52:24.436875 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435226 2570 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 23 17:52:24.436875 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435228 2570 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 23 17:52:24.436875 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435231 2570 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 23 17:52:24.436875 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435234 2570 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 23 17:52:24.436875 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435237 2570 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 23 17:52:24.436875 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435241 2570 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Apr 23 17:52:24.436875 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435245 2570 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 23 17:52:24.436875 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435247 2570 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 23 17:52:24.436875 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435250 2570 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 23 17:52:24.436875 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435253 2570 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 23 17:52:24.436875 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435255 2570 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 23 17:52:24.436875 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435258 2570 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 23 17:52:24.436875 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435261 2570 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 23 17:52:24.436875 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435263 2570 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 23 17:52:24.436875 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435266 2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 23 17:52:24.436875 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435269 2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 23 17:52:24.436875 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435271 2570 feature_gate.go:328] unrecognized feature gate: Example2 Apr 23 17:52:24.436875 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435274 2570 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 23 17:52:24.436875 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435276 2570 feature_gate.go:328] 
unrecognized feature gate: ManagedBootImagesvSphere Apr 23 17:52:24.437363 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435279 2570 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 23 17:52:24.437363 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435283 2570 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Apr 23 17:52:24.437363 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435287 2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 23 17:52:24.437363 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435291 2570 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 23 17:52:24.437363 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435294 2570 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 23 17:52:24.437363 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435297 2570 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 23 17:52:24.437363 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435300 2570 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 23 17:52:24.437363 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.435305 2570 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 23 17:52:24.437363 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435395 2570 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 23 17:52:24.437363 ip-10-0-141-209 
kubenswrapper[2570]: W0423 17:52:24.435400 2570 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 23 17:52:24.437363 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435402 2570 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 23 17:52:24.437363 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435405 2570 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 23 17:52:24.437363 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435408 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 23 17:52:24.437363 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435423 2570 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 23 17:52:24.437363 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435426 2570 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 23 17:52:24.437752 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435429 2570 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 23 17:52:24.437752 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435432 2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 23 17:52:24.437752 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435435 2570 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 23 17:52:24.437752 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435438 2570 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 23 17:52:24.437752 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435441 2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 23 17:52:24.437752 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435444 2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 23 17:52:24.437752 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435446 2570 
feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 23 17:52:24.437752 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435449 2570 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 23 17:52:24.437752 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435452 2570 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 23 17:52:24.437752 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435454 2570 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 23 17:52:24.437752 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435457 2570 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 23 17:52:24.437752 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435460 2570 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 23 17:52:24.437752 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435462 2570 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 23 17:52:24.437752 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435465 2570 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 23 17:52:24.437752 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435469 2570 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Apr 23 17:52:24.437752 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435473 2570 feature_gate.go:328] unrecognized feature gate: Example Apr 23 17:52:24.437752 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435476 2570 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 23 17:52:24.437752 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435479 2570 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 23 17:52:24.437752 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435482 2570 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 23 17:52:24.438210 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435485 2570 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 23 17:52:24.438210 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435488 2570 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 23 17:52:24.438210 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435491 2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 23 17:52:24.438210 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435494 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 23 17:52:24.438210 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435497 2570 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 23 17:52:24.438210 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435500 2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 23 17:52:24.438210 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435503 2570 feature_gate.go:328] unrecognized feature gate: Example2 Apr 23 17:52:24.438210 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435506 2570 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 23 17:52:24.438210 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435509 2570 feature_gate.go:328] unrecognized feature gate: 
DNSNameResolver Apr 23 17:52:24.438210 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435512 2570 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 23 17:52:24.438210 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435514 2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 23 17:52:24.438210 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435517 2570 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 23 17:52:24.438210 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435520 2570 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 23 17:52:24.438210 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435523 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 23 17:52:24.438210 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435525 2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 23 17:52:24.438210 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435528 2570 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 23 17:52:24.438210 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435530 2570 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 23 17:52:24.438210 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435533 2570 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 23 17:52:24.438210 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435536 2570 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 23 17:52:24.438210 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435538 2570 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 23 17:52:24.438709 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435541 2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 23 17:52:24.438709 ip-10-0-141-209 kubenswrapper[2570]: W0423 
17:52:24.435543 2570 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 23 17:52:24.438709 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435546 2570 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 23 17:52:24.438709 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435549 2570 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 23 17:52:24.438709 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435551 2570 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 23 17:52:24.438709 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435554 2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 23 17:52:24.438709 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435556 2570 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 23 17:52:24.438709 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435558 2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 23 17:52:24.438709 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435561 2570 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 23 17:52:24.438709 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435564 2570 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 23 17:52:24.438709 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435566 2570 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 23 17:52:24.438709 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435569 2570 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 23 17:52:24.438709 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435571 2570 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 23 17:52:24.438709 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435574 2570 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 23 
17:52:24.438709 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435577 2570 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 23 17:52:24.438709 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435579 2570 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 23 17:52:24.438709 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435582 2570 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 23 17:52:24.438709 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435584 2570 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 23 17:52:24.438709 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435587 2570 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 23 17:52:24.439162 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435590 2570 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 23 17:52:24.439162 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435592 2570 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 23 17:52:24.439162 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435595 2570 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 23 17:52:24.439162 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435598 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 23 17:52:24.439162 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435601 2570 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 23 17:52:24.439162 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435604 2570 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 23 17:52:24.439162 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435606 2570 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 23 17:52:24.439162 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435609 2570 
feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 23 17:52:24.439162 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435611 2570 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 23 17:52:24.439162 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435615 2570 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Apr 23 17:52:24.439162 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435618 2570 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 23 17:52:24.439162 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435621 2570 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 23 17:52:24.439162 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435624 2570 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 23 17:52:24.439162 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435626 2570 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 23 17:52:24.439162 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435629 2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 23 17:52:24.439162 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435632 2570 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 23 17:52:24.439162 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435634 2570 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 23 17:52:24.439162 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435637 2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 23 17:52:24.439162 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435639 2570 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 23 17:52:24.439162 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435642 2570 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 
23 17:52:24.439664 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:24.435645 2570 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 23 17:52:24.439664 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.435650 2570 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 23 17:52:24.439664 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.436364 2570 server.go:962] "Client rotation is on, will bootstrap in background" Apr 23 17:52:24.441841 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.441828 2570 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Apr 23 17:52:24.442932 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.442921 2570 server.go:1019] "Starting client certificate rotation" Apr 23 17:52:24.443029 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.443012 2570 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Apr 23 17:52:24.443065 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.443053 2570 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Apr 23 17:52:24.468325 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.468309 2570 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Apr 23 17:52:24.470189 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.470164 2570 dynamic_cafile_content.go:161] "Starting controller" 
name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Apr 23 17:52:24.482409 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.482393 2570 log.go:25] "Validated CRI v1 runtime API" Apr 23 17:52:24.487840 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.487825 2570 log.go:25] "Validated CRI v1 image API" Apr 23 17:52:24.489564 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.489551 2570 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 23 17:52:24.495264 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.495243 2570 fs.go:135] Filesystem UUIDs: map[5345856c-cd1f-4b8a-a006-030455a324a7:/dev/nvme0n1p3 7B77-95E7:/dev/nvme0n1p2 eb75dd06-01a0-4122-a63c-16397c5b119c:/dev/nvme0n1p4] Apr 23 17:52:24.495326 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.495263 2570 fs.go:136] Filesystem partitions: map[/dev/nvme0n1p3:{mountpoint:/boot major:259 minor:3 fsType:ext4 blockSize:0} /dev/nvme0n1p4:{mountpoint:/var major:259 minor:4 fsType:xfs blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Apr 23 17:52:24.495951 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.495932 2570 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Apr 23 17:52:24.500606 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.500402 2570 manager.go:217] Machine: {Timestamp:2026-04-23 17:52:24.499262772 +0000 UTC m=+0.439276905 CPUVendorID:GenuineIntel NumCores:8 NumPhysicalCores:4 NumSockets:1 CpuFrequency:3101097 MemoryCapacity:33164496896 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} 
HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ec281247cabaca4682fae51f6d77bb93 SystemUUID:ec281247-caba-ca46-82fa-e51f6d77bb93 BootID:9d6a12c3-1106-44de-be77-d14775250d04 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6632902656 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/nvme0n1p4 DeviceMajor:259 DeviceMinor:4 Capacity:128243970048 Type:vfs Inodes:62651840 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6103040 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16582250496 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/nvme0n1p3 DeviceMajor:259 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16582246400 Type:vfs Inodes:4048400 HasInodes:true}] DiskMap:map[259:0:{Name:nvme0n1 Major:259 Minor:0 Size:128849018880 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:02:75:65:b7:2c:b7 Speed:0 Mtu:9001} {Name:ens5 MacAddress:02:75:65:b7:2c:b7 Speed:0 Mtu:9001} {Name:ovs-system MacAddress:a6:fe:de:a1:cf:2e Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33164496896 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 4] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:1 Threads:[1 5] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:2 Threads:[2 6] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:3 Threads:[3 7] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:1048576 Type:Unified 
Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:}] Caches:[{Id:0 Size:37486592 Type:Unified Level:3}] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Apr 23 17:52:24.500606 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.500598 2570 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Apr 23 17:52:24.500724 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.500689 2570 manager.go:233] Version: {KernelVersion:5.14.0-570.107.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20260414-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Apr 23 17:52:24.502243 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.502219 2570 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 23 17:52:24.502383 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.502250 2570 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-10-0-141-209.ec2.internal","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 23 17:52:24.502441 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.502398 2570 topology_manager.go:138] "Creating topology manager with none policy" Apr 23 17:52:24.502441 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.502408 2570 container_manager_linux.go:306] "Creating device plugin manager" Apr 23 17:52:24.502441 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.502439 
2570 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Apr 23 17:52:24.503803 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.503793 2570 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Apr 23 17:52:24.505007 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.504997 2570 state_mem.go:36] "Initialized new in-memory state store" Apr 23 17:52:24.505118 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.505110 2570 server.go:1267] "Using root directory" path="/var/lib/kubelet" Apr 23 17:52:24.507183 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.507173 2570 kubelet.go:491] "Attempting to sync node with API server" Apr 23 17:52:24.507216 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.507192 2570 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 23 17:52:24.507216 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.507207 2570 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Apr 23 17:52:24.507216 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.507216 2570 kubelet.go:397] "Adding apiserver pod source" Apr 23 17:52:24.507338 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.507224 2570 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 23 17:52:24.508606 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.508590 2570 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Apr 23 17:52:24.508606 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.508608 2570 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Apr 23 17:52:24.511839 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.511825 2570 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.10-2.rhaos4.20.gita4d0894.el9" apiVersion="v1" Apr 23 17:52:24.513020 ip-10-0-141-209 
kubenswrapper[2570]: I0423 17:52:24.513007 2570 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 23 17:52:24.515062 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.515047 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Apr 23 17:52:24.515062 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.515064 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Apr 23 17:52:24.515155 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.515070 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Apr 23 17:52:24.515155 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.515076 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Apr 23 17:52:24.515155 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.515082 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Apr 23 17:52:24.515155 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.515097 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Apr 23 17:52:24.515155 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.515103 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Apr 23 17:52:24.515155 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.515109 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Apr 23 17:52:24.515155 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.515115 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc" Apr 23 17:52:24.515155 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.515121 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Apr 23 17:52:24.515155 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.515132 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Apr 23 
17:52:24.515155 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.515142 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Apr 23 17:52:24.515954 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.515945 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Apr 23 17:52:24.515954 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.515953 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Apr 23 17:52:24.519250 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.519235 2570 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 23 17:52:24.519308 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.519288 2570 server.go:1295] "Started kubelet" Apr 23 17:52:24.519401 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.519375 2570 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 23 17:52:24.519501 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.519443 2570 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 23 17:52:24.519572 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.519521 2570 server_v1.go:47] "podresources" method="list" useActivePods=true Apr 23 17:52:24.520087 ip-10-0-141-209 systemd[1]: Started Kubernetes Kubelet. 
Apr 23 17:52:24.520232 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:24.520213 2570 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-141-209.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 17:52:24.520375 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.520348 2570 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-141-209.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:24.520616 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:24.520564 2570 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 23 17:52:24.520649 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.520632 2570 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 23 17:52:24.520859 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.520846 2570 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-gc4nb" Apr 23 17:52:24.521326 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.521314 2570 server.go:317] "Adding debug handlers to kubelet server" Apr 23 17:52:24.526595 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.526574 2570 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-gc4nb" Apr 23 17:52:24.528275 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:24.526620 2570 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-141-209.ec2.internal.18a90dd6e0c08390 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-141-209.ec2.internal,UID:ip-10-0-141-209.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-10-0-141-209.ec2.internal,},FirstTimestamp:2026-04-23 17:52:24.519246736 +0000 UTC m=+0.459260867,LastTimestamp:2026-04-23 17:52:24.519246736 +0000 UTC m=+0.459260867,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-141-209.ec2.internal,}" Apr 23 17:52:24.529398 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.529382 2570 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Apr 23 17:52:24.530744 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.530722 2570 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 23 17:52:24.531684 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:24.531662 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-141-209.ec2.internal\" not found" Apr 23 17:52:24.532589 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.532556 2570 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 23 17:52:24.533247 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.533221 2570 volume_manager.go:295] "The desired_state_of_world populator starts" Apr 23 17:52:24.533805 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.533253 2570 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 23 17:52:24.533921 ip-10-0-141-209 kubenswrapper[2570]: I0423 
17:52:24.533863 2570 factory.go:55] Registering systemd factory Apr 23 17:52:24.533986 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.533937 2570 factory.go:223] Registration of the systemd container factory successfully Apr 23 17:52:24.534044 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.534028 2570 reconstruct.go:97] "Volume reconstruction finished" Apr 23 17:52:24.534044 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.534039 2570 reconciler.go:26] "Reconciler: start to sync state" Apr 23 17:52:24.534161 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.534143 2570 factory.go:153] Registering CRI-O factory Apr 23 17:52:24.534248 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.534188 2570 factory.go:223] Registration of the crio container factory successfully Apr 23 17:52:24.534248 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.534242 2570 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Apr 23 17:52:24.534354 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.534265 2570 factory.go:103] Registering Raw factory Apr 23 17:52:24.534354 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.534280 2570 manager.go:1196] Started watching for new ooms in manager Apr 23 17:52:24.535121 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.535107 2570 manager.go:319] Starting recovery of all containers Apr 23 17:52:24.535952 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:24.535930 2570 kubelet.go:1618] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Apr 23 17:52:24.543007 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.542988 2570 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Apr 23 17:52:24.545018 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.544998 2570 manager.go:324] Recovery completed Apr 23 17:52:24.547155 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:24.547140 2570 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-10-0-141-209.ec2.internal\" not found" node="ip-10-0-141-209.ec2.internal" Apr 23 17:52:24.549136 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.549124 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:24.551665 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.551648 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-209.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:24.551727 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.551676 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-209.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:24.551727 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.551687 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-209.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:24.552117 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.552105 2570 cpu_manager.go:222] "Starting CPU manager" policy="none" Apr 23 17:52:24.552117 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.552115 2570 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Apr 23 17:52:24.552197 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.552131 2570 state_mem.go:36] "Initialized new in-memory state store" Apr 23 17:52:24.555557 
ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.555545 2570 policy_none.go:49] "None policy: Start" Apr 23 17:52:24.555605 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.555564 2570 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 23 17:52:24.555605 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.555574 2570 state_mem.go:35] "Initializing new in-memory state store" Apr 23 17:52:24.601963 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.601943 2570 manager.go:341] "Starting Device Plugin manager" Apr 23 17:52:24.616269 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:24.602006 2570 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 23 17:52:24.616269 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.602020 2570 server.go:85] "Starting device plugin registration server" Apr 23 17:52:24.616269 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.602629 2570 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 23 17:52:24.616269 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.602646 2570 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 23 17:52:24.616269 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.602798 2570 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Apr 23 17:52:24.616269 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.602871 2570 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Apr 23 17:52:24.616269 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.602900 2570 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 23 17:52:24.616269 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:24.603554 2570 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="non-existent label \"crio-containers\"" Apr 23 17:52:24.616269 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:24.603636 2570 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-141-209.ec2.internal\" not found" Apr 23 17:52:24.670629 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.670577 2570 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 23 17:52:24.671699 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.671683 2570 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 23 17:52:24.671764 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.671714 2570 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 23 17:52:24.671764 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.671731 2570 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 23 17:52:24.671764 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.671740 2570 kubelet.go:2451] "Starting kubelet main sync loop" Apr 23 17:52:24.671896 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:24.671776 2570 kubelet.go:2475] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Apr 23 17:52:24.675253 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.675232 2570 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Apr 23 17:52:24.703400 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.703381 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:24.704296 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.704282 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-209.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:24.704350 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.704303 2570 
kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-209.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:24.704350 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.704313 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-209.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:24.704350 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.704331 2570 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-141-209.ec2.internal" Apr 23 17:52:24.710853 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.710840 2570 kubelet_node_status.go:81] "Successfully registered node" node="ip-10-0-141-209.ec2.internal" Apr 23 17:52:24.710900 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:24.710857 2570 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-141-209.ec2.internal\": node \"ip-10-0-141-209.ec2.internal\" not found" Apr 23 17:52:24.723074 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:24.723054 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-141-209.ec2.internal\" not found" Apr 23 17:52:24.772441 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.772395 2570 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-141-209.ec2.internal","kube-system/kube-apiserver-proxy-ip-10-0-141-209.ec2.internal"] Apr 23 17:52:24.772523 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.772464 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:24.773642 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.773628 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-209.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:24.773691 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.773653 2570 kubelet_node_status.go:736] "Recording 
event message for node" node="ip-10-0-141-209.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:24.773691 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.773663 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-209.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:24.775798 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.775786 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:24.775929 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.775915 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-141-209.ec2.internal" Apr 23 17:52:24.775966 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.775941 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:24.776460 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.776442 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-209.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:24.776538 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.776464 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-209.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:24.776538 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.776473 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-209.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:24.776538 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.776450 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-209.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:24.776538 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.776504 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-209.ec2.internal" 
event="NodeHasNoDiskPressure" Apr 23 17:52:24.776538 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.776519 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-209.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:24.778664 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.778650 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-141-209.ec2.internal" Apr 23 17:52:24.778729 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.778673 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:24.779284 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.779272 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-209.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:24.779346 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.779295 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-209.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:24.779346 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.779307 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-209.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:24.791903 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:24.791879 2570 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-141-209.ec2.internal\" not found" node="ip-10-0-141-209.ec2.internal" Apr 23 17:52:24.795721 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:24.795705 2570 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-141-209.ec2.internal\" not found" node="ip-10-0-141-209.ec2.internal" Apr 23 17:52:24.823785 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:24.823764 2570 kubelet_node_status.go:515] "Error getting the current node 
from lister" err="node \"ip-10-0-141-209.ec2.internal\" not found" Apr 23 17:52:24.836259 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.836237 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/7d4522ab95c28bfad158f8d5f296881d-config\") pod \"kube-apiserver-proxy-ip-10-0-141-209.ec2.internal\" (UID: \"7d4522ab95c28bfad158f8d5f296881d\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-141-209.ec2.internal" Apr 23 17:52:24.836348 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.836262 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/7e7d0ba399fc779164fa43f2003d1693-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-141-209.ec2.internal\" (UID: \"7e7d0ba399fc779164fa43f2003d1693\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-141-209.ec2.internal" Apr 23 17:52:24.836348 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.836278 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7e7d0ba399fc779164fa43f2003d1693-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-141-209.ec2.internal\" (UID: \"7e7d0ba399fc779164fa43f2003d1693\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-141-209.ec2.internal" Apr 23 17:52:24.924762 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:24.924719 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-141-209.ec2.internal\" not found" Apr 23 17:52:24.937117 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.937093 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7e7d0ba399fc779164fa43f2003d1693-var-lib-kubelet\") pod 
\"kube-rbac-proxy-crio-ip-10-0-141-209.ec2.internal\" (UID: \"7e7d0ba399fc779164fa43f2003d1693\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-141-209.ec2.internal" Apr 23 17:52:24.937208 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.937123 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/7d4522ab95c28bfad158f8d5f296881d-config\") pod \"kube-apiserver-proxy-ip-10-0-141-209.ec2.internal\" (UID: \"7d4522ab95c28bfad158f8d5f296881d\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-141-209.ec2.internal" Apr 23 17:52:24.937208 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.937140 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/7e7d0ba399fc779164fa43f2003d1693-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-141-209.ec2.internal\" (UID: \"7e7d0ba399fc779164fa43f2003d1693\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-141-209.ec2.internal" Apr 23 17:52:24.937208 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.937185 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7e7d0ba399fc779164fa43f2003d1693-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-141-209.ec2.internal\" (UID: \"7e7d0ba399fc779164fa43f2003d1693\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-141-209.ec2.internal" Apr 23 17:52:24.937318 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.937178 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/7e7d0ba399fc779164fa43f2003d1693-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-141-209.ec2.internal\" (UID: \"7e7d0ba399fc779164fa43f2003d1693\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-141-209.ec2.internal" Apr 23 
17:52:24.937318 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:24.937190 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/7d4522ab95c28bfad158f8d5f296881d-config\") pod \"kube-apiserver-proxy-ip-10-0-141-209.ec2.internal\" (UID: \"7d4522ab95c28bfad158f8d5f296881d\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-141-209.ec2.internal" Apr 23 17:52:25.025491 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:25.025465 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-141-209.ec2.internal\" not found" Apr 23 17:52:25.093982 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:25.093961 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-141-209.ec2.internal" Apr 23 17:52:25.097896 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:25.097868 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-141-209.ec2.internal" Apr 23 17:52:25.126492 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:25.126471 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-141-209.ec2.internal\" not found" Apr 23 17:52:25.227077 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:25.227022 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-141-209.ec2.internal\" not found" Apr 23 17:52:25.327651 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:25.327630 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-141-209.ec2.internal\" not found" Apr 23 17:52:25.428207 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:25.428180 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-141-209.ec2.internal\" not found" Apr 23 17:52:25.442750 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:25.442732 2570 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Apr 23 17:52:25.442908 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:25.442891 2570 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Apr 23 17:52:25.442952 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:25.442924 2570 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Apr 23 17:52:25.529322 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:25.529298 2570 kubelet_node_status.go:515] "Error 
getting the current node from lister" err="node \"ip-10-0-141-209.ec2.internal\" not found" Apr 23 17:52:25.529450 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:25.529334 2570 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2028-04-22 17:47:24 +0000 UTC" deadline="2027-09-29 18:19:14.359481872 +0000 UTC" Apr 23 17:52:25.529450 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:25.529375 2570 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="12576h26m48.830111052s" Apr 23 17:52:25.529527 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:25.529479 2570 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Apr 23 17:52:25.537360 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:25.537342 2570 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Apr 23 17:52:25.541742 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:25.541719 2570 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Apr 23 17:52:25.575507 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:25.575485 2570 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-hhgtr" Apr 23 17:52:25.581706 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:25.581687 2570 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-hhgtr" Apr 23 17:52:25.623287 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:25.623271 2570 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Apr 23 17:52:25.626970 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:25.626949 2570 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d4522ab95c28bfad158f8d5f296881d.slice/crio-0b6321b2a2c518e741ca8f7915141d173a9aa28dd5a42aed46f1d18dcaa5cef6 WatchSource:0}: Error finding container 0b6321b2a2c518e741ca8f7915141d173a9aa28dd5a42aed46f1d18dcaa5cef6: Status 404 returned error can't find the container with id 0b6321b2a2c518e741ca8f7915141d173a9aa28dd5a42aed46f1d18dcaa5cef6 Apr 23 17:52:25.627426 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:25.627394 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e7d0ba399fc779164fa43f2003d1693.slice/crio-b8e1726ef22058a51e41b36241acdf231124c5586245fe58197528f489e14d2a WatchSource:0}: Error finding container b8e1726ef22058a51e41b36241acdf231124c5586245fe58197528f489e14d2a: Status 404 returned error can't find the container with id b8e1726ef22058a51e41b36241acdf231124c5586245fe58197528f489e14d2a Apr 23 17:52:25.630945 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:25.630926 2570 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-141-209.ec2.internal" Apr 23 17:52:25.631649 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:25.631635 2570 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 23 17:52:25.643717 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:25.643698 2570 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 23 17:52:25.644786 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:25.644774 2570 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-141-209.ec2.internal" Apr 23 17:52:25.654942 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:25.654922 2570 warnings.go:110] "Warning: 
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 23 17:52:25.674542 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:25.674507 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-141-209.ec2.internal" event={"ID":"7e7d0ba399fc779164fa43f2003d1693","Type":"ContainerStarted","Data":"b8e1726ef22058a51e41b36241acdf231124c5586245fe58197528f489e14d2a"} Apr 23 17:52:25.675366 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:25.675344 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-141-209.ec2.internal" event={"ID":"7d4522ab95c28bfad158f8d5f296881d","Type":"ContainerStarted","Data":"0b6321b2a2c518e741ca8f7915141d173a9aa28dd5a42aed46f1d18dcaa5cef6"} Apr 23 17:52:26.491712 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.491683 2570 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Apr 23 17:52:26.508401 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.508381 2570 apiserver.go:52] "Watching apiserver" Apr 23 17:52:26.512210 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.512189 2570 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Apr 23 17:52:26.512468 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.512446 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-7nkkq","openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-141-209.ec2.internal","kube-system/konnectivity-agent-t7tbj","kube-system/kube-apiserver-proxy-ip-10-0-141-209.ec2.internal","openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-hp58t","openshift-cluster-node-tuning-operator/tuned-2w9gd"] Apr 23 17:52:26.516647 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.516626 2570 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-image-registry/node-ca-7nkkq" Apr 23 17:52:26.516836 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.516805 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-t7tbj" Apr 23 17:52:26.518652 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.518612 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"default-dockercfg-vjs8g\"" Apr 23 17:52:26.518741 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.518666 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kube-system\"/\"konnectivity-ca-bundle\"" Apr 23 17:52:26.518802 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.518762 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Apr 23 17:52:26.518888 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.518617 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Apr 23 17:52:26.519003 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.518928 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Apr 23 17:52:26.519003 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.518986 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"konnectivity-agent\"" Apr 23 17:52:26.519695 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.519675 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-4nmd5\"" Apr 23 17:52:26.522908 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.522889 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-hp58t" Apr 23 17:52:26.523322 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.523303 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.524636 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.524618 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"openshift-service-ca.crt\"" Apr 23 17:52:26.524855 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.524832 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-sa-dockercfg-bq2xd\"" Apr 23 17:52:26.524952 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.524837 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-metrics-serving-cert\"" Apr 23 17:52:26.524952 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.524932 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"kube-root-ca.crt\"" Apr 23 17:52:26.525123 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.525109 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"tuned-dockercfg-gzl8x\"" Apr 23 17:52:26.525233 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.525215 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"kube-root-ca.crt\"" Apr 23 17:52:26.525308 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.525237 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"openshift-service-ca.crt\"" Apr 23 17:52:26.532962 ip-10-0-141-209 kubenswrapper[2570]: I0423 
17:52:26.532949 2570 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 23 17:52:26.545895 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.545880 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/ed7906bd-d603-4e6e-8e63-369f394f24b0-agent-certs\") pod \"konnectivity-agent-t7tbj\" (UID: \"ed7906bd-d603-4e6e-8e63-369f394f24b0\") " pod="kube-system/konnectivity-agent-t7tbj" Apr 23 17:52:26.546001 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.545911 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ceb5f21d-134d-4d0c-9924-32d81a4f5aab-kubelet-dir\") pod \"aws-ebs-csi-driver-node-hp58t\" (UID: \"ceb5f21d-134d-4d0c-9924-32d81a4f5aab\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-hp58t" Apr 23 17:52:26.546001 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.545938 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/2d4343c2-343d-401d-99d8-75998e07c483-etc-systemd\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.546093 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.546025 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2d4343c2-343d-401d-99d8-75998e07c483-run\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.546093 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.546063 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/ceb5f21d-134d-4d0c-9924-32d81a4f5aab-socket-dir\") pod \"aws-ebs-csi-driver-node-hp58t\" (UID: \"ceb5f21d-134d-4d0c-9924-32d81a4f5aab\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-hp58t" Apr 23 17:52:26.546154 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.546090 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d4343c2-343d-401d-99d8-75998e07c483-lib-modules\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.546154 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.546113 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/2d4343c2-343d-401d-99d8-75998e07c483-etc-tuned\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.546154 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.546133 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/65fafb1b-d34c-41dc-86f9-c06f2cf0487e-host\") pod \"node-ca-7nkkq\" (UID: \"65fafb1b-d34c-41dc-86f9-c06f2cf0487e\") " pod="openshift-image-registry/node-ca-7nkkq" Apr 23 17:52:26.546295 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.546157 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/ceb5f21d-134d-4d0c-9924-32d81a4f5aab-sys-fs\") pod \"aws-ebs-csi-driver-node-hp58t\" (UID: \"ceb5f21d-134d-4d0c-9924-32d81a4f5aab\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-hp58t" Apr 23 17:52:26.546295 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.546180 2570 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5l5g\" (UniqueName: \"kubernetes.io/projected/ceb5f21d-134d-4d0c-9924-32d81a4f5aab-kube-api-access-f5l5g\") pod \"aws-ebs-csi-driver-node-hp58t\" (UID: \"ceb5f21d-134d-4d0c-9924-32d81a4f5aab\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-hp58t" Apr 23 17:52:26.546295 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.546202 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/2d4343c2-343d-401d-99d8-75998e07c483-etc-sysconfig\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.546295 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.546236 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d4343c2-343d-401d-99d8-75998e07c483-etc-kubernetes\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.546295 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.546268 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/65fafb1b-d34c-41dc-86f9-c06f2cf0487e-serviceca\") pod \"node-ca-7nkkq\" (UID: \"65fafb1b-d34c-41dc-86f9-c06f2cf0487e\") " pod="openshift-image-registry/node-ca-7nkkq" Apr 23 17:52:26.546560 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.546301 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9bxn\" (UniqueName: \"kubernetes.io/projected/65fafb1b-d34c-41dc-86f9-c06f2cf0487e-kube-api-access-w9bxn\") pod \"node-ca-7nkkq\" (UID: \"65fafb1b-d34c-41dc-86f9-c06f2cf0487e\") " 
pod="openshift-image-registry/node-ca-7nkkq" Apr 23 17:52:26.546560 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.546323 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/ceb5f21d-134d-4d0c-9924-32d81a4f5aab-etc-selinux\") pod \"aws-ebs-csi-driver-node-hp58t\" (UID: \"ceb5f21d-134d-4d0c-9924-32d81a4f5aab\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-hp58t" Apr 23 17:52:26.546560 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.546340 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2d4343c2-343d-401d-99d8-75998e07c483-var-lib-kubelet\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.546560 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.546373 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/ceb5f21d-134d-4d0c-9924-32d81a4f5aab-device-dir\") pod \"aws-ebs-csi-driver-node-hp58t\" (UID: \"ceb5f21d-134d-4d0c-9924-32d81a4f5aab\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-hp58t" Apr 23 17:52:26.546560 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.546402 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/2d4343c2-343d-401d-99d8-75998e07c483-etc-sysctl-conf\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.546560 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.546442 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/2d4343c2-343d-401d-99d8-75998e07c483-sys\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.546560 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.546464 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbwp4\" (UniqueName: \"kubernetes.io/projected/2d4343c2-343d-401d-99d8-75998e07c483-kube-api-access-cbwp4\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.546560 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.546483 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ceb5f21d-134d-4d0c-9924-32d81a4f5aab-registration-dir\") pod \"aws-ebs-csi-driver-node-hp58t\" (UID: \"ceb5f21d-134d-4d0c-9924-32d81a4f5aab\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-hp58t" Apr 23 17:52:26.546560 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.546505 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/2d4343c2-343d-401d-99d8-75998e07c483-etc-modprobe-d\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.546560 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.546536 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/ed7906bd-d603-4e6e-8e63-369f394f24b0-konnectivity-ca\") pod \"konnectivity-agent-t7tbj\" (UID: \"ed7906bd-d603-4e6e-8e63-369f394f24b0\") " pod="kube-system/konnectivity-agent-t7tbj" Apr 23 17:52:26.546560 ip-10-0-141-209 kubenswrapper[2570]: I0423 
17:52:26.546562 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/2d4343c2-343d-401d-99d8-75998e07c483-etc-sysctl-d\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.547131 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.546584 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2d4343c2-343d-401d-99d8-75998e07c483-host\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.547131 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.546608 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2d4343c2-343d-401d-99d8-75998e07c483-tmp\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.582490 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.582461 2570 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-22 17:47:25 +0000 UTC" deadline="2027-12-23 10:32:01.054387655 +0000 UTC" Apr 23 17:52:26.582490 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.582485 2570 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="14608h39m34.471905705s" Apr 23 17:52:26.646317 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.646299 2570 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Apr 23 17:52:26.646819 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.646801 2570 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/2d4343c2-343d-401d-99d8-75998e07c483-etc-tuned\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.646887 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.646839 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/65fafb1b-d34c-41dc-86f9-c06f2cf0487e-host\") pod \"node-ca-7nkkq\" (UID: \"65fafb1b-d34c-41dc-86f9-c06f2cf0487e\") " pod="openshift-image-registry/node-ca-7nkkq" Apr 23 17:52:26.646887 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.646857 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/ceb5f21d-134d-4d0c-9924-32d81a4f5aab-sys-fs\") pod \"aws-ebs-csi-driver-node-hp58t\" (UID: \"ceb5f21d-134d-4d0c-9924-32d81a4f5aab\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-hp58t" Apr 23 17:52:26.646887 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.646873 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f5l5g\" (UniqueName: \"kubernetes.io/projected/ceb5f21d-134d-4d0c-9924-32d81a4f5aab-kube-api-access-f5l5g\") pod \"aws-ebs-csi-driver-node-hp58t\" (UID: \"ceb5f21d-134d-4d0c-9924-32d81a4f5aab\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-hp58t" Apr 23 17:52:26.647075 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.646888 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/2d4343c2-343d-401d-99d8-75998e07c483-etc-sysconfig\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.647075 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.646892 2570 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/65fafb1b-d34c-41dc-86f9-c06f2cf0487e-host\") pod \"node-ca-7nkkq\" (UID: \"65fafb1b-d34c-41dc-86f9-c06f2cf0487e\") " pod="openshift-image-registry/node-ca-7nkkq" Apr 23 17:52:26.647075 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.646904 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d4343c2-343d-401d-99d8-75998e07c483-etc-kubernetes\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.647075 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.646925 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/65fafb1b-d34c-41dc-86f9-c06f2cf0487e-serviceca\") pod \"node-ca-7nkkq\" (UID: \"65fafb1b-d34c-41dc-86f9-c06f2cf0487e\") " pod="openshift-image-registry/node-ca-7nkkq" Apr 23 17:52:26.647075 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.646949 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w9bxn\" (UniqueName: \"kubernetes.io/projected/65fafb1b-d34c-41dc-86f9-c06f2cf0487e-kube-api-access-w9bxn\") pod \"node-ca-7nkkq\" (UID: \"65fafb1b-d34c-41dc-86f9-c06f2cf0487e\") " pod="openshift-image-registry/node-ca-7nkkq" Apr 23 17:52:26.647075 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.646956 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/2d4343c2-343d-401d-99d8-75998e07c483-etc-sysconfig\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.647075 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.646980 2570 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/ceb5f21d-134d-4d0c-9924-32d81a4f5aab-etc-selinux\") pod \"aws-ebs-csi-driver-node-hp58t\" (UID: \"ceb5f21d-134d-4d0c-9924-32d81a4f5aab\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-hp58t" Apr 23 17:52:26.647075 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.646992 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d4343c2-343d-401d-99d8-75998e07c483-etc-kubernetes\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.647075 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.647011 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2d4343c2-343d-401d-99d8-75998e07c483-var-lib-kubelet\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.647075 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.646949 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/ceb5f21d-134d-4d0c-9924-32d81a4f5aab-sys-fs\") pod \"aws-ebs-csi-driver-node-hp58t\" (UID: \"ceb5f21d-134d-4d0c-9924-32d81a4f5aab\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-hp58t" Apr 23 17:52:26.647075 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.647052 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/ceb5f21d-134d-4d0c-9924-32d81a4f5aab-device-dir\") pod \"aws-ebs-csi-driver-node-hp58t\" (UID: \"ceb5f21d-134d-4d0c-9924-32d81a4f5aab\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-hp58t" Apr 23 17:52:26.647507 ip-10-0-141-209 kubenswrapper[2570]: 
I0423 17:52:26.647084 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/2d4343c2-343d-401d-99d8-75998e07c483-etc-sysctl-conf\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.647507 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.647080 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2d4343c2-343d-401d-99d8-75998e07c483-var-lib-kubelet\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.647507 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.647089 2570 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Apr 23 17:52:26.647507 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.647125 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/ceb5f21d-134d-4d0c-9924-32d81a4f5aab-etc-selinux\") pod \"aws-ebs-csi-driver-node-hp58t\" (UID: \"ceb5f21d-134d-4d0c-9924-32d81a4f5aab\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-hp58t" Apr 23 17:52:26.647507 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.647137 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/ceb5f21d-134d-4d0c-9924-32d81a4f5aab-device-dir\") pod \"aws-ebs-csi-driver-node-hp58t\" (UID: \"ceb5f21d-134d-4d0c-9924-32d81a4f5aab\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-hp58t" Apr 23 17:52:26.647507 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.647165 2570 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2d4343c2-343d-401d-99d8-75998e07c483-sys\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.647507 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.647193 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cbwp4\" (UniqueName: \"kubernetes.io/projected/2d4343c2-343d-401d-99d8-75998e07c483-kube-api-access-cbwp4\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.647507 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.647224 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ceb5f21d-134d-4d0c-9924-32d81a4f5aab-registration-dir\") pod \"aws-ebs-csi-driver-node-hp58t\" (UID: \"ceb5f21d-134d-4d0c-9924-32d81a4f5aab\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-hp58t" Apr 23 17:52:26.647507 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.647234 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2d4343c2-343d-401d-99d8-75998e07c483-sys\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.647507 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.647238 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/2d4343c2-343d-401d-99d8-75998e07c483-etc-sysctl-conf\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.647507 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.647249 2570 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/2d4343c2-343d-401d-99d8-75998e07c483-etc-modprobe-d\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.647507 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.647275 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/ed7906bd-d603-4e6e-8e63-369f394f24b0-konnectivity-ca\") pod \"konnectivity-agent-t7tbj\" (UID: \"ed7906bd-d603-4e6e-8e63-369f394f24b0\") " pod="kube-system/konnectivity-agent-t7tbj" Apr 23 17:52:26.647507 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.647287 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ceb5f21d-134d-4d0c-9924-32d81a4f5aab-registration-dir\") pod \"aws-ebs-csi-driver-node-hp58t\" (UID: \"ceb5f21d-134d-4d0c-9924-32d81a4f5aab\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-hp58t" Apr 23 17:52:26.647507 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.647299 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/2d4343c2-343d-401d-99d8-75998e07c483-etc-sysctl-d\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.647507 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.647322 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2d4343c2-343d-401d-99d8-75998e07c483-host\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.647507 ip-10-0-141-209 kubenswrapper[2570]: I0423 
17:52:26.647344 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2d4343c2-343d-401d-99d8-75998e07c483-tmp\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.647507 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.647360 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/65fafb1b-d34c-41dc-86f9-c06f2cf0487e-serviceca\") pod \"node-ca-7nkkq\" (UID: \"65fafb1b-d34c-41dc-86f9-c06f2cf0487e\") " pod="openshift-image-registry/node-ca-7nkkq" Apr 23 17:52:26.647507 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.647369 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/ed7906bd-d603-4e6e-8e63-369f394f24b0-agent-certs\") pod \"konnectivity-agent-t7tbj\" (UID: \"ed7906bd-d603-4e6e-8e63-369f394f24b0\") " pod="kube-system/konnectivity-agent-t7tbj" Apr 23 17:52:26.648149 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.647389 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/2d4343c2-343d-401d-99d8-75998e07c483-etc-modprobe-d\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.648149 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.647395 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ceb5f21d-134d-4d0c-9924-32d81a4f5aab-kubelet-dir\") pod \"aws-ebs-csi-driver-node-hp58t\" (UID: \"ceb5f21d-134d-4d0c-9924-32d81a4f5aab\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-hp58t" Apr 23 17:52:26.648149 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.647406 2570 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2d4343c2-343d-401d-99d8-75998e07c483-host\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.648149 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.647474 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/2d4343c2-343d-401d-99d8-75998e07c483-etc-sysctl-d\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.648149 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.647494 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/2d4343c2-343d-401d-99d8-75998e07c483-etc-systemd\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.648149 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.647522 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2d4343c2-343d-401d-99d8-75998e07c483-run\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.648149 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.647532 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/2d4343c2-343d-401d-99d8-75998e07c483-etc-systemd\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.648149 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.647495 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ceb5f21d-134d-4d0c-9924-32d81a4f5aab-kubelet-dir\") pod \"aws-ebs-csi-driver-node-hp58t\" (UID: \"ceb5f21d-134d-4d0c-9924-32d81a4f5aab\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-hp58t" Apr 23 17:52:26.648149 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.647548 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ceb5f21d-134d-4d0c-9924-32d81a4f5aab-socket-dir\") pod \"aws-ebs-csi-driver-node-hp58t\" (UID: \"ceb5f21d-134d-4d0c-9924-32d81a4f5aab\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-hp58t" Apr 23 17:52:26.648149 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.647575 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d4343c2-343d-401d-99d8-75998e07c483-lib-modules\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.648149 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.647606 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2d4343c2-343d-401d-99d8-75998e07c483-run\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.648149 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.647667 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d4343c2-343d-401d-99d8-75998e07c483-lib-modules\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.648149 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.647716 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ceb5f21d-134d-4d0c-9924-32d81a4f5aab-socket-dir\") pod \"aws-ebs-csi-driver-node-hp58t\" (UID: \"ceb5f21d-134d-4d0c-9924-32d81a4f5aab\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-hp58t" Apr 23 17:52:26.648149 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.647796 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/ed7906bd-d603-4e6e-8e63-369f394f24b0-konnectivity-ca\") pod \"konnectivity-agent-t7tbj\" (UID: \"ed7906bd-d603-4e6e-8e63-369f394f24b0\") " pod="kube-system/konnectivity-agent-t7tbj" Apr 23 17:52:26.649946 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.649740 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/2d4343c2-343d-401d-99d8-75998e07c483-etc-tuned\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.650023 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.649975 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/ed7906bd-d603-4e6e-8e63-369f394f24b0-agent-certs\") pod \"konnectivity-agent-t7tbj\" (UID: \"ed7906bd-d603-4e6e-8e63-369f394f24b0\") " pod="kube-system/konnectivity-agent-t7tbj" Apr 23 17:52:26.650023 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.649760 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2d4343c2-343d-401d-99d8-75998e07c483-tmp\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.655267 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.655247 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9bxn\" (UniqueName: 
\"kubernetes.io/projected/65fafb1b-d34c-41dc-86f9-c06f2cf0487e-kube-api-access-w9bxn\") pod \"node-ca-7nkkq\" (UID: \"65fafb1b-d34c-41dc-86f9-c06f2cf0487e\") " pod="openshift-image-registry/node-ca-7nkkq" Apr 23 17:52:26.655493 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.655472 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbwp4\" (UniqueName: \"kubernetes.io/projected/2d4343c2-343d-401d-99d8-75998e07c483-kube-api-access-cbwp4\") pod \"tuned-2w9gd\" (UID: \"2d4343c2-343d-401d-99d8-75998e07c483\") " pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:26.655567 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.655494 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5l5g\" (UniqueName: \"kubernetes.io/projected/ceb5f21d-134d-4d0c-9924-32d81a4f5aab-kube-api-access-f5l5g\") pod \"aws-ebs-csi-driver-node-hp58t\" (UID: \"ceb5f21d-134d-4d0c-9924-32d81a4f5aab\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-hp58t" Apr 23 17:52:26.830346 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.830219 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-7nkkq" Apr 23 17:52:26.837097 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.837069 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-t7tbj" Apr 23 17:52:26.844569 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.844550 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-hp58t" Apr 23 17:52:26.848097 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:26.848078 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" Apr 23 17:52:27.212958 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:27.212930 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65fafb1b_d34c_41dc_86f9_c06f2cf0487e.slice/crio-2ef1372c33489735213e9caceca0bad854cfe74fde72f42f695c996be1d05e50 WatchSource:0}: Error finding container 2ef1372c33489735213e9caceca0bad854cfe74fde72f42f695c996be1d05e50: Status 404 returned error can't find the container with id 2ef1372c33489735213e9caceca0bad854cfe74fde72f42f695c996be1d05e50 Apr 23 17:52:27.213865 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:27.213838 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d4343c2_343d_401d_99d8_75998e07c483.slice/crio-efe24c05950519b4d681d9cf4840b5d99172980919e661951956cff152697f71 WatchSource:0}: Error finding container efe24c05950519b4d681d9cf4840b5d99172980919e661951956cff152697f71: Status 404 returned error can't find the container with id efe24c05950519b4d681d9cf4840b5d99172980919e661951956cff152697f71 Apr 23 17:52:27.215722 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:27.215667 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podceb5f21d_134d_4d0c_9924_32d81a4f5aab.slice/crio-4b2ed2e6393c6dbdb121eeb208dfb21ab8c9d3126143008c89cc27b7172af35d WatchSource:0}: Error finding container 4b2ed2e6393c6dbdb121eeb208dfb21ab8c9d3126143008c89cc27b7172af35d: Status 404 returned error can't find the container with id 4b2ed2e6393c6dbdb121eeb208dfb21ab8c9d3126143008c89cc27b7172af35d Apr 23 17:52:27.216629 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:52:27.216605 2570 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded7906bd_d603_4e6e_8e63_369f394f24b0.slice/crio-6269ef1c3b6524714c9ee6a69c2020f3bef0a5065b0a8641ad4236407238ab49 WatchSource:0}: Error finding container 6269ef1c3b6524714c9ee6a69c2020f3bef0a5065b0a8641ad4236407238ab49: Status 404 returned error can't find the container with id 6269ef1c3b6524714c9ee6a69c2020f3bef0a5065b0a8641ad4236407238ab49 Apr 23 17:52:27.582959 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:27.582881 2570 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-22 17:47:25 +0000 UTC" deadline="2028-01-05 23:06:44.488986744 +0000 UTC" Apr 23 17:52:27.582959 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:27.582910 2570 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="14933h14m16.906079454s" Apr 23 17:52:27.682971 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:27.682916 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7nkkq" event={"ID":"65fafb1b-d34c-41dc-86f9-c06f2cf0487e","Type":"ContainerStarted","Data":"2ef1372c33489735213e9caceca0bad854cfe74fde72f42f695c996be1d05e50"} Apr 23 17:52:27.688207 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:27.688174 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-141-209.ec2.internal" event={"ID":"7d4522ab95c28bfad158f8d5f296881d","Type":"ContainerStarted","Data":"8c22de57ae53a4b871c240d356e1e72b02ebe0c0996ac896bb26649e2def0ae5"} Apr 23 17:52:27.690270 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:27.690187 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-t7tbj" event={"ID":"ed7906bd-d603-4e6e-8e63-369f394f24b0","Type":"ContainerStarted","Data":"6269ef1c3b6524714c9ee6a69c2020f3bef0a5065b0a8641ad4236407238ab49"} Apr 23 17:52:27.692926 ip-10-0-141-209 
kubenswrapper[2570]: I0423 17:52:27.692901 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-hp58t" event={"ID":"ceb5f21d-134d-4d0c-9924-32d81a4f5aab","Type":"ContainerStarted","Data":"4b2ed2e6393c6dbdb121eeb208dfb21ab8c9d3126143008c89cc27b7172af35d"} Apr 23 17:52:27.695947 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:27.695902 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" event={"ID":"2d4343c2-343d-401d-99d8-75998e07c483","Type":"ContainerStarted","Data":"efe24c05950519b4d681d9cf4840b5d99172980919e661951956cff152697f71"} Apr 23 17:52:28.698920 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:28.698882 2570 generic.go:358] "Generic (PLEG): container finished" podID="7e7d0ba399fc779164fa43f2003d1693" containerID="3f99836816d9c67d6e7edeccce18bd644ed2443c355209d1d0b332f2f2696686" exitCode=0 Apr 23 17:52:28.699328 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:28.698977 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-141-209.ec2.internal" event={"ID":"7e7d0ba399fc779164fa43f2003d1693","Type":"ContainerDied","Data":"3f99836816d9c67d6e7edeccce18bd644ed2443c355209d1d0b332f2f2696686"} Apr 23 17:52:28.712648 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:28.712604 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-proxy-ip-10-0-141-209.ec2.internal" podStartSLOduration=3.712592528 podStartE2EDuration="3.712592528s" podCreationTimestamp="2026-04-23 17:52:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:52:27.700991814 +0000 UTC m=+3.641005957" watchObservedRunningTime="2026-04-23 17:52:28.712592528 +0000 UTC m=+4.652606667" Apr 23 17:52:31.705142 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:31.704942 2570 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-t7tbj" event={"ID":"ed7906bd-d603-4e6e-8e63-369f394f24b0","Type":"ContainerStarted","Data":"c6c2d2476e7beafbbbc897b86a74066ed0410ef834f508ca0ee441cb410467a7"} Apr 23 17:52:31.709113 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:31.709076 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-hp58t" event={"ID":"ceb5f21d-134d-4d0c-9924-32d81a4f5aab","Type":"ContainerStarted","Data":"070a94c9de9cc951bbd06a970b39ea75bd0f009e97959cfb4022b16caa310924"} Apr 23 17:52:31.710481 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:31.710446 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" event={"ID":"2d4343c2-343d-401d-99d8-75998e07c483","Type":"ContainerStarted","Data":"536103b336bdecf24f5a3c6214cc3a9c0b53101bb88dce49fd510b85b19c7176"} Apr 23 17:52:31.711808 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:31.711785 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7nkkq" event={"ID":"65fafb1b-d34c-41dc-86f9-c06f2cf0487e","Type":"ContainerStarted","Data":"71ca868625a73ccd7517f415f279d488f6398ff15b4c096cd7cc878652d77232"} Apr 23 17:52:31.713490 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:31.713466 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-141-209.ec2.internal" event={"ID":"7e7d0ba399fc779164fa43f2003d1693","Type":"ContainerStarted","Data":"7cf0718f7fa540c181d3ef58e84a875fb067b6dc2bb189253cbb403591f2d2ca"} Apr 23 17:52:31.731838 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:31.731794 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/konnectivity-agent-t7tbj" podStartSLOduration=3.785555756 podStartE2EDuration="7.731777784s" podCreationTimestamp="2026-04-23 17:52:24 +0000 UTC" 
firstStartedPulling="2026-04-23 17:52:27.218663387 +0000 UTC m=+3.158677505" lastFinishedPulling="2026-04-23 17:52:31.164885409 +0000 UTC m=+7.104899533" observedRunningTime="2026-04-23 17:52:31.718429501 +0000 UTC m=+7.658443635" watchObservedRunningTime="2026-04-23 17:52:31.731777784 +0000 UTC m=+7.671791924" Apr 23 17:52:31.744992 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:31.744953 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-2w9gd" podStartSLOduration=3.732305128 podStartE2EDuration="7.744944113s" podCreationTimestamp="2026-04-23 17:52:24 +0000 UTC" firstStartedPulling="2026-04-23 17:52:27.216135787 +0000 UTC m=+3.156149918" lastFinishedPulling="2026-04-23 17:52:31.228774784 +0000 UTC m=+7.168788903" observedRunningTime="2026-04-23 17:52:31.731758743 +0000 UTC m=+7.671772887" watchObservedRunningTime="2026-04-23 17:52:31.744944113 +0000 UTC m=+7.684958254" Apr 23 17:52:31.745112 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:31.745083 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-141-209.ec2.internal" podStartSLOduration=6.74507654 podStartE2EDuration="6.74507654s" podCreationTimestamp="2026-04-23 17:52:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:52:31.744797805 +0000 UTC m=+7.684811946" watchObservedRunningTime="2026-04-23 17:52:31.74507654 +0000 UTC m=+7.685090681" Apr 23 17:52:31.759195 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:31.759153 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-7nkkq" podStartSLOduration=3.806742959 podStartE2EDuration="7.759142504s" podCreationTimestamp="2026-04-23 17:52:24 +0000 UTC" firstStartedPulling="2026-04-23 17:52:27.215509243 +0000 UTC m=+3.155523362" 
lastFinishedPulling="2026-04-23 17:52:31.167908772 +0000 UTC m=+7.107922907" observedRunningTime="2026-04-23 17:52:31.759018312 +0000 UTC m=+7.699032451" watchObservedRunningTime="2026-04-23 17:52:31.759142504 +0000 UTC m=+7.699156643" Apr 23 17:52:32.216463 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:32.216442 2570 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock" Apr 23 17:52:32.606916 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:32.606828 2570 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock","Timestamp":"2026-04-23T17:52:32.216458458Z","UUID":"488e0904-c4ec-448d-a744-d38fb3b3e27c","Handler":null,"Name":"","Endpoint":""} Apr 23 17:52:32.609048 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:32.609024 2570 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: ebs.csi.aws.com endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock versions: 1.0.0 Apr 23 17:52:32.609048 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:32.609051 2570 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: ebs.csi.aws.com at endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock Apr 23 17:52:32.717062 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:32.717015 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-hp58t" event={"ID":"ceb5f21d-134d-4d0c-9924-32d81a4f5aab","Type":"ContainerStarted","Data":"cc118dec41b1f59800aab6c9d55aec189fd9cc69e5580610e91dfc63fd2e4e9f"} Apr 23 17:52:33.658483 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:33.658459 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kube-system/konnectivity-agent-t7tbj" Apr 23 17:52:33.720625 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:33.720591 2570 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-hp58t" event={"ID":"ceb5f21d-134d-4d0c-9924-32d81a4f5aab","Type":"ContainerStarted","Data":"94ddf31210d011b87bc6d568e490e8e697b2452a8f8b88108f14efc7580a7bda"} Apr 23 17:52:33.735956 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:33.735907 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-hp58t" podStartSLOduration=3.929828116 podStartE2EDuration="9.735894203s" podCreationTimestamp="2026-04-23 17:52:24 +0000 UTC" firstStartedPulling="2026-04-23 17:52:27.218006004 +0000 UTC m=+3.158020138" lastFinishedPulling="2026-04-23 17:52:33.024072092 +0000 UTC m=+8.964086225" observedRunningTime="2026-04-23 17:52:33.735811724 +0000 UTC m=+9.675825864" watchObservedRunningTime="2026-04-23 17:52:33.735894203 +0000 UTC m=+9.675908342" Apr 23 17:52:33.736869 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:33.736848 2570 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/konnectivity-agent-t7tbj" Apr 23 17:52:33.737358 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:33.737339 2570 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/konnectivity-agent-t7tbj" Apr 23 17:52:34.722296 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:34.722262 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/konnectivity-agent-t7tbj" Apr 23 17:52:35.907290 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:35.907254 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kube-system/global-pull-secret-syncer-mlz2c"] Apr 23 17:52:35.910117 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:35.910093 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:52:35.910238 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:35.910173 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:52:36.001953 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:36.001923 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-original-pull-secret\") pod \"global-pull-secret-syncer-mlz2c\" (UID: \"90608d2c-b6cd-4fca-968e-9fc7cbf593f8\") " pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:52:36.001953 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:36.001956 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-kubelet-config\") pod \"global-pull-secret-syncer-mlz2c\" (UID: \"90608d2c-b6cd-4fca-968e-9fc7cbf593f8\") " pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:52:36.002119 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:36.001981 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-dbus\") pod \"global-pull-secret-syncer-mlz2c\" (UID: \"90608d2c-b6cd-4fca-968e-9fc7cbf593f8\") " pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:52:36.103155 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:36.103132 2570 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-dbus\") pod \"global-pull-secret-syncer-mlz2c\" (UID: \"90608d2c-b6cd-4fca-968e-9fc7cbf593f8\") " pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:52:36.103292 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:36.103164 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-original-pull-secret\") pod \"global-pull-secret-syncer-mlz2c\" (UID: \"90608d2c-b6cd-4fca-968e-9fc7cbf593f8\") " pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:52:36.103292 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:36.103186 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-kubelet-config\") pod \"global-pull-secret-syncer-mlz2c\" (UID: \"90608d2c-b6cd-4fca-968e-9fc7cbf593f8\") " pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:52:36.103385 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:36.103309 2570 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 23 17:52:36.103385 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:36.103341 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-kubelet-config\") pod \"global-pull-secret-syncer-mlz2c\" (UID: \"90608d2c-b6cd-4fca-968e-9fc7cbf593f8\") " pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:52:36.103385 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:36.103375 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-original-pull-secret podName:90608d2c-b6cd-4fca-968e-9fc7cbf593f8 nodeName:}" failed. 
No retries permitted until 2026-04-23 17:52:36.603355865 +0000 UTC m=+12.543369982 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-original-pull-secret") pod "global-pull-secret-syncer-mlz2c" (UID: "90608d2c-b6cd-4fca-968e-9fc7cbf593f8") : object "kube-system"/"original-pull-secret" not registered Apr 23 17:52:36.103539 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:36.103439 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-dbus\") pod \"global-pull-secret-syncer-mlz2c\" (UID: \"90608d2c-b6cd-4fca-968e-9fc7cbf593f8\") " pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:52:36.605208 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:36.605185 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-original-pull-secret\") pod \"global-pull-secret-syncer-mlz2c\" (UID: \"90608d2c-b6cd-4fca-968e-9fc7cbf593f8\") " pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:52:36.605332 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:36.605267 2570 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 23 17:52:36.605332 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:36.605310 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-original-pull-secret podName:90608d2c-b6cd-4fca-968e-9fc7cbf593f8 nodeName:}" failed. No retries permitted until 2026-04-23 17:52:37.605298192 +0000 UTC m=+13.545312309 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-original-pull-secret") pod "global-pull-secret-syncer-mlz2c" (UID: "90608d2c-b6cd-4fca-968e-9fc7cbf593f8") : object "kube-system"/"original-pull-secret" not registered
Apr 23 17:52:37.613099 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:37.613059 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-original-pull-secret\") pod \"global-pull-secret-syncer-mlz2c\" (UID: \"90608d2c-b6cd-4fca-968e-9fc7cbf593f8\") " pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:52:37.613576 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:37.613190 2570 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 23 17:52:37.613576 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:37.613263 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-original-pull-secret podName:90608d2c-b6cd-4fca-968e-9fc7cbf593f8 nodeName:}" failed. No retries permitted until 2026-04-23 17:52:39.613241267 +0000 UTC m=+15.553255403 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-original-pull-secret") pod "global-pull-secret-syncer-mlz2c" (UID: "90608d2c-b6cd-4fca-968e-9fc7cbf593f8") : object "kube-system"/"original-pull-secret" not registered
Apr 23 17:52:37.672129 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:37.672100 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:52:37.672235 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:37.672208 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:52:39.626873 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:39.626842 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-original-pull-secret\") pod \"global-pull-secret-syncer-mlz2c\" (UID: \"90608d2c-b6cd-4fca-968e-9fc7cbf593f8\") " pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:52:39.627358 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:39.626948 2570 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 23 17:52:39.627358 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:39.627007 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-original-pull-secret podName:90608d2c-b6cd-4fca-968e-9fc7cbf593f8 nodeName:}" failed. No retries permitted until 2026-04-23 17:52:43.626985189 +0000 UTC m=+19.566999307 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-original-pull-secret") pod "global-pull-secret-syncer-mlz2c" (UID: "90608d2c-b6cd-4fca-968e-9fc7cbf593f8") : object "kube-system"/"original-pull-secret" not registered
Apr 23 17:52:39.672116 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:39.672084 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:52:39.672215 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:39.672183 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:52:41.671926 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:41.671893 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:52:41.672444 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:41.672032 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:52:43.657034 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:43.657000 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-original-pull-secret\") pod \"global-pull-secret-syncer-mlz2c\" (UID: \"90608d2c-b6cd-4fca-968e-9fc7cbf593f8\") " pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:52:43.657434 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:43.657106 2570 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 23 17:52:43.657434 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:43.657161 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-original-pull-secret podName:90608d2c-b6cd-4fca-968e-9fc7cbf593f8 nodeName:}" failed. No retries permitted until 2026-04-23 17:52:51.657143289 +0000 UTC m=+27.597157410 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-original-pull-secret") pod "global-pull-secret-syncer-mlz2c" (UID: "90608d2c-b6cd-4fca-968e-9fc7cbf593f8") : object "kube-system"/"original-pull-secret" not registered
Apr 23 17:52:43.672526 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:43.672502 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:52:43.672612 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:43.672593 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:52:45.672539 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:45.672504 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:52:45.673008 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:45.672602 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:52:47.672074 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:47.672042 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:52:47.672516 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:47.672142 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:52:49.672404 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:49.672374 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:52:49.672858 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:49.672495 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:52:51.672016 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:51.671984 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:52:51.672385 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:51.672088 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:52:51.709774 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:51.709747 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-original-pull-secret\") pod \"global-pull-secret-syncer-mlz2c\" (UID: \"90608d2c-b6cd-4fca-968e-9fc7cbf593f8\") " pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:52:51.709853 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:51.709839 2570 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 23 17:52:51.709899 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:51.709888 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-original-pull-secret podName:90608d2c-b6cd-4fca-968e-9fc7cbf593f8 nodeName:}" failed. No retries permitted until 2026-04-23 17:53:07.70987357 +0000 UTC m=+43.649887688 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-original-pull-secret") pod "global-pull-secret-syncer-mlz2c" (UID: "90608d2c-b6cd-4fca-968e-9fc7cbf593f8") : object "kube-system"/"original-pull-secret" not registered
Apr 23 17:52:53.671880 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:53.671851 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:52:53.672240 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:53.671953 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:52:55.672789 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:55.672751 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:52:55.673208 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:55.672857 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:52:57.672075 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:57.672033 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:52:57.672608 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:57.672146 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:52:59.672330 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:52:59.672300 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:52:59.672735 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:52:59.672393 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:53:01.672434 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:53:01.672395 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:53:01.672845 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:53:01.672514 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:53:03.672020 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:53:03.671992 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:53:03.672373 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:53:03.672079 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:53:05.672045 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:53:05.672000 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:53:05.672601 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:53:05.672130 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:53:07.672096 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:53:07.672052 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:53:07.672546 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:53:07.672166 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:53:07.805038 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:53:07.805008 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-original-pull-secret\") pod \"global-pull-secret-syncer-mlz2c\" (UID: \"90608d2c-b6cd-4fca-968e-9fc7cbf593f8\") " pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:53:07.805187 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:53:07.805120 2570 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 23 17:53:07.805187 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:53:07.805172 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-original-pull-secret podName:90608d2c-b6cd-4fca-968e-9fc7cbf593f8 nodeName:}" failed. No retries permitted until 2026-04-23 17:53:39.805155055 +0000 UTC m=+75.745169176 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-original-pull-secret") pod "global-pull-secret-syncer-mlz2c" (UID: "90608d2c-b6cd-4fca-968e-9fc7cbf593f8") : object "kube-system"/"original-pull-secret" not registered
Apr 23 17:53:09.672615 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:53:09.672579 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:53:09.673094 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:53:09.672705 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:53:11.672917 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:53:11.672875 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:53:11.673329 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:53:11.672983 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:53:13.672743 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:53:13.672708 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:53:13.673138 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:53:13.672806 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:53:15.672497 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:53:15.672457 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:53:15.672900 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:53:15.672583 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:53:17.672149 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:53:17.672118 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:53:17.672547 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:53:17.672215 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:53:19.672366 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:53:19.672331 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:53:19.672755 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:53:19.672444 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:53:21.672088 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:53:21.672056 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:53:21.672517 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:53:21.672155 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:53:23.672334 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:53:23.672292 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:53:23.672734 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:53:23.672396 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:53:25.672300 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:53:25.672269 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:53:25.672695 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:53:25.672367 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:53:27.672045 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:53:27.671832 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:53:27.672492 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:53:27.672113 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:53:29.672394 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:53:29.672359 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:53:29.672854 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:53:29.672470 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:53:31.672854 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:53:31.672819 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:53:31.673231 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:53:31.672924 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:53:33.672760 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:53:33.672730 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:53:33.673128 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:53:33.672818 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:53:35.672755 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:53:35.672711 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:53:35.673178 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:53:35.672822 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:53:37.672829 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:53:37.672794 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:53:37.673242 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:53:37.672896 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:53:39.672116 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:53:39.672077 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:53:39.672652 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:53:39.672181 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:53:39.816535 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:53:39.816503 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-original-pull-secret\") pod \"global-pull-secret-syncer-mlz2c\" (UID: \"90608d2c-b6cd-4fca-968e-9fc7cbf593f8\") " pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:53:39.816681 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:53:39.816661 2570 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 23 17:53:39.816753 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:53:39.816744 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-original-pull-secret podName:90608d2c-b6cd-4fca-968e-9fc7cbf593f8 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:43.816719255 +0000 UTC m=+139.756733414 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-original-pull-secret") pod "global-pull-secret-syncer-mlz2c" (UID: "90608d2c-b6cd-4fca-968e-9fc7cbf593f8") : object "kube-system"/"original-pull-secret" not registered
Apr 23 17:53:41.672514 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:53:41.672477 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:53:41.672912 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:53:41.672579 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:53:43.672877 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:53:43.672772 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:53:43.672877 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:53:43.672858 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:53:45.672320 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:53:45.672281 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:53:45.672822 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:53:45.672460 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:53:47.672636 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:53:47.672602 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:53:47.673061 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:53:47.672694 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:53:49.672352 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:53:49.672315 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:53:49.672765 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:53:49.672441 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:53:51.672296 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:53:51.672265 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:53:51.672713 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:53:51.672368 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:53:53.672061 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:53:53.672021 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:53:53.672449 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:53:53.672124 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:53:55.672552 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:53:55.672515 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:53:55.672976 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:53:55.672618 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:53:57.672141 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:53:57.672100 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:53:57.672567 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:53:57.672206 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:53:59.672147 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:53:59.672116 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:53:59.672576 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:53:59.672217 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:54:01.672775 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:54:01.672732 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:54:01.673242 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:01.672842 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:54:03.672753 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:54:03.672722 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:54:03.673089 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:03.672821 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:54:05.672843 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:54:05.672809 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:54:05.673246 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:05.672909 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8"
Apr 23 17:54:07.672750 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:54:07.672704 2570 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:54:07.673174 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:07.672813 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:54:09.672525 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:54:09.672494 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:54:09.672920 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:09.672597 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:54:11.672237 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:54:11.672201 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:54:11.672677 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:11.672315 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:54:13.672091 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:54:13.672056 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:54:13.672597 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:13.672152 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:54:15.672014 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:54:15.671974 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:54:15.672411 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:15.672082 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:54:17.672059 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:54:17.672026 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:54:17.672497 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:17.672126 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:54:19.672133 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:54:19.672104 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:54:19.672596 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:19.672195 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:54:21.672332 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:54:21.672302 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:54:21.672737 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:21.672398 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:54:23.672218 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:54:23.672177 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:54:23.672681 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:23.672302 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:54:24.540391 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:24.540362 2570 kubelet_node_status.go:509] "Node not becoming ready in time after startup" Apr 23 17:54:24.620859 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:24.620829 2570 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 23 17:54:25.672190 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:54:25.672163 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:54:25.672601 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:25.672276 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:54:27.672872 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:54:27.672826 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:54:27.673294 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:27.672944 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:54:29.621303 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:29.621236 2570 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 23 17:54:29.672889 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:54:29.672866 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:54:29.673002 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:29.672980 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:54:31.672542 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:54:31.672498 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:54:31.672980 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:31.672644 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:54:33.672403 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:54:33.672372 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:54:33.672745 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:33.672489 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:54:34.621952 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:34.621912 2570 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 23 17:54:35.672561 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:54:35.672529 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:54:35.672951 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:35.672638 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:54:37.672318 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:54:37.672276 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:54:37.672760 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:37.672391 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:54:39.622896 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:39.622856 2570 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 23 17:54:39.672233 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:54:39.672213 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:54:39.672341 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:39.672320 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:54:41.672805 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:54:41.672773 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:54:41.673192 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:41.672881 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:54:43.672896 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:54:43.672861 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:54:43.673282 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:43.672977 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:54:43.909117 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:54:43.909091 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-original-pull-secret\") pod \"global-pull-secret-syncer-mlz2c\" (UID: \"90608d2c-b6cd-4fca-968e-9fc7cbf593f8\") " pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:54:43.909250 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:43.909183 2570 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 23 17:54:43.909250 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:43.909233 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-original-pull-secret podName:90608d2c-b6cd-4fca-968e-9fc7cbf593f8 nodeName:}" failed. No retries permitted until 2026-04-23 17:56:45.909219068 +0000 UTC m=+261.849233186 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-original-pull-secret") pod "global-pull-secret-syncer-mlz2c" (UID: "90608d2c-b6cd-4fca-968e-9fc7cbf593f8") : object "kube-system"/"original-pull-secret" not registered Apr 23 17:54:44.623403 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:44.623366 2570 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 23 17:54:45.672251 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:54:45.672222 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:54:45.672641 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:45.672321 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:54:47.672312 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:54:47.672270 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:54:47.672720 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:47.672379 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:54:49.624076 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:49.624037 2570 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 23 17:54:49.672106 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:54:49.672074 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:54:49.672226 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:49.672202 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:54:51.672325 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:54:51.672283 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:54:51.672897 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:51.672447 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:54:53.672614 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:54:53.672575 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:54:53.672997 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:53.672698 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:54:54.625379 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:54.625347 2570 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 23 17:54:55.672045 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:54:55.672006 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:54:55.672477 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:55.672125 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:54:57.672469 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:54:57.672431 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:54:57.672896 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:57.672551 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:54:59.626694 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:59.626654 2570 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 23 17:54:59.672909 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:54:59.672881 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:54:59.673008 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:54:59.672989 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:55:01.672019 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:01.671975 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:55:01.672512 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:01.672273 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:55:02.821127 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:02.821091 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-8pvs8"] Apr 23 17:55:02.823705 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:02.823682 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-8pvs8" Apr 23 17:55:02.830796 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:02.830773 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Apr 23 17:55:02.830989 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:02.830956 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Apr 23 17:55:02.831064 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:02.831010 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Apr 23 17:55:02.831492 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:02.831468 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-4kn7n\"" Apr 23 17:55:02.831599 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:02.831492 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Apr 23 17:55:02.904924 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:02.904901 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-system-cni-dir\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:02.905037 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:02.904930 2570 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-multus-conf-dir\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:02.905037 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:02.904969 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j8sj\" (UniqueName: \"kubernetes.io/projected/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-kube-api-access-2j8sj\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:02.905037 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:02.905013 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-os-release\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:02.905150 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:02.905055 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-host-run-k8s-cni-cncf-io\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:02.905150 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:02.905078 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-host-run-multus-certs\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:02.905150 
ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:02.905106 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-etc-kubernetes\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:02.905150 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:02.905143 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-cnibin\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:02.905285 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:02.905162 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-host-run-netns\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:02.905285 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:02.905185 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-multus-socket-dir-parent\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:02.905285 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:02.905200 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-host-var-lib-kubelet\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " 
pod="openshift-multus/multus-8pvs8" Apr 23 17:55:02.905285 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:02.905230 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-host-var-lib-cni-bin\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:02.905285 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:02.905252 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-hostroot\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:02.905285 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:02.905269 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-multus-daemon-config\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:02.905502 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:02.905286 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-multus-cni-dir\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:02.905502 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:02.905304 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-cni-binary-copy\") pod \"multus-8pvs8\" (UID: 
\"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:02.905502 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:02.905331 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-host-var-lib-cni-multus\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.006253 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.006223 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-multus-daemon-config\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.006366 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.006265 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-multus-cni-dir\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.006366 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.006288 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-cni-binary-copy\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.006366 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.006313 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-host-var-lib-cni-multus\") pod \"multus-8pvs8\" (UID: 
\"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.006366 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.006342 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-system-cni-dir\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.006577 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.006396 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-multus-cni-dir\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.006577 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.006402 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-system-cni-dir\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.006577 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.006431 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-host-var-lib-cni-multus\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.006577 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.006473 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-multus-conf-dir\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 
17:55:03.006577 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.006528 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2j8sj\" (UniqueName: \"kubernetes.io/projected/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-kube-api-access-2j8sj\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.006577 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.006550 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-multus-conf-dir\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.006577 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.006555 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-os-release\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.006889 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.006610 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-host-run-k8s-cni-cncf-io\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.006889 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.006623 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-os-release\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.006889 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.006638 2570 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-host-run-multus-certs\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.006889 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.006668 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-etc-kubernetes\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.006889 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.006702 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-host-run-multus-certs\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.006889 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.006703 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-cnibin\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.006889 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.006723 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-etc-kubernetes\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.006889 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.006672 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-host-run-k8s-cni-cncf-io\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.006889 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.006744 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-host-run-netns\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.006889 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.006755 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-cnibin\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.006889 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.006775 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-multus-socket-dir-parent\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.006889 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.006797 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-host-var-lib-kubelet\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.006889 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.006802 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-host-run-netns\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.006889 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.006827 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-host-var-lib-cni-bin\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.006889 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.006847 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-hostroot\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.006889 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.006867 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-multus-socket-dir-parent\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.006889 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.006884 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-host-var-lib-kubelet\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.006889 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.006900 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-multus-daemon-config\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.007537 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.006893 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-hostroot\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.007537 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.006949 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-host-var-lib-cni-bin\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.007537 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.007019 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-cni-binary-copy\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.019839 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.019811 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-w6774"] Apr 23 17:55:03.021157 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.021128 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j8sj\" (UniqueName: \"kubernetes.io/projected/60e98a1f-ca0f-4e88-883e-76057a6fcbe8-kube-api-access-2j8sj\") pod \"multus-8pvs8\" (UID: \"60e98a1f-ca0f-4e88-883e-76057a6fcbe8\") " pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.022492 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.022476 2570 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-w6774" Apr 23 17:55:03.024825 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.024801 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Apr 23 17:55:03.024934 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.024872 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-sv4gq\"" Apr 23 17:55:03.024934 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.024887 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Apr 23 17:55:03.107993 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.107938 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/199c71c7-b484-42bc-b6a9-7f390ddb6768-system-cni-dir\") pod \"multus-additional-cni-plugins-w6774\" (UID: \"199c71c7-b484-42bc-b6a9-7f390ddb6768\") " pod="openshift-multus/multus-additional-cni-plugins-w6774" Apr 23 17:55:03.107993 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.107969 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/199c71c7-b484-42bc-b6a9-7f390ddb6768-cnibin\") pod \"multus-additional-cni-plugins-w6774\" (UID: \"199c71c7-b484-42bc-b6a9-7f390ddb6768\") " pod="openshift-multus/multus-additional-cni-plugins-w6774" Apr 23 17:55:03.107993 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.107986 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/199c71c7-b484-42bc-b6a9-7f390ddb6768-os-release\") pod 
\"multus-additional-cni-plugins-w6774\" (UID: \"199c71c7-b484-42bc-b6a9-7f390ddb6768\") " pod="openshift-multus/multus-additional-cni-plugins-w6774" Apr 23 17:55:03.108155 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.108005 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9ccp\" (UniqueName: \"kubernetes.io/projected/199c71c7-b484-42bc-b6a9-7f390ddb6768-kube-api-access-m9ccp\") pod \"multus-additional-cni-plugins-w6774\" (UID: \"199c71c7-b484-42bc-b6a9-7f390ddb6768\") " pod="openshift-multus/multus-additional-cni-plugins-w6774" Apr 23 17:55:03.108155 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.108026 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/199c71c7-b484-42bc-b6a9-7f390ddb6768-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-w6774\" (UID: \"199c71c7-b484-42bc-b6a9-7f390ddb6768\") " pod="openshift-multus/multus-additional-cni-plugins-w6774" Apr 23 17:55:03.108155 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.108101 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/199c71c7-b484-42bc-b6a9-7f390ddb6768-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-w6774\" (UID: \"199c71c7-b484-42bc-b6a9-7f390ddb6768\") " pod="openshift-multus/multus-additional-cni-plugins-w6774" Apr 23 17:55:03.108155 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.108117 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/199c71c7-b484-42bc-b6a9-7f390ddb6768-cni-binary-copy\") pod \"multus-additional-cni-plugins-w6774\" (UID: \"199c71c7-b484-42bc-b6a9-7f390ddb6768\") " 
pod="openshift-multus/multus-additional-cni-plugins-w6774" Apr 23 17:55:03.108155 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.108133 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/199c71c7-b484-42bc-b6a9-7f390ddb6768-tuning-conf-dir\") pod \"multus-additional-cni-plugins-w6774\" (UID: \"199c71c7-b484-42bc-b6a9-7f390ddb6768\") " pod="openshift-multus/multus-additional-cni-plugins-w6774" Apr 23 17:55:03.132577 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.132556 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-8pvs8" Apr 23 17:55:03.141325 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:55:03.141301 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60e98a1f_ca0f_4e88_883e_76057a6fcbe8.slice/crio-3c595bff16fc1f5742dff59ca553753326e5fab389171d37121a913744df55e1 WatchSource:0}: Error finding container 3c595bff16fc1f5742dff59ca553753326e5fab389171d37121a913744df55e1: Status 404 returned error can't find the container with id 3c595bff16fc1f5742dff59ca553753326e5fab389171d37121a913744df55e1 Apr 23 17:55:03.208561 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.208539 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/199c71c7-b484-42bc-b6a9-7f390ddb6768-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-w6774\" (UID: \"199c71c7-b484-42bc-b6a9-7f390ddb6768\") " pod="openshift-multus/multus-additional-cni-plugins-w6774" Apr 23 17:55:03.208649 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.208565 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/199c71c7-b484-42bc-b6a9-7f390ddb6768-cni-binary-copy\") 
pod \"multus-additional-cni-plugins-w6774\" (UID: \"199c71c7-b484-42bc-b6a9-7f390ddb6768\") " pod="openshift-multus/multus-additional-cni-plugins-w6774" Apr 23 17:55:03.208649 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.208586 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/199c71c7-b484-42bc-b6a9-7f390ddb6768-tuning-conf-dir\") pod \"multus-additional-cni-plugins-w6774\" (UID: \"199c71c7-b484-42bc-b6a9-7f390ddb6768\") " pod="openshift-multus/multus-additional-cni-plugins-w6774" Apr 23 17:55:03.208649 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.208636 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/199c71c7-b484-42bc-b6a9-7f390ddb6768-system-cni-dir\") pod \"multus-additional-cni-plugins-w6774\" (UID: \"199c71c7-b484-42bc-b6a9-7f390ddb6768\") " pod="openshift-multus/multus-additional-cni-plugins-w6774" Apr 23 17:55:03.208767 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.208672 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/199c71c7-b484-42bc-b6a9-7f390ddb6768-cnibin\") pod \"multus-additional-cni-plugins-w6774\" (UID: \"199c71c7-b484-42bc-b6a9-7f390ddb6768\") " pod="openshift-multus/multus-additional-cni-plugins-w6774" Apr 23 17:55:03.208767 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.208696 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/199c71c7-b484-42bc-b6a9-7f390ddb6768-os-release\") pod \"multus-additional-cni-plugins-w6774\" (UID: \"199c71c7-b484-42bc-b6a9-7f390ddb6768\") " pod="openshift-multus/multus-additional-cni-plugins-w6774" Apr 23 17:55:03.208767 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.208721 2570 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"kube-api-access-m9ccp\" (UniqueName: \"kubernetes.io/projected/199c71c7-b484-42bc-b6a9-7f390ddb6768-kube-api-access-m9ccp\") pod \"multus-additional-cni-plugins-w6774\" (UID: \"199c71c7-b484-42bc-b6a9-7f390ddb6768\") " pod="openshift-multus/multus-additional-cni-plugins-w6774" Apr 23 17:55:03.208767 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.208742 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/199c71c7-b484-42bc-b6a9-7f390ddb6768-tuning-conf-dir\") pod \"multus-additional-cni-plugins-w6774\" (UID: \"199c71c7-b484-42bc-b6a9-7f390ddb6768\") " pod="openshift-multus/multus-additional-cni-plugins-w6774" Apr 23 17:55:03.208767 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.208747 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/199c71c7-b484-42bc-b6a9-7f390ddb6768-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-w6774\" (UID: \"199c71c7-b484-42bc-b6a9-7f390ddb6768\") " pod="openshift-multus/multus-additional-cni-plugins-w6774" Apr 23 17:55:03.208767 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.208736 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/199c71c7-b484-42bc-b6a9-7f390ddb6768-system-cni-dir\") pod \"multus-additional-cni-plugins-w6774\" (UID: \"199c71c7-b484-42bc-b6a9-7f390ddb6768\") " pod="openshift-multus/multus-additional-cni-plugins-w6774" Apr 23 17:55:03.209066 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.208812 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/199c71c7-b484-42bc-b6a9-7f390ddb6768-os-release\") pod \"multus-additional-cni-plugins-w6774\" (UID: \"199c71c7-b484-42bc-b6a9-7f390ddb6768\") " pod="openshift-multus/multus-additional-cni-plugins-w6774" Apr 23 
17:55:03.209066 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.208814 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/199c71c7-b484-42bc-b6a9-7f390ddb6768-cnibin\") pod \"multus-additional-cni-plugins-w6774\" (UID: \"199c71c7-b484-42bc-b6a9-7f390ddb6768\") " pod="openshift-multus/multus-additional-cni-plugins-w6774" Apr 23 17:55:03.209167 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.209071 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/199c71c7-b484-42bc-b6a9-7f390ddb6768-cni-binary-copy\") pod \"multus-additional-cni-plugins-w6774\" (UID: \"199c71c7-b484-42bc-b6a9-7f390ddb6768\") " pod="openshift-multus/multus-additional-cni-plugins-w6774" Apr 23 17:55:03.209167 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.209147 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/199c71c7-b484-42bc-b6a9-7f390ddb6768-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-w6774\" (UID: \"199c71c7-b484-42bc-b6a9-7f390ddb6768\") " pod="openshift-multus/multus-additional-cni-plugins-w6774" Apr 23 17:55:03.209357 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.209335 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/199c71c7-b484-42bc-b6a9-7f390ddb6768-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-w6774\" (UID: \"199c71c7-b484-42bc-b6a9-7f390ddb6768\") " pod="openshift-multus/multus-additional-cni-plugins-w6774" Apr 23 17:55:03.217024 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.217002 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9ccp\" (UniqueName: \"kubernetes.io/projected/199c71c7-b484-42bc-b6a9-7f390ddb6768-kube-api-access-m9ccp\") pod 
\"multus-additional-cni-plugins-w6774\" (UID: \"199c71c7-b484-42bc-b6a9-7f390ddb6768\") " pod="openshift-multus/multus-additional-cni-plugins-w6774" Apr 23 17:55:03.330519 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.330495 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-w6774" Apr 23 17:55:03.337865 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:55:03.337841 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod199c71c7_b484_42bc_b6a9_7f390ddb6768.slice/crio-7df79e121e55a16ed1ca6062aa6bf63db21722c80fccd9c821610ff763a91d5f WatchSource:0}: Error finding container 7df79e121e55a16ed1ca6062aa6bf63db21722c80fccd9c821610ff763a91d5f: Status 404 returned error can't find the container with id 7df79e121e55a16ed1ca6062aa6bf63db21722c80fccd9c821610ff763a91d5f Apr 23 17:55:03.672347 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.672313 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:55:03.672519 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:03.672464 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:55:03.785548 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.785518 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-757d7"] Apr 23 17:55:03.788498 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.788135 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:55:03.788498 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:03.788216 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-757d7" podUID="c339fa02-2445-42c2-b7ee-5388fb338129" Apr 23 17:55:03.913324 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.913267 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c339fa02-2445-42c2-b7ee-5388fb338129-metrics-certs\") pod \"network-metrics-daemon-757d7\" (UID: \"c339fa02-2445-42c2-b7ee-5388fb338129\") " pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:55:03.913766 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.913342 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kcx4\" (UniqueName: \"kubernetes.io/projected/c339fa02-2445-42c2-b7ee-5388fb338129-kube-api-access-5kcx4\") pod \"network-metrics-daemon-757d7\" (UID: \"c339fa02-2445-42c2-b7ee-5388fb338129\") " pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:55:03.924643 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.924565 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w6774" event={"ID":"199c71c7-b484-42bc-b6a9-7f390ddb6768","Type":"ContainerStarted","Data":"7df79e121e55a16ed1ca6062aa6bf63db21722c80fccd9c821610ff763a91d5f"} Apr 23 17:55:03.925773 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:03.925744 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8pvs8" 
event={"ID":"60e98a1f-ca0f-4e88-883e-76057a6fcbe8","Type":"ContainerStarted","Data":"3c595bff16fc1f5742dff59ca553753326e5fab389171d37121a913744df55e1"} Apr 23 17:55:04.013614 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:04.013586 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5kcx4\" (UniqueName: \"kubernetes.io/projected/c339fa02-2445-42c2-b7ee-5388fb338129-kube-api-access-5kcx4\") pod \"network-metrics-daemon-757d7\" (UID: \"c339fa02-2445-42c2-b7ee-5388fb338129\") " pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:55:04.013746 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:04.013660 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c339fa02-2445-42c2-b7ee-5388fb338129-metrics-certs\") pod \"network-metrics-daemon-757d7\" (UID: \"c339fa02-2445-42c2-b7ee-5388fb338129\") " pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:55:04.013805 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:04.013777 2570 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:55:04.013859 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:04.013831 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c339fa02-2445-42c2-b7ee-5388fb338129-metrics-certs podName:c339fa02-2445-42c2-b7ee-5388fb338129 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:04.513815857 +0000 UTC m=+160.453829975 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c339fa02-2445-42c2-b7ee-5388fb338129-metrics-certs") pod "network-metrics-daemon-757d7" (UID: "c339fa02-2445-42c2-b7ee-5388fb338129") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:55:04.023881 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:04.023793 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kcx4\" (UniqueName: \"kubernetes.io/projected/c339fa02-2445-42c2-b7ee-5388fb338129-kube-api-access-5kcx4\") pod \"network-metrics-daemon-757d7\" (UID: \"c339fa02-2445-42c2-b7ee-5388fb338129\") " pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:55:04.516874 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:04.516832 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c339fa02-2445-42c2-b7ee-5388fb338129-metrics-certs\") pod \"network-metrics-daemon-757d7\" (UID: \"c339fa02-2445-42c2-b7ee-5388fb338129\") " pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:55:04.517077 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:04.516988 2570 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:55:04.517077 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:04.517056 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c339fa02-2445-42c2-b7ee-5388fb338129-metrics-certs podName:c339fa02-2445-42c2-b7ee-5388fb338129 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:05.517037565 +0000 UTC m=+161.457051683 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c339fa02-2445-42c2-b7ee-5388fb338129-metrics-certs") pod "network-metrics-daemon-757d7" (UID: "c339fa02-2445-42c2-b7ee-5388fb338129") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:55:04.627135 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:04.627085 2570 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 23 17:55:05.522886 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:05.522854 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c339fa02-2445-42c2-b7ee-5388fb338129-metrics-certs\") pod \"network-metrics-daemon-757d7\" (UID: \"c339fa02-2445-42c2-b7ee-5388fb338129\") " pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:55:05.523293 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:05.522957 2570 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:55:05.523293 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:05.523023 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c339fa02-2445-42c2-b7ee-5388fb338129-metrics-certs podName:c339fa02-2445-42c2-b7ee-5388fb338129 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:07.523007624 +0000 UTC m=+163.463021741 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c339fa02-2445-42c2-b7ee-5388fb338129-metrics-certs") pod "network-metrics-daemon-757d7" (UID: "c339fa02-2445-42c2-b7ee-5388fb338129") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:55:05.672541 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:05.672508 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:55:05.672694 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:05.672510 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:55:05.672694 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:05.672627 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:55:05.672796 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:05.672724 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-757d7" podUID="c339fa02-2445-42c2-b7ee-5388fb338129" Apr 23 17:55:05.931724 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:05.931628 2570 generic.go:358] "Generic (PLEG): container finished" podID="199c71c7-b484-42bc-b6a9-7f390ddb6768" containerID="5a2ec59249e0cc8aeb1c178798dfc9b76fd18f731f8c0b42b3edcc89dd48c0ce" exitCode=0 Apr 23 17:55:05.931724 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:05.931665 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w6774" event={"ID":"199c71c7-b484-42bc-b6a9-7f390ddb6768","Type":"ContainerDied","Data":"5a2ec59249e0cc8aeb1c178798dfc9b76fd18f731f8c0b42b3edcc89dd48c0ce"} Apr 23 17:55:07.531547 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:07.531497 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c339fa02-2445-42c2-b7ee-5388fb338129-metrics-certs\") pod \"network-metrics-daemon-757d7\" (UID: \"c339fa02-2445-42c2-b7ee-5388fb338129\") " pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:55:07.532025 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:07.531635 2570 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:55:07.532025 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:07.531711 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c339fa02-2445-42c2-b7ee-5388fb338129-metrics-certs podName:c339fa02-2445-42c2-b7ee-5388fb338129 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:11.531688026 +0000 UTC m=+167.471702161 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c339fa02-2445-42c2-b7ee-5388fb338129-metrics-certs") pod "network-metrics-daemon-757d7" (UID: "c339fa02-2445-42c2-b7ee-5388fb338129") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:55:07.672377 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:07.672289 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:55:07.672603 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:07.672289 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:55:07.672603 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:07.672510 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:55:07.672603 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:07.672590 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-757d7" podUID="c339fa02-2445-42c2-b7ee-5388fb338129" Apr 23 17:55:09.628498 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:09.628453 2570 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Apr 23 17:55:09.671974 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:09.671944 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:55:09.672150 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:09.672132 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:55:09.672268 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:09.672240 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:55:09.672384 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:09.672345 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-757d7" podUID="c339fa02-2445-42c2-b7ee-5388fb338129" Apr 23 17:55:11.554117 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:11.554078 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c339fa02-2445-42c2-b7ee-5388fb338129-metrics-certs\") pod \"network-metrics-daemon-757d7\" (UID: \"c339fa02-2445-42c2-b7ee-5388fb338129\") " pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:55:11.554553 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:11.554223 2570 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:55:11.554553 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:11.554291 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c339fa02-2445-42c2-b7ee-5388fb338129-metrics-certs podName:c339fa02-2445-42c2-b7ee-5388fb338129 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:19.554276188 +0000 UTC m=+175.494290306 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c339fa02-2445-42c2-b7ee-5388fb338129-metrics-certs") pod "network-metrics-daemon-757d7" (UID: "c339fa02-2445-42c2-b7ee-5388fb338129") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:55:11.672312 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:11.672281 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:55:11.672492 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:11.672285 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:55:11.672492 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:11.672404 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:55:11.672613 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:11.672519 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-757d7" podUID="c339fa02-2445-42c2-b7ee-5388fb338129" Apr 23 17:55:13.617334 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.617306 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6k42s"] Apr 23 17:55:13.620965 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.620948 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.623316 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.623292 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Apr 23 17:55:13.623316 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.623309 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Apr 23 17:55:13.623585 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.623568 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Apr 23 17:55:13.623641 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.623567 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Apr 23 17:55:13.623641 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.623595 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Apr 23 17:55:13.623985 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.623967 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-kznb8\"" Apr 23 17:55:13.624088 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.623995 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Apr 23 17:55:13.672411 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.672391 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:55:13.672507 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.672391 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:55:13.672549 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:13.672505 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-757d7" podUID="c339fa02-2445-42c2-b7ee-5388fb338129" Apr 23 17:55:13.672589 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:13.672571 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:55:13.768854 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.768836 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-var-lib-openvswitch\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.768914 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.768864 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-etc-openvswitch\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.768914 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.768882 2570 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-run-ovn-kubernetes\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.768914 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.768902 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-cni-bin\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.769012 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.768949 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-log-socket\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.769012 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.768982 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.769012 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.769003 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-run-openvswitch\") pod \"ovnkube-node-6k42s\" (UID: 
\"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.769099 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.769046 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-node-log\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.769099 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.769072 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e3ff089f-6e59-4e40-9963-8d8eee22970b-ovn-node-metrics-cert\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.769161 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.769105 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-run-netns\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.769161 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.769122 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-run-systemd\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.769161 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.769152 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-slash\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.769244 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.769189 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-kubelet\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.769244 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.769204 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-systemd-units\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.769244 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.769232 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-run-ovn\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.769332 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.769253 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l47cj\" (UniqueName: \"kubernetes.io/projected/e3ff089f-6e59-4e40-9963-8d8eee22970b-kube-api-access-l47cj\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.769332 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.769271 2570 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e3ff089f-6e59-4e40-9963-8d8eee22970b-ovnkube-config\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.769332 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.769306 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e3ff089f-6e59-4e40-9963-8d8eee22970b-ovnkube-script-lib\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.769449 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.769334 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-cni-netd\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.769449 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.769359 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e3ff089f-6e59-4e40-9963-8d8eee22970b-env-overrides\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.870215 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870152 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e3ff089f-6e59-4e40-9963-8d8eee22970b-env-overrides\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 
17:55:13.870215 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870192 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-var-lib-openvswitch\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.870215 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870209 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-etc-openvswitch\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.870215 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870224 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-run-ovn-kubernetes\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.870499 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870244 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-cni-bin\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.870499 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870261 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-log-socket\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.870499 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870274 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-var-lib-openvswitch\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.870499 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870293 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.870499 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870298 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-etc-openvswitch\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.870499 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870302 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-run-ovn-kubernetes\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.870499 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870329 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-cni-bin\") pod 
\"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.870499 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870343 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-log-socket\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.870499 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870339 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.870499 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870391 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-run-openvswitch\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.870499 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870446 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-node-log\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.870499 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870471 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/e3ff089f-6e59-4e40-9963-8d8eee22970b-ovn-node-metrics-cert\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.870499 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870501 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-run-netns\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.871054 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870514 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-node-log\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.871054 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870505 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-run-openvswitch\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.871054 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870528 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-run-systemd\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.871054 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870570 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-run-netns\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.871054 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870570 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-run-systemd\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.871054 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870594 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-slash\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.871054 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870626 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-kubelet\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.871054 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870652 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-systemd-units\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.871054 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870670 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-kubelet\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.871054 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870670 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-slash\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.871054 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870681 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-run-ovn\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.871054 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870698 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-systemd-units\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.871054 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870714 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l47cj\" (UniqueName: \"kubernetes.io/projected/e3ff089f-6e59-4e40-9963-8d8eee22970b-kube-api-access-l47cj\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.871054 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870724 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-run-ovn\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.871054 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870736 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e3ff089f-6e59-4e40-9963-8d8eee22970b-ovnkube-config\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.871054 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870755 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e3ff089f-6e59-4e40-9963-8d8eee22970b-ovnkube-script-lib\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.871054 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870772 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-cni-netd\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.871054 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.870843 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-cni-netd\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.871576 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.871148 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/e3ff089f-6e59-4e40-9963-8d8eee22970b-ovnkube-config\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.871576 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.871267 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e3ff089f-6e59-4e40-9963-8d8eee22970b-ovnkube-script-lib\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.871576 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.871324 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e3ff089f-6e59-4e40-9963-8d8eee22970b-env-overrides\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.873235 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.873216 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e3ff089f-6e59-4e40-9963-8d8eee22970b-ovn-node-metrics-cert\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.879743 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.879724 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l47cj\" (UniqueName: \"kubernetes.io/projected/e3ff089f-6e59-4e40-9963-8d8eee22970b-kube-api-access-l47cj\") pod \"ovnkube-node-6k42s\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.929608 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.929588 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:13.936634 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:55:13.936608 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3ff089f_6e59_4e40_9963_8d8eee22970b.slice/crio-077afafb9e40fd040af29d46e048f5cc8bf9efbcd6d83af0231c17e52b5c796e WatchSource:0}: Error finding container 077afafb9e40fd040af29d46e048f5cc8bf9efbcd6d83af0231c17e52b5c796e: Status 404 returned error can't find the container with id 077afafb9e40fd040af29d46e048f5cc8bf9efbcd6d83af0231c17e52b5c796e Apr 23 17:55:13.947024 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.947001 2570 generic.go:358] "Generic (PLEG): container finished" podID="199c71c7-b484-42bc-b6a9-7f390ddb6768" containerID="55f41c466f5e22dc29b84c7a88a94af7429f4558c0f8d7d0012384a73dbb0dca" exitCode=0 Apr 23 17:55:13.947120 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.947054 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w6774" event={"ID":"199c71c7-b484-42bc-b6a9-7f390ddb6768","Type":"ContainerDied","Data":"55f41c466f5e22dc29b84c7a88a94af7429f4558c0f8d7d0012384a73dbb0dca"} Apr 23 17:55:13.948351 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.948230 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8pvs8" event={"ID":"60e98a1f-ca0f-4e88-883e-76057a6fcbe8","Type":"ContainerStarted","Data":"e1b1cbeb621760a1ede9e32f68d227c8cdcc2a233aa96f1b067fb6a158c17ba8"} Apr 23 17:55:13.949240 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.949221 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" event={"ID":"e3ff089f-6e59-4e40-9963-8d8eee22970b","Type":"ContainerStarted","Data":"077afafb9e40fd040af29d46e048f5cc8bf9efbcd6d83af0231c17e52b5c796e"} Apr 23 17:55:13.978824 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:13.978782 
2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-8pvs8" podStartSLOduration=1.8417778089999999 podStartE2EDuration="11.978772794s" podCreationTimestamp="2026-04-23 17:55:02 +0000 UTC" firstStartedPulling="2026-04-23 17:55:03.142691208 +0000 UTC m=+159.082705325" lastFinishedPulling="2026-04-23 17:55:13.279686192 +0000 UTC m=+169.219700310" observedRunningTime="2026-04-23 17:55:13.978301253 +0000 UTC m=+169.918315391" watchObservedRunningTime="2026-04-23 17:55:13.978772794 +0000 UTC m=+169.918786933" Apr 23 17:55:14.629106 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:14.629066 2570 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 23 17:55:14.953143 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:14.952927 2570 generic.go:358] "Generic (PLEG): container finished" podID="199c71c7-b484-42bc-b6a9-7f390ddb6768" containerID="48dfda5a13839c4a9a4c67e7ab6717a692e51155034c3cca655a92aa991d222f" exitCode=0 Apr 23 17:55:14.953273 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:14.953048 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w6774" event={"ID":"199c71c7-b484-42bc-b6a9-7f390ddb6768","Type":"ContainerDied","Data":"48dfda5a13839c4a9a4c67e7ab6717a692e51155034c3cca655a92aa991d222f"} Apr 23 17:55:15.672312 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:15.672234 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:55:15.672774 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:15.672250 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:55:15.672774 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:15.672375 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-757d7" podUID="c339fa02-2445-42c2-b7ee-5388fb338129" Apr 23 17:55:15.672774 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:15.672456 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:55:16.597092 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:16.597062 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-jhssk"] Apr 23 17:55:16.599917 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:16.599890 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jhssk" Apr 23 17:55:16.600022 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:16.599991 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-jhssk" podUID="2618335a-7a85-4193-a71d-15eaab4cb7f1" Apr 23 17:55:16.687221 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:16.687193 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsq24\" (UniqueName: \"kubernetes.io/projected/2618335a-7a85-4193-a71d-15eaab4cb7f1-kube-api-access-gsq24\") pod \"network-check-target-jhssk\" (UID: \"2618335a-7a85-4193-a71d-15eaab4cb7f1\") " pod="openshift-network-diagnostics/network-check-target-jhssk" Apr 23 17:55:16.787924 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:16.787895 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gsq24\" (UniqueName: \"kubernetes.io/projected/2618335a-7a85-4193-a71d-15eaab4cb7f1-kube-api-access-gsq24\") pod \"network-check-target-jhssk\" (UID: \"2618335a-7a85-4193-a71d-15eaab4cb7f1\") " pod="openshift-network-diagnostics/network-check-target-jhssk" Apr 23 17:55:16.794592 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:16.794569 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 23 17:55:16.794700 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:16.794596 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 23 17:55:16.794700 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:16.794612 2570 projected.go:194] Error preparing data for projected volume kube-api-access-gsq24 for pod openshift-network-diagnostics/network-check-target-jhssk: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:55:16.794700 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:16.794683 2570 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2618335a-7a85-4193-a71d-15eaab4cb7f1-kube-api-access-gsq24 podName:2618335a-7a85-4193-a71d-15eaab4cb7f1 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:17.294663025 +0000 UTC m=+173.234677147 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gsq24" (UniqueName: "kubernetes.io/projected/2618335a-7a85-4193-a71d-15eaab4cb7f1-kube-api-access-gsq24") pod "network-check-target-jhssk" (UID: "2618335a-7a85-4193-a71d-15eaab4cb7f1") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:55:16.962426 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:16.962389 2570 generic.go:358] "Generic (PLEG): container finished" podID="199c71c7-b484-42bc-b6a9-7f390ddb6768" containerID="f0083fae9dac99cbc778458eb7edecfbfd3f6e5835f5a0c0dc43c6551dd82ee8" exitCode=0 Apr 23 17:55:16.962588 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:16.962455 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w6774" event={"ID":"199c71c7-b484-42bc-b6a9-7f390ddb6768","Type":"ContainerDied","Data":"f0083fae9dac99cbc778458eb7edecfbfd3f6e5835f5a0c0dc43c6551dd82ee8"} Apr 23 17:55:17.392126 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:17.392035 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gsq24\" (UniqueName: \"kubernetes.io/projected/2618335a-7a85-4193-a71d-15eaab4cb7f1-kube-api-access-gsq24\") pod \"network-check-target-jhssk\" (UID: \"2618335a-7a85-4193-a71d-15eaab4cb7f1\") " pod="openshift-network-diagnostics/network-check-target-jhssk" Apr 23 17:55:17.392277 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:17.392200 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 23 17:55:17.392277 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:17.392221 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 23 17:55:17.392277 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:17.392235 2570 projected.go:194] Error preparing data for projected volume kube-api-access-gsq24 for pod openshift-network-diagnostics/network-check-target-jhssk: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:55:17.392390 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:17.392310 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2618335a-7a85-4193-a71d-15eaab4cb7f1-kube-api-access-gsq24 podName:2618335a-7a85-4193-a71d-15eaab4cb7f1 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:18.392291517 +0000 UTC m=+174.332305642 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gsq24" (UniqueName: "kubernetes.io/projected/2618335a-7a85-4193-a71d-15eaab4cb7f1-kube-api-access-gsq24") pod "network-check-target-jhssk" (UID: "2618335a-7a85-4193-a71d-15eaab4cb7f1") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:55:17.671969 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:17.671942 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:55:17.672236 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:17.671942 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:55:17.672236 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:17.672080 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:55:17.672236 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:17.672142 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-757d7" podUID="c339fa02-2445-42c2-b7ee-5388fb338129" Apr 23 17:55:18.400239 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:18.400201 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gsq24\" (UniqueName: \"kubernetes.io/projected/2618335a-7a85-4193-a71d-15eaab4cb7f1-kube-api-access-gsq24\") pod \"network-check-target-jhssk\" (UID: \"2618335a-7a85-4193-a71d-15eaab4cb7f1\") " pod="openshift-network-diagnostics/network-check-target-jhssk" Apr 23 17:55:18.400652 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:18.400375 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 23 17:55:18.400652 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:18.400398 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 23 
17:55:18.400652 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:18.400428 2570 projected.go:194] Error preparing data for projected volume kube-api-access-gsq24 for pod openshift-network-diagnostics/network-check-target-jhssk: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:55:18.400652 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:18.400487 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2618335a-7a85-4193-a71d-15eaab4cb7f1-kube-api-access-gsq24 podName:2618335a-7a85-4193-a71d-15eaab4cb7f1 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:20.400468411 +0000 UTC m=+176.340482553 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gsq24" (UniqueName: "kubernetes.io/projected/2618335a-7a85-4193-a71d-15eaab4cb7f1-kube-api-access-gsq24") pod "network-check-target-jhssk" (UID: "2618335a-7a85-4193-a71d-15eaab4cb7f1") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:55:18.672694 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:18.672655 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jhssk" Apr 23 17:55:18.672856 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:18.672776 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-jhssk" podUID="2618335a-7a85-4193-a71d-15eaab4cb7f1" Apr 23 17:55:19.607885 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:19.607847 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c339fa02-2445-42c2-b7ee-5388fb338129-metrics-certs\") pod \"network-metrics-daemon-757d7\" (UID: \"c339fa02-2445-42c2-b7ee-5388fb338129\") " pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:55:19.608335 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:19.608010 2570 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:55:19.608335 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:19.608081 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c339fa02-2445-42c2-b7ee-5388fb338129-metrics-certs podName:c339fa02-2445-42c2-b7ee-5388fb338129 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:35.608066818 +0000 UTC m=+191.548080937 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c339fa02-2445-42c2-b7ee-5388fb338129-metrics-certs") pod "network-metrics-daemon-757d7" (UID: "c339fa02-2445-42c2-b7ee-5388fb338129") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:55:19.630499 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:19.630455 2570 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 23 17:55:19.672222 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:19.672141 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:55:19.672354 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:19.672273 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:55:19.672509 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:19.672141 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:55:19.672610 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:19.672593 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-757d7" podUID="c339fa02-2445-42c2-b7ee-5388fb338129" Apr 23 17:55:19.997200 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:19.997141 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-np628"] Apr 23 17:55:20.035530 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:20.035502 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-np628" Apr 23 17:55:20.038029 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:20.038006 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Apr 23 17:55:20.038135 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:20.038013 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Apr 23 17:55:20.038751 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:20.038733 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-dockercfg-d25k5\"" Apr 23 17:55:20.038921 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:20.038902 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Apr 23 17:55:20.211582 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:20.211556 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/28dae391-d055-4f84-b269-f0b74f3ff97c-host-slash\") pod \"iptables-alerter-np628\" (UID: \"28dae391-d055-4f84-b269-f0b74f3ff97c\") " pod="openshift-network-operator/iptables-alerter-np628" Apr 23 17:55:20.211712 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:20.211620 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm2mb\" (UniqueName: \"kubernetes.io/projected/28dae391-d055-4f84-b269-f0b74f3ff97c-kube-api-access-xm2mb\") pod \"iptables-alerter-np628\" (UID: \"28dae391-d055-4f84-b269-f0b74f3ff97c\") " pod="openshift-network-operator/iptables-alerter-np628" Apr 23 17:55:20.211712 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:20.211679 2570 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/28dae391-d055-4f84-b269-f0b74f3ff97c-iptables-alerter-script\") pod \"iptables-alerter-np628\" (UID: \"28dae391-d055-4f84-b269-f0b74f3ff97c\") " pod="openshift-network-operator/iptables-alerter-np628" Apr 23 17:55:20.312505 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:20.312441 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/28dae391-d055-4f84-b269-f0b74f3ff97c-iptables-alerter-script\") pod \"iptables-alerter-np628\" (UID: \"28dae391-d055-4f84-b269-f0b74f3ff97c\") " pod="openshift-network-operator/iptables-alerter-np628" Apr 23 17:55:20.312505 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:20.312498 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/28dae391-d055-4f84-b269-f0b74f3ff97c-host-slash\") pod \"iptables-alerter-np628\" (UID: \"28dae391-d055-4f84-b269-f0b74f3ff97c\") " pod="openshift-network-operator/iptables-alerter-np628" Apr 23 17:55:20.312668 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:20.312552 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xm2mb\" (UniqueName: \"kubernetes.io/projected/28dae391-d055-4f84-b269-f0b74f3ff97c-kube-api-access-xm2mb\") pod \"iptables-alerter-np628\" (UID: \"28dae391-d055-4f84-b269-f0b74f3ff97c\") " pod="openshift-network-operator/iptables-alerter-np628" Apr 23 17:55:20.312668 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:20.312562 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/28dae391-d055-4f84-b269-f0b74f3ff97c-host-slash\") pod \"iptables-alerter-np628\" (UID: \"28dae391-d055-4f84-b269-f0b74f3ff97c\") " pod="openshift-network-operator/iptables-alerter-np628" 
Apr 23 17:55:20.324409 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:20.324383 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/28dae391-d055-4f84-b269-f0b74f3ff97c-iptables-alerter-script\") pod \"iptables-alerter-np628\" (UID: \"28dae391-d055-4f84-b269-f0b74f3ff97c\") " pod="openshift-network-operator/iptables-alerter-np628" Apr 23 17:55:20.327584 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:20.327561 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm2mb\" (UniqueName: \"kubernetes.io/projected/28dae391-d055-4f84-b269-f0b74f3ff97c-kube-api-access-xm2mb\") pod \"iptables-alerter-np628\" (UID: \"28dae391-d055-4f84-b269-f0b74f3ff97c\") " pod="openshift-network-operator/iptables-alerter-np628" Apr 23 17:55:20.347395 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:20.347335 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-np628" Apr 23 17:55:20.361273 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:55:20.361244 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28dae391_d055_4f84_b269_f0b74f3ff97c.slice/crio-a89b38c05756331d55395895c7b405908a7d742973a29b17d58e58e5d03083f2 WatchSource:0}: Error finding container a89b38c05756331d55395895c7b405908a7d742973a29b17d58e58e5d03083f2: Status 404 returned error can't find the container with id a89b38c05756331d55395895c7b405908a7d742973a29b17d58e58e5d03083f2 Apr 23 17:55:20.413498 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:20.413463 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gsq24\" (UniqueName: \"kubernetes.io/projected/2618335a-7a85-4193-a71d-15eaab4cb7f1-kube-api-access-gsq24\") pod \"network-check-target-jhssk\" (UID: \"2618335a-7a85-4193-a71d-15eaab4cb7f1\") " 
pod="openshift-network-diagnostics/network-check-target-jhssk" Apr 23 17:55:20.413638 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:20.413620 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 23 17:55:20.413698 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:20.413644 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 23 17:55:20.413698 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:20.413659 2570 projected.go:194] Error preparing data for projected volume kube-api-access-gsq24 for pod openshift-network-diagnostics/network-check-target-jhssk: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:55:20.413795 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:20.413713 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2618335a-7a85-4193-a71d-15eaab4cb7f1-kube-api-access-gsq24 podName:2618335a-7a85-4193-a71d-15eaab4cb7f1 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:24.41369546 +0000 UTC m=+180.353709591 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gsq24" (UniqueName: "kubernetes.io/projected/2618335a-7a85-4193-a71d-15eaab4cb7f1-kube-api-access-gsq24") pod "network-check-target-jhssk" (UID: "2618335a-7a85-4193-a71d-15eaab4cb7f1") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:55:20.672360 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:20.672324 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jhssk" Apr 23 17:55:20.672804 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:20.672477 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jhssk" podUID="2618335a-7a85-4193-a71d-15eaab4cb7f1" Apr 23 17:55:20.970928 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:20.970839 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-np628" event={"ID":"28dae391-d055-4f84-b269-f0b74f3ff97c","Type":"ContainerStarted","Data":"a89b38c05756331d55395895c7b405908a7d742973a29b17d58e58e5d03083f2"} Apr 23 17:55:21.672669 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:21.672634 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:55:21.673174 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:21.672750 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:55:21.673174 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:21.672809 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:55:21.673174 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:21.672929 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-757d7" podUID="c339fa02-2445-42c2-b7ee-5388fb338129" Apr 23 17:55:22.675318 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:22.675283 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jhssk" Apr 23 17:55:22.675700 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:22.675378 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jhssk" podUID="2618335a-7a85-4193-a71d-15eaab4cb7f1" Apr 23 17:55:23.672650 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:23.672617 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:55:23.675636 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:23.673022 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:55:23.676077 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:23.675983 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:55:23.676181 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:23.676154 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-757d7" podUID="c339fa02-2445-42c2-b7ee-5388fb338129" Apr 23 17:55:24.435482 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:24.435441 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gsq24\" (UniqueName: \"kubernetes.io/projected/2618335a-7a85-4193-a71d-15eaab4cb7f1-kube-api-access-gsq24\") pod \"network-check-target-jhssk\" (UID: \"2618335a-7a85-4193-a71d-15eaab4cb7f1\") " pod="openshift-network-diagnostics/network-check-target-jhssk" Apr 23 17:55:24.435668 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:24.435581 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 23 17:55:24.435668 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:24.435596 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 23 17:55:24.435668 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:24.435605 2570 projected.go:194] Error preparing data for projected volume kube-api-access-gsq24 for pod openshift-network-diagnostics/network-check-target-jhssk: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:55:24.435668 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:24.435664 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2618335a-7a85-4193-a71d-15eaab4cb7f1-kube-api-access-gsq24 podName:2618335a-7a85-4193-a71d-15eaab4cb7f1 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:32.435645649 +0000 UTC m=+188.375659768 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gsq24" (UniqueName: "kubernetes.io/projected/2618335a-7a85-4193-a71d-15eaab4cb7f1-kube-api-access-gsq24") pod "network-check-target-jhssk" (UID: "2618335a-7a85-4193-a71d-15eaab4cb7f1") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:55:24.631850 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:24.631809 2570 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 23 17:55:24.673208 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:24.673181 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jhssk" Apr 23 17:55:24.673339 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:24.673286 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-jhssk" podUID="2618335a-7a85-4193-a71d-15eaab4cb7f1" Apr 23 17:55:25.672214 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:25.672147 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:55:25.672895 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:25.672154 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:55:25.672895 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:25.672334 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-757d7" podUID="c339fa02-2445-42c2-b7ee-5388fb338129" Apr 23 17:55:25.672895 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:25.672365 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:55:26.675219 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:26.675193 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jhssk" Apr 23 17:55:26.675639 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:26.675301 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jhssk" podUID="2618335a-7a85-4193-a71d-15eaab4cb7f1" Apr 23 17:55:27.672380 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:27.672351 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:55:27.672553 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:27.672351 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:55:27.672553 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:27.672488 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:55:27.672655 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:27.672580 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-757d7" podUID="c339fa02-2445-42c2-b7ee-5388fb338129" Apr 23 17:55:28.672120 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:28.671969 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jhssk" Apr 23 17:55:28.672600 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:28.672197 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jhssk" podUID="2618335a-7a85-4193-a71d-15eaab4cb7f1" Apr 23 17:55:28.986751 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:28.986694 2570 generic.go:358] "Generic (PLEG): container finished" podID="199c71c7-b484-42bc-b6a9-7f390ddb6768" containerID="24ee60ef7f1180795b4a7c2b22d9961dedbf43dee44d0ea78808aa6936ce9ab1" exitCode=0 Apr 23 17:55:28.986850 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:28.986744 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w6774" event={"ID":"199c71c7-b484-42bc-b6a9-7f390ddb6768","Type":"ContainerDied","Data":"24ee60ef7f1180795b4a7c2b22d9961dedbf43dee44d0ea78808aa6936ce9ab1"} Apr 23 17:55:28.989171 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:28.989151 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" event={"ID":"e3ff089f-6e59-4e40-9963-8d8eee22970b","Type":"ContainerStarted","Data":"203e134d4a04dcb3ff2574aa9588adbd4b5646f628773d4ab7e6805d4d9ea031"} Apr 23 17:55:28.989261 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:28.989175 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" 
event={"ID":"e3ff089f-6e59-4e40-9963-8d8eee22970b","Type":"ContainerStarted","Data":"046cae60a8f0044d09861431c72a3f82d39ed94b7af87bfdf84b3b9e4b0ef318"} Apr 23 17:55:28.989261 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:28.989186 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" event={"ID":"e3ff089f-6e59-4e40-9963-8d8eee22970b","Type":"ContainerStarted","Data":"b4dcd91e7c8d3fbd6d836034d2cdfdf5b7aaa2530c1722bd869737cb92a0a630"} Apr 23 17:55:28.989261 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:28.989194 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" event={"ID":"e3ff089f-6e59-4e40-9963-8d8eee22970b","Type":"ContainerStarted","Data":"0acd4ae9451ee3558f0e1b4fdfbf694a74a51a722e69992fdd76ebbe3518d54a"} Apr 23 17:55:28.989261 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:28.989203 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" event={"ID":"e3ff089f-6e59-4e40-9963-8d8eee22970b","Type":"ContainerStarted","Data":"02b9c93d70dfd4b984280fc31b6c5edcbf137f0cc78b08c2c3a639b982b26a3e"} Apr 23 17:55:28.989261 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:28.989213 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" event={"ID":"e3ff089f-6e59-4e40-9963-8d8eee22970b","Type":"ContainerStarted","Data":"826c9d9897f22ce5524a403eaab796f93b8592e6acaf783d8e0dde13595e3d07"} Apr 23 17:55:29.632773 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:29.632738 2570 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 23 17:55:29.672301 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:29.672268 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:55:29.672301 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:29.672285 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:55:29.672670 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:29.672378 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:55:29.672670 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:29.672466 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-757d7" podUID="c339fa02-2445-42c2-b7ee-5388fb338129" Apr 23 17:55:29.993869 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:29.993840 2570 generic.go:358] "Generic (PLEG): container finished" podID="199c71c7-b484-42bc-b6a9-7f390ddb6768" containerID="82da58090267d9c1a855b728887f4105a5c4cf6b51332601013aa943f1f05a57" exitCode=0 Apr 23 17:55:29.993984 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:29.993932 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w6774" event={"ID":"199c71c7-b484-42bc-b6a9-7f390ddb6768","Type":"ContainerDied","Data":"82da58090267d9c1a855b728887f4105a5c4cf6b51332601013aa943f1f05a57"} Apr 23 17:55:29.995457 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:29.995408 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-np628" event={"ID":"28dae391-d055-4f84-b269-f0b74f3ff97c","Type":"ContainerStarted","Data":"699dfe3b44c421b28eaed63ba9576c8070845770464da05f4a2a0020e60debc3"} Apr 23 17:55:30.025725 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:30.025676 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/iptables-alerter-np628" podStartSLOduration=2.936304856 podStartE2EDuration="11.025663535s" podCreationTimestamp="2026-04-23 17:55:19 +0000 UTC" firstStartedPulling="2026-04-23 17:55:20.370865555 +0000 UTC m=+176.310879672" lastFinishedPulling="2026-04-23 17:55:28.46022423 +0000 UTC m=+184.400238351" observedRunningTime="2026-04-23 17:55:30.025556108 +0000 UTC m=+185.965570247" watchObservedRunningTime="2026-04-23 17:55:30.025663535 +0000 UTC m=+185.965677659" Apr 23 17:55:30.672470 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:30.672441 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jhssk" Apr 23 17:55:30.672820 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:30.672546 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jhssk" podUID="2618335a-7a85-4193-a71d-15eaab4cb7f1" Apr 23 17:55:31.000171 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:31.000113 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w6774" event={"ID":"199c71c7-b484-42bc-b6a9-7f390ddb6768","Type":"ContainerStarted","Data":"c78b766cb8125e53e6056d2a894a3d74d02996eaacafe31246932a4d42428399"} Apr 23 17:55:31.002645 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:31.002616 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" event={"ID":"e3ff089f-6e59-4e40-9963-8d8eee22970b","Type":"ContainerStarted","Data":"6a098597eaa27af5490f3bff0be54436044b82311b0b4cc38750f2850b969537"} Apr 23 17:55:31.026212 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:31.026176 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-w6774" podStartSLOduration=3.119559434 podStartE2EDuration="28.026164935s" podCreationTimestamp="2026-04-23 17:55:03 +0000 UTC" firstStartedPulling="2026-04-23 17:55:03.33934487 +0000 UTC m=+159.279358989" lastFinishedPulling="2026-04-23 17:55:28.245950365 +0000 UTC m=+184.185964490" observedRunningTime="2026-04-23 17:55:31.025813722 +0000 UTC m=+186.965827862" watchObservedRunningTime="2026-04-23 17:55:31.026164935 +0000 UTC m=+186.966179074" Apr 23 17:55:31.672267 ip-10-0-141-209 kubenswrapper[2570]: I0423 
17:55:31.672116 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:55:31.672402 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:31.672175 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:55:31.672402 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:31.672346 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-757d7" podUID="c339fa02-2445-42c2-b7ee-5388fb338129" Apr 23 17:55:31.672759 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:31.672486 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:55:32.488971 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:32.488945 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gsq24\" (UniqueName: \"kubernetes.io/projected/2618335a-7a85-4193-a71d-15eaab4cb7f1-kube-api-access-gsq24\") pod \"network-check-target-jhssk\" (UID: \"2618335a-7a85-4193-a71d-15eaab4cb7f1\") " pod="openshift-network-diagnostics/network-check-target-jhssk" Apr 23 17:55:32.489127 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:32.489081 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 23 17:55:32.489127 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:32.489098 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 23 17:55:32.489127 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:32.489108 2570 projected.go:194] Error preparing data for projected volume kube-api-access-gsq24 for pod openshift-network-diagnostics/network-check-target-jhssk: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:55:32.489239 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:32.489161 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2618335a-7a85-4193-a71d-15eaab4cb7f1-kube-api-access-gsq24 podName:2618335a-7a85-4193-a71d-15eaab4cb7f1 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:48.489142592 +0000 UTC m=+204.429156712 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gsq24" (UniqueName: "kubernetes.io/projected/2618335a-7a85-4193-a71d-15eaab4cb7f1-kube-api-access-gsq24") pod "network-check-target-jhssk" (UID: "2618335a-7a85-4193-a71d-15eaab4cb7f1") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:55:32.672204 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:32.672175 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jhssk" Apr 23 17:55:32.672312 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:32.672259 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jhssk" podUID="2618335a-7a85-4193-a71d-15eaab4cb7f1" Apr 23 17:55:33.671985 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:33.671943 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:55:33.672771 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:33.671943 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:55:33.672771 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:33.672059 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-757d7" podUID="c339fa02-2445-42c2-b7ee-5388fb338129" Apr 23 17:55:33.672771 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:33.672158 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:55:34.010503 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:34.010437 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" event={"ID":"e3ff089f-6e59-4e40-9963-8d8eee22970b","Type":"ContainerStarted","Data":"5364ebe3d141a7524833c7ac857641c34319e151c33ad475f306544912eaca31"} Apr 23 17:55:34.010749 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:34.010719 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:34.024911 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:34.024886 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:34.034007 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:34.033973 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" podStartSLOduration=6.746429732 podStartE2EDuration="21.033958283s" podCreationTimestamp="2026-04-23 17:55:13 +0000 UTC" firstStartedPulling="2026-04-23 17:55:13.938380838 +0000 UTC m=+169.878394962" lastFinishedPulling="2026-04-23 17:55:28.225909392 +0000 UTC m=+184.165923513" observedRunningTime="2026-04-23 17:55:34.033448215 +0000 UTC m=+189.973462354" watchObservedRunningTime="2026-04-23 17:55:34.033958283 +0000 UTC m=+189.973972425" Apr 23 17:55:34.634182 
ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:34.634010 2570 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 23 17:55:34.673320 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:34.673291 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jhssk" Apr 23 17:55:34.673740 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:34.673396 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jhssk" podUID="2618335a-7a85-4193-a71d-15eaab4cb7f1" Apr 23 17:55:34.923790 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:34.923760 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-mlz2c"] Apr 23 17:55:34.923932 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:34.923889 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:55:34.923999 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:34.923979 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:55:34.924356 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:34.924330 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-757d7"] Apr 23 17:55:34.924462 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:34.924437 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:55:34.924535 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:34.924519 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-757d7" podUID="c339fa02-2445-42c2-b7ee-5388fb338129" Apr 23 17:55:34.924845 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:34.924827 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-jhssk"] Apr 23 17:55:35.012560 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:35.012540 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jhssk" Apr 23 17:55:35.012654 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:35.012619 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-jhssk" podUID="2618335a-7a85-4193-a71d-15eaab4cb7f1" Apr 23 17:55:35.013067 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:35.013051 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:35.013127 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:35.013080 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:35.028361 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:35.028334 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:35.708281 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:35.708249 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c339fa02-2445-42c2-b7ee-5388fb338129-metrics-certs\") pod \"network-metrics-daemon-757d7\" (UID: \"c339fa02-2445-42c2-b7ee-5388fb338129\") " pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:55:35.708658 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:35.708349 2570 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:55:35.708658 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:35.708399 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c339fa02-2445-42c2-b7ee-5388fb338129-metrics-certs podName:c339fa02-2445-42c2-b7ee-5388fb338129 nodeName:}" failed. No retries permitted until 2026-04-23 17:56:07.708385274 +0000 UTC m=+223.648399393 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c339fa02-2445-42c2-b7ee-5388fb338129-metrics-certs") pod "network-metrics-daemon-757d7" (UID: "c339fa02-2445-42c2-b7ee-5388fb338129") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:55:36.671924 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:36.671890 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jhssk" Apr 23 17:55:36.672095 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:36.671900 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:55:36.672095 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:36.672004 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jhssk" podUID="2618335a-7a85-4193-a71d-15eaab4cb7f1" Apr 23 17:55:36.672095 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:36.672027 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:55:36.672244 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:36.672128 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:55:36.672295 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:36.672265 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-757d7" podUID="c339fa02-2445-42c2-b7ee-5388fb338129" Apr 23 17:55:38.472111 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:38.472080 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6k42s"] Apr 23 17:55:38.472697 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:38.472585 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" podUID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerName="ovn-controller" containerID="cri-o://826c9d9897f22ce5524a403eaab796f93b8592e6acaf783d8e0dde13595e3d07" gracePeriod=30 Apr 23 17:55:38.472697 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:38.472650 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" podUID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerName="northd" containerID="cri-o://046cae60a8f0044d09861431c72a3f82d39ed94b7af87bfdf84b3b9e4b0ef318" gracePeriod=30 Apr 23 17:55:38.472904 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:38.472694 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" podUID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerName="ovn-acl-logging" containerID="cri-o://02b9c93d70dfd4b984280fc31b6c5edcbf137f0cc78b08c2c3a639b982b26a3e" gracePeriod=30 Apr 23 17:55:38.472904 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:38.472686 
2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" podUID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerName="kube-rbac-proxy-node" containerID="cri-o://0acd4ae9451ee3558f0e1b4fdfbf694a74a51a722e69992fdd76ebbe3518d54a" gracePeriod=30 Apr 23 17:55:38.472904 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:38.472662 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" podUID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://b4dcd91e7c8d3fbd6d836034d2cdfdf5b7aaa2530c1722bd869737cb92a0a630" gracePeriod=30 Apr 23 17:55:38.472904 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:38.472672 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" podUID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerName="sbdb" containerID="cri-o://6a098597eaa27af5490f3bff0be54436044b82311b0b4cc38750f2850b969537" gracePeriod=30 Apr 23 17:55:38.472904 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:38.472584 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" podUID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerName="nbdb" containerID="cri-o://203e134d4a04dcb3ff2574aa9588adbd4b5646f628773d4ab7e6805d4d9ea031" gracePeriod=30 Apr 23 17:55:38.488057 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:38.487740 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" podUID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerName="ovnkube-controller" probeResult="failure" output="" Apr 23 17:55:38.488197 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:38.488171 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" 
podUID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerName="ovnkube-controller" containerID="cri-o://5364ebe3d141a7524833c7ac857641c34319e151c33ad475f306544912eaca31" gracePeriod=30 Apr 23 17:55:38.672197 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:38.672163 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:55:38.672313 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:38.672163 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:55:38.672313 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:38.672295 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-757d7" podUID="c339fa02-2445-42c2-b7ee-5388fb338129" Apr 23 17:55:38.672436 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:38.672165 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jhssk" Apr 23 17:55:38.672436 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:38.672366 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:55:38.672527 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:38.672459 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jhssk" podUID="2618335a-7a85-4193-a71d-15eaab4cb7f1" Apr 23 17:55:39.022747 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.022724 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6k42s_e3ff089f-6e59-4e40-9963-8d8eee22970b/kube-rbac-proxy-ovn-metrics/0.log" Apr 23 17:55:39.023168 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.023147 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6k42s_e3ff089f-6e59-4e40-9963-8d8eee22970b/kube-rbac-proxy-node/0.log" Apr 23 17:55:39.023584 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.023571 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6k42s_e3ff089f-6e59-4e40-9963-8d8eee22970b/ovn-acl-logging/0.log" Apr 23 17:55:39.023987 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.023971 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6k42s_e3ff089f-6e59-4e40-9963-8d8eee22970b/ovn-controller/0.log" Apr 23 17:55:39.024054 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.024008 2570 generic.go:358] "Generic (PLEG): container finished" podID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerID="6a098597eaa27af5490f3bff0be54436044b82311b0b4cc38750f2850b969537" exitCode=0 Apr 23 17:55:39.024054 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.024024 2570 generic.go:358] "Generic (PLEG): 
container finished" podID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerID="203e134d4a04dcb3ff2574aa9588adbd4b5646f628773d4ab7e6805d4d9ea031" exitCode=0 Apr 23 17:55:39.024054 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.024031 2570 generic.go:358] "Generic (PLEG): container finished" podID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerID="046cae60a8f0044d09861431c72a3f82d39ed94b7af87bfdf84b3b9e4b0ef318" exitCode=0 Apr 23 17:55:39.024054 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.024036 2570 generic.go:358] "Generic (PLEG): container finished" podID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerID="b4dcd91e7c8d3fbd6d836034d2cdfdf5b7aaa2530c1722bd869737cb92a0a630" exitCode=143 Apr 23 17:55:39.024054 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.024042 2570 generic.go:358] "Generic (PLEG): container finished" podID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerID="0acd4ae9451ee3558f0e1b4fdfbf694a74a51a722e69992fdd76ebbe3518d54a" exitCode=143 Apr 23 17:55:39.024054 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.024048 2570 generic.go:358] "Generic (PLEG): container finished" podID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerID="02b9c93d70dfd4b984280fc31b6c5edcbf137f0cc78b08c2c3a639b982b26a3e" exitCode=143 Apr 23 17:55:39.024054 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.024053 2570 generic.go:358] "Generic (PLEG): container finished" podID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerID="826c9d9897f22ce5524a403eaab796f93b8592e6acaf783d8e0dde13595e3d07" exitCode=143 Apr 23 17:55:39.024251 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.024080 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" event={"ID":"e3ff089f-6e59-4e40-9963-8d8eee22970b","Type":"ContainerDied","Data":"6a098597eaa27af5490f3bff0be54436044b82311b0b4cc38750f2850b969537"} Apr 23 17:55:39.024251 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.024109 2570 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" event={"ID":"e3ff089f-6e59-4e40-9963-8d8eee22970b","Type":"ContainerDied","Data":"203e134d4a04dcb3ff2574aa9588adbd4b5646f628773d4ab7e6805d4d9ea031"} Apr 23 17:55:39.024251 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.024118 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" event={"ID":"e3ff089f-6e59-4e40-9963-8d8eee22970b","Type":"ContainerDied","Data":"046cae60a8f0044d09861431c72a3f82d39ed94b7af87bfdf84b3b9e4b0ef318"} Apr 23 17:55:39.024251 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.024129 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" event={"ID":"e3ff089f-6e59-4e40-9963-8d8eee22970b","Type":"ContainerDied","Data":"b4dcd91e7c8d3fbd6d836034d2cdfdf5b7aaa2530c1722bd869737cb92a0a630"} Apr 23 17:55:39.024251 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.024138 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" event={"ID":"e3ff089f-6e59-4e40-9963-8d8eee22970b","Type":"ContainerDied","Data":"0acd4ae9451ee3558f0e1b4fdfbf694a74a51a722e69992fdd76ebbe3518d54a"} Apr 23 17:55:39.024251 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.024148 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" event={"ID":"e3ff089f-6e59-4e40-9963-8d8eee22970b","Type":"ContainerDied","Data":"02b9c93d70dfd4b984280fc31b6c5edcbf137f0cc78b08c2c3a639b982b26a3e"} Apr 23 17:55:39.024251 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.024156 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" event={"ID":"e3ff089f-6e59-4e40-9963-8d8eee22970b","Type":"ContainerDied","Data":"826c9d9897f22ce5524a403eaab796f93b8592e6acaf783d8e0dde13595e3d07"} Apr 23 17:55:39.521711 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.521690 2570 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6k42s_e3ff089f-6e59-4e40-9963-8d8eee22970b/ovnkube-controller/0.log" Apr 23 17:55:39.522813 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.522797 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6k42s_e3ff089f-6e59-4e40-9963-8d8eee22970b/kube-rbac-proxy-ovn-metrics/0.log" Apr 23 17:55:39.523168 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.523154 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6k42s_e3ff089f-6e59-4e40-9963-8d8eee22970b/kube-rbac-proxy-node/0.log" Apr 23 17:55:39.523515 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.523498 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6k42s_e3ff089f-6e59-4e40-9963-8d8eee22970b/ovn-acl-logging/0.log" Apr 23 17:55:39.523879 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.523867 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6k42s_e3ff089f-6e59-4e40-9963-8d8eee22970b/ovn-controller/0.log" Apr 23 17:55:39.523998 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.523986 2570 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:39.583204 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.583183 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8tv94"] Apr 23 17:55:39.583319 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.583309 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerName="ovn-controller" Apr 23 17:55:39.583362 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.583321 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerName="ovn-controller" Apr 23 17:55:39.583362 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.583329 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerName="ovn-acl-logging" Apr 23 17:55:39.583362 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.583334 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerName="ovn-acl-logging" Apr 23 17:55:39.583362 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.583341 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerName="kube-rbac-proxy-ovn-metrics" Apr 23 17:55:39.583362 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.583347 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerName="kube-rbac-proxy-ovn-metrics" Apr 23 17:55:39.583362 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.583353 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerName="sbdb" Apr 23 17:55:39.583362 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.583358 2570 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerName="sbdb" Apr 23 17:55:39.583362 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.583365 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerName="ovnkube-controller" Apr 23 17:55:39.583621 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.583370 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerName="ovnkube-controller" Apr 23 17:55:39.583621 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.583376 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerName="kube-rbac-proxy-node" Apr 23 17:55:39.583621 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.583381 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerName="kube-rbac-proxy-node" Apr 23 17:55:39.583621 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.583387 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerName="northd" Apr 23 17:55:39.583621 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.583394 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerName="northd" Apr 23 17:55:39.583621 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.583399 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerName="nbdb" Apr 23 17:55:39.583621 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.583404 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerName="nbdb" Apr 23 17:55:39.583621 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.583442 2570 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerName="sbdb" Apr 23 17:55:39.583621 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.583449 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerName="ovnkube-controller" Apr 23 17:55:39.583621 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.583455 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerName="northd" Apr 23 17:55:39.583621 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.583461 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerName="kube-rbac-proxy-ovn-metrics" Apr 23 17:55:39.583621 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.583467 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerName="nbdb" Apr 23 17:55:39.583621 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.583473 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerName="ovn-acl-logging" Apr 23 17:55:39.583621 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.583478 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerName="kube-rbac-proxy-node" Apr 23 17:55:39.583621 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.583483 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerName="ovn-controller" Apr 23 17:55:39.588368 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.588353 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.629335 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629315 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-run-netns\") pod \"e3ff089f-6e59-4e40-9963-8d8eee22970b\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " Apr 23 17:55:39.629439 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629353 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-run-ovn-kubernetes\") pod \"e3ff089f-6e59-4e40-9963-8d8eee22970b\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " Apr 23 17:55:39.629439 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629385 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e3ff089f-6e59-4e40-9963-8d8eee22970b-env-overrides\") pod \"e3ff089f-6e59-4e40-9963-8d8eee22970b\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " Apr 23 17:55:39.629439 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629430 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-log-socket\") pod \"e3ff089f-6e59-4e40-9963-8d8eee22970b\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " Apr 23 17:55:39.629588 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629432 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "e3ff089f-6e59-4e40-9963-8d8eee22970b" (UID: "e3ff089f-6e59-4e40-9963-8d8eee22970b"). 
InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 23 17:55:39.629588 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629455 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e3ff089f-6e59-4e40-9963-8d8eee22970b-ovnkube-config\") pod \"e3ff089f-6e59-4e40-9963-8d8eee22970b\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " Apr 23 17:55:39.629588 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629432 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "e3ff089f-6e59-4e40-9963-8d8eee22970b" (UID: "e3ff089f-6e59-4e40-9963-8d8eee22970b"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 23 17:55:39.629588 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629478 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-run-systemd\") pod \"e3ff089f-6e59-4e40-9963-8d8eee22970b\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " Apr 23 17:55:39.629588 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629500 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-cni-netd\") pod \"e3ff089f-6e59-4e40-9963-8d8eee22970b\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " Apr 23 17:55:39.629588 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629507 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-log-socket" (OuterVolumeSpecName: "log-socket") pod "e3ff089f-6e59-4e40-9963-8d8eee22970b" (UID: 
"e3ff089f-6e59-4e40-9963-8d8eee22970b"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 23 17:55:39.629588 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629530 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-etc-openvswitch\") pod \"e3ff089f-6e59-4e40-9963-8d8eee22970b\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " Apr 23 17:55:39.629588 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629554 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-node-log\") pod \"e3ff089f-6e59-4e40-9963-8d8eee22970b\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " Apr 23 17:55:39.629588 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629556 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "e3ff089f-6e59-4e40-9963-8d8eee22970b" (UID: "e3ff089f-6e59-4e40-9963-8d8eee22970b"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 23 17:55:39.629588 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629576 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-systemd-units\") pod \"e3ff089f-6e59-4e40-9963-8d8eee22970b\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " Apr 23 17:55:39.629588 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629584 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "e3ff089f-6e59-4e40-9963-8d8eee22970b" (UID: "e3ff089f-6e59-4e40-9963-8d8eee22970b"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 23 17:55:39.630060 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629603 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l47cj\" (UniqueName: \"kubernetes.io/projected/e3ff089f-6e59-4e40-9963-8d8eee22970b-kube-api-access-l47cj\") pod \"e3ff089f-6e59-4e40-9963-8d8eee22970b\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " Apr 23 17:55:39.630060 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629613 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-node-log" (OuterVolumeSpecName: "node-log") pod "e3ff089f-6e59-4e40-9963-8d8eee22970b" (UID: "e3ff089f-6e59-4e40-9963-8d8eee22970b"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 23 17:55:39.630060 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629636 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e3ff089f-6e59-4e40-9963-8d8eee22970b-ovn-node-metrics-cert\") pod \"e3ff089f-6e59-4e40-9963-8d8eee22970b\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " Apr 23 17:55:39.630060 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629660 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-slash\") pod \"e3ff089f-6e59-4e40-9963-8d8eee22970b\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " Apr 23 17:55:39.630060 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629692 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-var-lib-openvswitch\") pod \"e3ff089f-6e59-4e40-9963-8d8eee22970b\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " Apr 23 17:55:39.630060 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629720 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e3ff089f-6e59-4e40-9963-8d8eee22970b-ovnkube-script-lib\") pod \"e3ff089f-6e59-4e40-9963-8d8eee22970b\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " Apr 23 17:55:39.630060 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629637 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "e3ff089f-6e59-4e40-9963-8d8eee22970b" (UID: "e3ff089f-6e59-4e40-9963-8d8eee22970b"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 23 17:55:39.630060 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629744 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-cni-bin\") pod \"e3ff089f-6e59-4e40-9963-8d8eee22970b\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " Apr 23 17:55:39.630060 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629670 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3ff089f-6e59-4e40-9963-8d8eee22970b-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "e3ff089f-6e59-4e40-9963-8d8eee22970b" (UID: "e3ff089f-6e59-4e40-9963-8d8eee22970b"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 17:55:39.630060 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629712 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3ff089f-6e59-4e40-9963-8d8eee22970b-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "e3ff089f-6e59-4e40-9963-8d8eee22970b" (UID: "e3ff089f-6e59-4e40-9963-8d8eee22970b"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 17:55:39.630060 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629737 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-slash" (OuterVolumeSpecName: "host-slash") pod "e3ff089f-6e59-4e40-9963-8d8eee22970b" (UID: "e3ff089f-6e59-4e40-9963-8d8eee22970b"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 23 17:55:39.630060 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629774 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-run-openvswitch\") pod \"e3ff089f-6e59-4e40-9963-8d8eee22970b\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " Apr 23 17:55:39.630060 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629800 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"e3ff089f-6e59-4e40-9963-8d8eee22970b\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " Apr 23 17:55:39.630060 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629821 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-kubelet\") pod \"e3ff089f-6e59-4e40-9963-8d8eee22970b\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " Apr 23 17:55:39.630060 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629836 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "e3ff089f-6e59-4e40-9963-8d8eee22970b" (UID: "e3ff089f-6e59-4e40-9963-8d8eee22970b"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 23 17:55:39.630060 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629844 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-run-ovn\") pod \"e3ff089f-6e59-4e40-9963-8d8eee22970b\" (UID: \"e3ff089f-6e59-4e40-9963-8d8eee22970b\") " Apr 23 17:55:39.630060 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629875 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "e3ff089f-6e59-4e40-9963-8d8eee22970b" (UID: "e3ff089f-6e59-4e40-9963-8d8eee22970b"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 23 17:55:39.630747 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629902 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "e3ff089f-6e59-4e40-9963-8d8eee22970b" (UID: "e3ff089f-6e59-4e40-9963-8d8eee22970b"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 23 17:55:39.630747 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629915 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-host-cni-bin\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.630747 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629930 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "e3ff089f-6e59-4e40-9963-8d8eee22970b" (UID: "e3ff089f-6e59-4e40-9963-8d8eee22970b"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 23 17:55:39.630747 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629944 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/03c738ad-ed27-446e-9deb-dd610cedd26f-ovnkube-script-lib\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.630747 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629955 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "e3ff089f-6e59-4e40-9963-8d8eee22970b" (UID: "e3ff089f-6e59-4e40-9963-8d8eee22970b"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 23 17:55:39.630747 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629967 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3ff089f-6e59-4e40-9963-8d8eee22970b-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "e3ff089f-6e59-4e40-9963-8d8eee22970b" (UID: "e3ff089f-6e59-4e40-9963-8d8eee22970b"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 17:55:39.630747 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.629963 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "e3ff089f-6e59-4e40-9963-8d8eee22970b" (UID: "e3ff089f-6e59-4e40-9963-8d8eee22970b"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 23 17:55:39.630747 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630030 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-host-run-netns\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.630747 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630068 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcmh5\" (UniqueName: \"kubernetes.io/projected/03c738ad-ed27-446e-9deb-dd610cedd26f-kube-api-access-dcmh5\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.630747 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630123 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-host-slash\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.630747 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630155 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-run-systemd\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.630747 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630185 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-run-ovn\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.630747 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630206 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-log-socket\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.630747 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630231 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/03c738ad-ed27-446e-9deb-dd610cedd26f-env-overrides\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.630747 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630249 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-host-run-ovn-kubernetes\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.630747 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630287 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-host-kubelet\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.631540 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630345 2570 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-run-openvswitch\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.631540 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630376 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-var-lib-openvswitch\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.631540 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630399 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/03c738ad-ed27-446e-9deb-dd610cedd26f-ovn-node-metrics-cert\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.631540 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630444 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-host-cni-netd\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.631540 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630491 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-systemd-units\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.631540 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630513 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-etc-openvswitch\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.631540 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630531 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-node-log\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.631540 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630552 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/03c738ad-ed27-446e-9deb-dd610cedd26f-ovnkube-config\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.631540 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630602 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.631540 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630655 2570 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/e3ff089f-6e59-4e40-9963-8d8eee22970b-ovnkube-config\") on node \"ip-10-0-141-209.ec2.internal\" DevicePath \"\"" Apr 23 17:55:39.631540 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630671 2570 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-cni-netd\") on node \"ip-10-0-141-209.ec2.internal\" DevicePath \"\"" Apr 23 17:55:39.631540 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630682 2570 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-etc-openvswitch\") on node \"ip-10-0-141-209.ec2.internal\" DevicePath \"\"" Apr 23 17:55:39.631540 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630692 2570 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-node-log\") on node \"ip-10-0-141-209.ec2.internal\" DevicePath \"\"" Apr 23 17:55:39.631540 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630701 2570 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-systemd-units\") on node \"ip-10-0-141-209.ec2.internal\" DevicePath \"\"" Apr 23 17:55:39.631540 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630709 2570 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-slash\") on node \"ip-10-0-141-209.ec2.internal\" DevicePath \"\"" Apr 23 17:55:39.631540 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630719 2570 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-var-lib-openvswitch\") on node \"ip-10-0-141-209.ec2.internal\" DevicePath \"\"" 
Apr 23 17:55:39.631540 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630729 2570 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e3ff089f-6e59-4e40-9963-8d8eee22970b-ovnkube-script-lib\") on node \"ip-10-0-141-209.ec2.internal\" DevicePath \"\"" Apr 23 17:55:39.631540 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630742 2570 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-cni-bin\") on node \"ip-10-0-141-209.ec2.internal\" DevicePath \"\"" Apr 23 17:55:39.631540 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630751 2570 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-run-openvswitch\") on node \"ip-10-0-141-209.ec2.internal\" DevicePath \"\"" Apr 23 17:55:39.632470 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630761 2570 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-var-lib-cni-networks-ovn-kubernetes\") on node \"ip-10-0-141-209.ec2.internal\" DevicePath \"\"" Apr 23 17:55:39.632470 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630770 2570 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-kubelet\") on node \"ip-10-0-141-209.ec2.internal\" DevicePath \"\"" Apr 23 17:55:39.632470 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630779 2570 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-run-ovn\") on node \"ip-10-0-141-209.ec2.internal\" DevicePath \"\"" Apr 23 17:55:39.632470 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630787 2570 
reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-run-netns\") on node \"ip-10-0-141-209.ec2.internal\" DevicePath \"\"" Apr 23 17:55:39.632470 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630796 2570 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-host-run-ovn-kubernetes\") on node \"ip-10-0-141-209.ec2.internal\" DevicePath \"\"" Apr 23 17:55:39.632470 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630807 2570 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e3ff089f-6e59-4e40-9963-8d8eee22970b-env-overrides\") on node \"ip-10-0-141-209.ec2.internal\" DevicePath \"\"" Apr 23 17:55:39.632470 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.630818 2570 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-log-socket\") on node \"ip-10-0-141-209.ec2.internal\" DevicePath \"\"" Apr 23 17:55:39.633269 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.633176 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3ff089f-6e59-4e40-9963-8d8eee22970b-kube-api-access-l47cj" (OuterVolumeSpecName: "kube-api-access-l47cj") pod "e3ff089f-6e59-4e40-9963-8d8eee22970b" (UID: "e3ff089f-6e59-4e40-9963-8d8eee22970b"). InnerVolumeSpecName "kube-api-access-l47cj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 17:55:39.633348 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.633295 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3ff089f-6e59-4e40-9963-8d8eee22970b-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "e3ff089f-6e59-4e40-9963-8d8eee22970b" (UID: "e3ff089f-6e59-4e40-9963-8d8eee22970b"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 17:55:39.633956 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.633939 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "e3ff089f-6e59-4e40-9963-8d8eee22970b" (UID: "e3ff089f-6e59-4e40-9963-8d8eee22970b"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 23 17:55:39.635254 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:39.635230 2570 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Apr 23 17:55:39.732001 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.731959 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-host-run-netns\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.732001 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.731986 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dcmh5\" (UniqueName: \"kubernetes.io/projected/03c738ad-ed27-446e-9deb-dd610cedd26f-kube-api-access-dcmh5\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.732134 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732012 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-host-slash\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.732134 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732028 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-run-systemd\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.732134 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732051 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-run-ovn\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.732134 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732072 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-log-socket\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.732134 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732094 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-host-run-netns\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.732134 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732103 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-host-slash\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.732134 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732097 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/03c738ad-ed27-446e-9deb-dd610cedd26f-env-overrides\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.732134 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732120 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-run-systemd\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.732134 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732126 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-run-ovn\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.732566 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732144 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-log-socket\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.732566 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732156 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-host-run-ovn-kubernetes\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.732566 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732187 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-host-kubelet\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.732566 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732209 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-run-openvswitch\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.732566 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732232 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-host-kubelet\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.732566 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732236 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-var-lib-openvswitch\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.732566 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732261 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/03c738ad-ed27-446e-9deb-dd610cedd26f-ovn-node-metrics-cert\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.732566 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732262 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-run-openvswitch\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.732566 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732268 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-var-lib-openvswitch\") pod \"ovnkube-node-8tv94\" (UID: 
\"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.732566 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732234 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-host-run-ovn-kubernetes\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.732566 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732364 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-host-cni-netd\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.732566 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732454 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-host-cni-netd\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.732566 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732471 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-systemd-units\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.732566 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732501 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-etc-openvswitch\") pod 
\"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.732566 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732525 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-node-log\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.732566 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732552 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-systemd-units\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.732566 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732555 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-etc-openvswitch\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.733231 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732556 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/03c738ad-ed27-446e-9deb-dd610cedd26f-ovnkube-config\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.733231 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732580 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-node-log\") pod \"ovnkube-node-8tv94\" (UID: 
\"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.733231 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732601 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.733231 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732624 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-host-cni-bin\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.733231 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732646 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/03c738ad-ed27-446e-9deb-dd610cedd26f-ovnkube-script-lib\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.733231 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732665 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.733231 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732681 2570 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/e3ff089f-6e59-4e40-9963-8d8eee22970b-run-systemd\") on node \"ip-10-0-141-209.ec2.internal\" DevicePath \"\"" Apr 23 17:55:39.733231 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732692 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/03c738ad-ed27-446e-9deb-dd610cedd26f-env-overrides\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.733231 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732697 2570 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l47cj\" (UniqueName: \"kubernetes.io/projected/e3ff089f-6e59-4e40-9963-8d8eee22970b-kube-api-access-l47cj\") on node \"ip-10-0-141-209.ec2.internal\" DevicePath \"\"" Apr 23 17:55:39.733231 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732699 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03c738ad-ed27-446e-9deb-dd610cedd26f-host-cni-bin\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.733231 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.732712 2570 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e3ff089f-6e59-4e40-9963-8d8eee22970b-ovn-node-metrics-cert\") on node \"ip-10-0-141-209.ec2.internal\" DevicePath \"\"" Apr 23 17:55:39.733231 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.733026 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/03c738ad-ed27-446e-9deb-dd610cedd26f-ovnkube-config\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.733231 
ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.733043 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/03c738ad-ed27-446e-9deb-dd610cedd26f-ovnkube-script-lib\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.734175 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.734158 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/03c738ad-ed27-446e-9deb-dd610cedd26f-ovn-node-metrics-cert\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.740967 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.740952 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcmh5\" (UniqueName: \"kubernetes.io/projected/03c738ad-ed27-446e-9deb-dd610cedd26f-kube-api-access-dcmh5\") pod \"ovnkube-node-8tv94\" (UID: \"03c738ad-ed27-446e-9deb-dd610cedd26f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:39.896427 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:39.896389 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:40.026646 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.026623 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" event={"ID":"03c738ad-ed27-446e-9deb-dd610cedd26f","Type":"ContainerStarted","Data":"2cdf329d49c6e047fd116b6e1781988586cab583a8a196dc0ee1d73eb277bb1a"} Apr 23 17:55:40.026750 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.026652 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" event={"ID":"03c738ad-ed27-446e-9deb-dd610cedd26f","Type":"ContainerStarted","Data":"43d2edddfe2aa214612c82b967deb756ebad1acce350b96272d0903c865e80a5"} Apr 23 17:55:40.026750 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.026674 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" event={"ID":"03c738ad-ed27-446e-9deb-dd610cedd26f","Type":"ContainerStarted","Data":"8952859bb119c644a534f3178397cfe697b65b24a7490921dcb1b0f461b8fc86"} Apr 23 17:55:40.027732 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.027717 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6k42s_e3ff089f-6e59-4e40-9963-8d8eee22970b/ovnkube-controller/0.log" Apr 23 17:55:40.028832 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.028817 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6k42s_e3ff089f-6e59-4e40-9963-8d8eee22970b/kube-rbac-proxy-ovn-metrics/0.log" Apr 23 17:55:40.029155 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.029143 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6k42s_e3ff089f-6e59-4e40-9963-8d8eee22970b/kube-rbac-proxy-node/0.log" Apr 23 17:55:40.029467 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.029454 2570 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6k42s_e3ff089f-6e59-4e40-9963-8d8eee22970b/ovn-acl-logging/0.log" Apr 23 17:55:40.029807 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.029792 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6k42s_e3ff089f-6e59-4e40-9963-8d8eee22970b/ovn-controller/0.log" Apr 23 17:55:40.029867 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.029831 2570 generic.go:358] "Generic (PLEG): container finished" podID="e3ff089f-6e59-4e40-9963-8d8eee22970b" containerID="5364ebe3d141a7524833c7ac857641c34319e151c33ad475f306544912eaca31" exitCode=1 Apr 23 17:55:40.029927 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.029870 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" event={"ID":"e3ff089f-6e59-4e40-9963-8d8eee22970b","Type":"ContainerDied","Data":"5364ebe3d141a7524833c7ac857641c34319e151c33ad475f306544912eaca31"} Apr 23 17:55:40.029927 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.029896 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" event={"ID":"e3ff089f-6e59-4e40-9963-8d8eee22970b","Type":"ContainerDied","Data":"077afafb9e40fd040af29d46e048f5cc8bf9efbcd6d83af0231c17e52b5c796e"} Apr 23 17:55:40.029927 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.029918 2570 scope.go:117] "RemoveContainer" containerID="5364ebe3d141a7524833c7ac857641c34319e151c33ad475f306544912eaca31" Apr 23 17:55:40.029927 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.029925 2570 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6k42s" Apr 23 17:55:40.037280 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.037263 2570 scope.go:117] "RemoveContainer" containerID="6a098597eaa27af5490f3bff0be54436044b82311b0b4cc38750f2850b969537" Apr 23 17:55:40.045533 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.045504 2570 scope.go:117] "RemoveContainer" containerID="203e134d4a04dcb3ff2574aa9588adbd4b5646f628773d4ab7e6805d4d9ea031" Apr 23 17:55:40.051437 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.051404 2570 scope.go:117] "RemoveContainer" containerID="046cae60a8f0044d09861431c72a3f82d39ed94b7af87bfdf84b3b9e4b0ef318" Apr 23 17:55:40.056740 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.056721 2570 scope.go:117] "RemoveContainer" containerID="b4dcd91e7c8d3fbd6d836034d2cdfdf5b7aaa2530c1722bd869737cb92a0a630" Apr 23 17:55:40.060932 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.060914 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6k42s"] Apr 23 17:55:40.063563 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.063539 2570 scope.go:117] "RemoveContainer" containerID="0acd4ae9451ee3558f0e1b4fdfbf694a74a51a722e69992fdd76ebbe3518d54a" Apr 23 17:55:40.065569 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.065546 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6k42s"] Apr 23 17:55:40.069138 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.069123 2570 scope.go:117] "RemoveContainer" containerID="02b9c93d70dfd4b984280fc31b6c5edcbf137f0cc78b08c2c3a639b982b26a3e" Apr 23 17:55:40.083766 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.083749 2570 scope.go:117] "RemoveContainer" containerID="826c9d9897f22ce5524a403eaab796f93b8592e6acaf783d8e0dde13595e3d07" Apr 23 17:55:40.098233 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.098216 2570 scope.go:117] "RemoveContainer" 
containerID="5364ebe3d141a7524833c7ac857641c34319e151c33ad475f306544912eaca31" Apr 23 17:55:40.098499 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:40.098477 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5364ebe3d141a7524833c7ac857641c34319e151c33ad475f306544912eaca31\": container with ID starting with 5364ebe3d141a7524833c7ac857641c34319e151c33ad475f306544912eaca31 not found: ID does not exist" containerID="5364ebe3d141a7524833c7ac857641c34319e151c33ad475f306544912eaca31" Apr 23 17:55:40.098563 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.098506 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5364ebe3d141a7524833c7ac857641c34319e151c33ad475f306544912eaca31"} err="failed to get container status \"5364ebe3d141a7524833c7ac857641c34319e151c33ad475f306544912eaca31\": rpc error: code = NotFound desc = could not find container \"5364ebe3d141a7524833c7ac857641c34319e151c33ad475f306544912eaca31\": container with ID starting with 5364ebe3d141a7524833c7ac857641c34319e151c33ad475f306544912eaca31 not found: ID does not exist" Apr 23 17:55:40.098563 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.098537 2570 scope.go:117] "RemoveContainer" containerID="6a098597eaa27af5490f3bff0be54436044b82311b0b4cc38750f2850b969537" Apr 23 17:55:40.098790 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:40.098773 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a098597eaa27af5490f3bff0be54436044b82311b0b4cc38750f2850b969537\": container with ID starting with 6a098597eaa27af5490f3bff0be54436044b82311b0b4cc38750f2850b969537 not found: ID does not exist" containerID="6a098597eaa27af5490f3bff0be54436044b82311b0b4cc38750f2850b969537" Apr 23 17:55:40.098841 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.098793 2570 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"6a098597eaa27af5490f3bff0be54436044b82311b0b4cc38750f2850b969537"} err="failed to get container status \"6a098597eaa27af5490f3bff0be54436044b82311b0b4cc38750f2850b969537\": rpc error: code = NotFound desc = could not find container \"6a098597eaa27af5490f3bff0be54436044b82311b0b4cc38750f2850b969537\": container with ID starting with 6a098597eaa27af5490f3bff0be54436044b82311b0b4cc38750f2850b969537 not found: ID does not exist" Apr 23 17:55:40.098841 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.098805 2570 scope.go:117] "RemoveContainer" containerID="203e134d4a04dcb3ff2574aa9588adbd4b5646f628773d4ab7e6805d4d9ea031" Apr 23 17:55:40.099033 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:40.099018 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"203e134d4a04dcb3ff2574aa9588adbd4b5646f628773d4ab7e6805d4d9ea031\": container with ID starting with 203e134d4a04dcb3ff2574aa9588adbd4b5646f628773d4ab7e6805d4d9ea031 not found: ID does not exist" containerID="203e134d4a04dcb3ff2574aa9588adbd4b5646f628773d4ab7e6805d4d9ea031" Apr 23 17:55:40.099073 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.099034 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"203e134d4a04dcb3ff2574aa9588adbd4b5646f628773d4ab7e6805d4d9ea031"} err="failed to get container status \"203e134d4a04dcb3ff2574aa9588adbd4b5646f628773d4ab7e6805d4d9ea031\": rpc error: code = NotFound desc = could not find container \"203e134d4a04dcb3ff2574aa9588adbd4b5646f628773d4ab7e6805d4d9ea031\": container with ID starting with 203e134d4a04dcb3ff2574aa9588adbd4b5646f628773d4ab7e6805d4d9ea031 not found: ID does not exist" Apr 23 17:55:40.099073 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.099046 2570 scope.go:117] "RemoveContainer" containerID="046cae60a8f0044d09861431c72a3f82d39ed94b7af87bfdf84b3b9e4b0ef318" Apr 23 17:55:40.099266 
ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:40.099252 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"046cae60a8f0044d09861431c72a3f82d39ed94b7af87bfdf84b3b9e4b0ef318\": container with ID starting with 046cae60a8f0044d09861431c72a3f82d39ed94b7af87bfdf84b3b9e4b0ef318 not found: ID does not exist" containerID="046cae60a8f0044d09861431c72a3f82d39ed94b7af87bfdf84b3b9e4b0ef318" Apr 23 17:55:40.099314 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.099268 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"046cae60a8f0044d09861431c72a3f82d39ed94b7af87bfdf84b3b9e4b0ef318"} err="failed to get container status \"046cae60a8f0044d09861431c72a3f82d39ed94b7af87bfdf84b3b9e4b0ef318\": rpc error: code = NotFound desc = could not find container \"046cae60a8f0044d09861431c72a3f82d39ed94b7af87bfdf84b3b9e4b0ef318\": container with ID starting with 046cae60a8f0044d09861431c72a3f82d39ed94b7af87bfdf84b3b9e4b0ef318 not found: ID does not exist" Apr 23 17:55:40.099314 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.099280 2570 scope.go:117] "RemoveContainer" containerID="b4dcd91e7c8d3fbd6d836034d2cdfdf5b7aaa2530c1722bd869737cb92a0a630" Apr 23 17:55:40.099573 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:40.099559 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4dcd91e7c8d3fbd6d836034d2cdfdf5b7aaa2530c1722bd869737cb92a0a630\": container with ID starting with b4dcd91e7c8d3fbd6d836034d2cdfdf5b7aaa2530c1722bd869737cb92a0a630 not found: ID does not exist" containerID="b4dcd91e7c8d3fbd6d836034d2cdfdf5b7aaa2530c1722bd869737cb92a0a630" Apr 23 17:55:40.099610 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.099577 2570 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b4dcd91e7c8d3fbd6d836034d2cdfdf5b7aaa2530c1722bd869737cb92a0a630"} err="failed to get container status \"b4dcd91e7c8d3fbd6d836034d2cdfdf5b7aaa2530c1722bd869737cb92a0a630\": rpc error: code = NotFound desc = could not find container \"b4dcd91e7c8d3fbd6d836034d2cdfdf5b7aaa2530c1722bd869737cb92a0a630\": container with ID starting with b4dcd91e7c8d3fbd6d836034d2cdfdf5b7aaa2530c1722bd869737cb92a0a630 not found: ID does not exist" Apr 23 17:55:40.099610 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.099589 2570 scope.go:117] "RemoveContainer" containerID="0acd4ae9451ee3558f0e1b4fdfbf694a74a51a722e69992fdd76ebbe3518d54a" Apr 23 17:55:40.099787 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:40.099770 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0acd4ae9451ee3558f0e1b4fdfbf694a74a51a722e69992fdd76ebbe3518d54a\": container with ID starting with 0acd4ae9451ee3558f0e1b4fdfbf694a74a51a722e69992fdd76ebbe3518d54a not found: ID does not exist" containerID="0acd4ae9451ee3558f0e1b4fdfbf694a74a51a722e69992fdd76ebbe3518d54a" Apr 23 17:55:40.099844 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.099790 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0acd4ae9451ee3558f0e1b4fdfbf694a74a51a722e69992fdd76ebbe3518d54a"} err="failed to get container status \"0acd4ae9451ee3558f0e1b4fdfbf694a74a51a722e69992fdd76ebbe3518d54a\": rpc error: code = NotFound desc = could not find container \"0acd4ae9451ee3558f0e1b4fdfbf694a74a51a722e69992fdd76ebbe3518d54a\": container with ID starting with 0acd4ae9451ee3558f0e1b4fdfbf694a74a51a722e69992fdd76ebbe3518d54a not found: ID does not exist" Apr 23 17:55:40.099844 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.099800 2570 scope.go:117] "RemoveContainer" containerID="02b9c93d70dfd4b984280fc31b6c5edcbf137f0cc78b08c2c3a639b982b26a3e" Apr 23 17:55:40.099980 ip-10-0-141-209 
kubenswrapper[2570]: E0423 17:55:40.099967 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02b9c93d70dfd4b984280fc31b6c5edcbf137f0cc78b08c2c3a639b982b26a3e\": container with ID starting with 02b9c93d70dfd4b984280fc31b6c5edcbf137f0cc78b08c2c3a639b982b26a3e not found: ID does not exist" containerID="02b9c93d70dfd4b984280fc31b6c5edcbf137f0cc78b08c2c3a639b982b26a3e" Apr 23 17:55:40.100019 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.099982 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02b9c93d70dfd4b984280fc31b6c5edcbf137f0cc78b08c2c3a639b982b26a3e"} err="failed to get container status \"02b9c93d70dfd4b984280fc31b6c5edcbf137f0cc78b08c2c3a639b982b26a3e\": rpc error: code = NotFound desc = could not find container \"02b9c93d70dfd4b984280fc31b6c5edcbf137f0cc78b08c2c3a639b982b26a3e\": container with ID starting with 02b9c93d70dfd4b984280fc31b6c5edcbf137f0cc78b08c2c3a639b982b26a3e not found: ID does not exist" Apr 23 17:55:40.100019 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.099993 2570 scope.go:117] "RemoveContainer" containerID="826c9d9897f22ce5524a403eaab796f93b8592e6acaf783d8e0dde13595e3d07" Apr 23 17:55:40.100202 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:40.100186 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"826c9d9897f22ce5524a403eaab796f93b8592e6acaf783d8e0dde13595e3d07\": container with ID starting with 826c9d9897f22ce5524a403eaab796f93b8592e6acaf783d8e0dde13595e3d07 not found: ID does not exist" containerID="826c9d9897f22ce5524a403eaab796f93b8592e6acaf783d8e0dde13595e3d07" Apr 23 17:55:40.100240 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.100204 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"826c9d9897f22ce5524a403eaab796f93b8592e6acaf783d8e0dde13595e3d07"} 
err="failed to get container status \"826c9d9897f22ce5524a403eaab796f93b8592e6acaf783d8e0dde13595e3d07\": rpc error: code = NotFound desc = could not find container \"826c9d9897f22ce5524a403eaab796f93b8592e6acaf783d8e0dde13595e3d07\": container with ID starting with 826c9d9897f22ce5524a403eaab796f93b8592e6acaf783d8e0dde13595e3d07 not found: ID does not exist" Apr 23 17:55:40.672404 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.672377 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:55:40.672946 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.672404 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:55:40.672946 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.672443 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jhssk" Apr 23 17:55:40.672946 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:40.672514 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:55:40.672946 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:40.672584 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-jhssk" podUID="2618335a-7a85-4193-a71d-15eaab4cb7f1" Apr 23 17:55:40.672946 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:40.672662 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-757d7" podUID="c339fa02-2445-42c2-b7ee-5388fb338129" Apr 23 17:55:40.675206 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:40.675188 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3ff089f-6e59-4e40-9963-8d8eee22970b" path="/var/lib/kubelet/pods/e3ff089f-6e59-4e40-9963-8d8eee22970b/volumes" Apr 23 17:55:41.034590 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:41.034570 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" event={"ID":"03c738ad-ed27-446e-9deb-dd610cedd26f","Type":"ContainerStarted","Data":"5b534009196be3c8f982bc02dd84a9e7daff1696c3f3d046e90cf82b861a1a89"} Apr 23 17:55:41.034704 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:41.034594 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" event={"ID":"03c738ad-ed27-446e-9deb-dd610cedd26f","Type":"ContainerStarted","Data":"616f0ca017cd7ee2fd1aae25dce798c1168277b47c6216f66758b20777be8566"} Apr 23 17:55:41.034704 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:41.034606 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" event={"ID":"03c738ad-ed27-446e-9deb-dd610cedd26f","Type":"ContainerStarted","Data":"65a4028e1614b5e81778493892f8eeaf8f5635d2004329653f023b746068ddd8"} Apr 23 17:55:41.034704 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:41.034618 2570 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" event={"ID":"03c738ad-ed27-446e-9deb-dd610cedd26f","Type":"ContainerStarted","Data":"a1dd647ef2659285718b3750d3d5bb54b6973e255824a2c25940f0d8ce8a7296"} Apr 23 17:55:42.672577 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:42.672546 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:55:42.672962 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:42.672553 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jhssk" Apr 23 17:55:42.672962 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:42.672656 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-757d7" podUID="c339fa02-2445-42c2-b7ee-5388fb338129" Apr 23 17:55:42.672962 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:42.672731 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jhssk" podUID="2618335a-7a85-4193-a71d-15eaab4cb7f1" Apr 23 17:55:42.672962 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:42.672775 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:55:42.672962 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:42.672828 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mlz2c" podUID="90608d2c-b6cd-4fca-968e-9fc7cbf593f8" Apr 23 17:55:43.041768 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:43.041704 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" event={"ID":"03c738ad-ed27-446e-9deb-dd610cedd26f","Type":"ContainerStarted","Data":"c8c13bd59f2c64a55e252787876c9f85f015052fb717539e54688edf22f613ee"} Apr 23 17:55:44.672873 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:44.672847 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c" Apr 23 17:55:44.673238 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:44.673009 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jhssk" Apr 23 17:55:44.673238 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:44.673107 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:55:44.675832 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:44.675813 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Apr 23 17:55:44.675925 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:44.675900 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Apr 23 17:55:44.676120 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:44.676104 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Apr 23 17:55:44.676589 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:44.676571 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-s85dw\"" Apr 23 17:55:44.676658 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:44.676637 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"original-pull-secret\"" Apr 23 17:55:44.676718 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:44.676664 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-brp64\"" Apr 23 17:55:45.048504 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:45.048402 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" event={"ID":"03c738ad-ed27-446e-9deb-dd610cedd26f","Type":"ContainerStarted","Data":"e2f81fc9c8d8d336c309149202af4fa63b435c49cdcd40cc2e642d9c92854824"} Apr 23 17:55:45.048813 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:45.048793 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:55:45.048899 ip-10-0-141-209 kubenswrapper[2570]: I0423 
17:55:45.048819 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-8tv94"
Apr 23 17:55:45.063679 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:45.063656 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8tv94"
Apr 23 17:55:45.076930 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:45.076883 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" podStartSLOduration=6.07687221 podStartE2EDuration="6.07687221s" podCreationTimestamp="2026-04-23 17:55:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:55:45.076702142 +0000 UTC m=+201.016716301" watchObservedRunningTime="2026-04-23 17:55:45.07687221 +0000 UTC m=+201.016886350"
Apr 23 17:55:46.051391 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:46.051363 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-8tv94"
Apr 23 17:55:46.064915 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:46.064889 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8tv94"
Apr 23 17:55:48.583144 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:48.583111 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gsq24\" (UniqueName: \"kubernetes.io/projected/2618335a-7a85-4193-a71d-15eaab4cb7f1-kube-api-access-gsq24\") pod \"network-check-target-jhssk\" (UID: \"2618335a-7a85-4193-a71d-15eaab4cb7f1\") " pod="openshift-network-diagnostics/network-check-target-jhssk"
Apr 23 17:55:48.586811 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:48.586786 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsq24\" (UniqueName: \"kubernetes.io/projected/2618335a-7a85-4193-a71d-15eaab4cb7f1-kube-api-access-gsq24\") pod \"network-check-target-jhssk\" (UID: \"2618335a-7a85-4193-a71d-15eaab4cb7f1\") " pod="openshift-network-diagnostics/network-check-target-jhssk"
Apr 23 17:55:48.586885 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:48.586870 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jhssk"
Apr 23 17:55:48.752898 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:48.752873 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-jhssk"]
Apr 23 17:55:48.756151 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:55:48.756126 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2618335a_7a85_4193_a71d_15eaab4cb7f1.slice/crio-8eab7b660312f3226e4fc208c87bf923f5eb302140cb7881239cff9cc31d5537 WatchSource:0}: Error finding container 8eab7b660312f3226e4fc208c87bf923f5eb302140cb7881239cff9cc31d5537: Status 404 returned error can't find the container with id 8eab7b660312f3226e4fc208c87bf923f5eb302140cb7881239cff9cc31d5537
Apr 23 17:55:49.048219 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.048198 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-209.ec2.internal" event="NodeReady"
Apr 23 17:55:49.057392 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.057356 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-jhssk" event={"ID":"2618335a-7a85-4193-a71d-15eaab4cb7f1","Type":"ContainerStarted","Data":"8eab7b660312f3226e4fc208c87bf923f5eb302140cb7881239cff9cc31d5537"}
Apr 23 17:55:49.084616 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.084581 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-d6fc45fc5-9t7hk"]
Apr 23 17:55:49.110296 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.110275 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-75587bd455-7hdwh"]
Apr 23 17:55:49.110480 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.110458 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-9t7hk"
Apr 23 17:55:49.113333 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.113138 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Apr 23 17:55:49.113333 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.113220 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Apr 23 17:55:49.113823 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.113629 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-h4b2l\""
Apr 23 17:55:49.113823 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.113693 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Apr 23 17:55:49.113990 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.113936 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Apr 23 17:55:49.121014 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.120993 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-w65xt"]
Apr 23 17:55:49.121154 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.121128 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-7hdwh"
Apr 23 17:55:49.123736 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.123714 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"cluster-monitoring-operator-dockercfg-xk9kj\""
Apr 23 17:55:49.124189 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.124019 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"cluster-monitoring-operator-tls\""
Apr 23 17:55:49.124189 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.124125 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kube-root-ca.crt\""
Apr 23 17:55:49.125024 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.125001 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"telemetry-config\""
Apr 23 17:55:49.125705 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.125682 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"openshift-service-ca.crt\""
Apr 23 17:55:49.133481 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.133459 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-9d4b6777b-dzhrv"]
Apr 23 17:55:49.133634 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.133614 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-w65xt"
Apr 23 17:55:49.141857 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.141760 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Apr 23 17:55:49.141970 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.141797 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Apr 23 17:55:49.142025 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.142002 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Apr 23 17:55:49.142171 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.142152 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-6lqdk\""
Apr 23 17:55:49.146520 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.146501 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-64b76f7768-4tktg"]
Apr 23 17:55:49.146660 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.146643 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-9d4b6777b-dzhrv"
Apr 23 17:55:49.148592 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.148570 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-dktcp\""
Apr 23 17:55:49.148836 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.148820 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\""
Apr 23 17:55:49.148918 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.148862 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\""
Apr 23 17:55:49.149134 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.149113 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\""
Apr 23 17:55:49.149241 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.149159 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\""
Apr 23 17:55:49.155778 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.155758 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\""
Apr 23 17:55:49.160562 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.160544 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-2t8xp"]
Apr 23 17:55:49.160719 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.160694 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-64b76f7768-4tktg"
Apr 23 17:55:49.163062 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.163045 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Apr 23 17:55:49.163355 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.163340 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-f2pzm\""
Apr 23 17:55:49.163404 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.163347 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"default-ingress-cert\""
Apr 23 17:55:49.163404 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.163384 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Apr 23 17:55:49.163646 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.163630 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Apr 23 17:55:49.163711 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.163665 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Apr 23 17:55:49.163855 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.163840 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Apr 23 17:55:49.175813 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.175797 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-vtfgg"]
Apr 23 17:55:49.175892 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.175882 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-2t8xp"
Apr 23 17:55:49.178037 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.178020 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-storage-operator\"/\"openshift-service-ca.crt\""
Apr 23 17:55:49.178145 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.178075 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-storage-operator\"/\"volume-data-source-validator-dockercfg-h6pwb\""
Apr 23 17:55:49.178145 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.178090 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-storage-operator\"/\"kube-root-ca.crt\""
Apr 23 17:55:49.187462 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.187443 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7pbm\" (UniqueName: \"kubernetes.io/projected/d39ff941-5050-456f-862b-ec6962d9c97c-kube-api-access-l7pbm\") pod \"cluster-samples-operator-6dc5bdb6b4-w65xt\" (UID: \"d39ff941-5050-456f-862b-ec6962d9c97c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-w65xt"
Apr 23 17:55:49.187565 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.187476 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wdfl\" (UniqueName: \"kubernetes.io/projected/f9384d4c-b454-4102-b989-7bd167cee9f4-kube-api-access-7wdfl\") pod \"router-default-64b76f7768-4tktg\" (UID: \"f9384d4c-b454-4102-b989-7bd167cee9f4\") " pod="openshift-ingress/router-default-64b76f7768-4tktg"
Apr 23 17:55:49.187565 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.187510 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f006f82-b551-4a19-b684-091814d45d54-serving-cert\") pod \"console-operator-9d4b6777b-dzhrv\" (UID: \"3f006f82-b551-4a19-b684-091814d45d54\") " pod="openshift-console-operator/console-operator-9d4b6777b-dzhrv"
Apr 23 17:55:49.187565 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.187536 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9384d4c-b454-4102-b989-7bd167cee9f4-service-ca-bundle\") pod \"router-default-64b76f7768-4tktg\" (UID: \"f9384d4c-b454-4102-b989-7bd167cee9f4\") " pod="openshift-ingress/router-default-64b76f7768-4tktg"
Apr 23 17:55:49.187711 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.187565 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9384d4c-b454-4102-b989-7bd167cee9f4-metrics-certs\") pod \"router-default-64b76f7768-4tktg\" (UID: \"f9384d4c-b454-4102-b989-7bd167cee9f4\") " pod="openshift-ingress/router-default-64b76f7768-4tktg"
Apr 23 17:55:49.187711 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.187591 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d39ff941-5050-456f-862b-ec6962d9c97c-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-w65xt\" (UID: \"d39ff941-5050-456f-862b-ec6962d9c97c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-w65xt"
Apr 23 17:55:49.187711 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.187623 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f006f82-b551-4a19-b684-091814d45d54-config\") pod \"console-operator-9d4b6777b-dzhrv\" (UID: \"3f006f82-b551-4a19-b684-091814d45d54\") " pod="openshift-console-operator/console-operator-9d4b6777b-dzhrv"
Apr 23 17:55:49.187711 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.187653 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d78b9e26-7f79-4f68-ad87-4e237752c338-serving-cert\") pod \"service-ca-operator-d6fc45fc5-9t7hk\" (UID: \"d78b9e26-7f79-4f68-ad87-4e237752c338\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-9t7hk"
Apr 23 17:55:49.187711 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.187694 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqg6r\" (UniqueName: \"kubernetes.io/projected/d78b9e26-7f79-4f68-ad87-4e237752c338-kube-api-access-xqg6r\") pod \"service-ca-operator-d6fc45fc5-9t7hk\" (UID: \"d78b9e26-7f79-4f68-ad87-4e237752c338\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-9t7hk"
Apr 23 17:55:49.187885 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.187720 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f9384d4c-b454-4102-b989-7bd167cee9f4-default-certificate\") pod \"router-default-64b76f7768-4tktg\" (UID: \"f9384d4c-b454-4102-b989-7bd167cee9f4\") " pod="openshift-ingress/router-default-64b76f7768-4tktg"
Apr 23 17:55:49.187885 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.187749 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f006f82-b551-4a19-b684-091814d45d54-trusted-ca\") pod \"console-operator-9d4b6777b-dzhrv\" (UID: \"3f006f82-b551-4a19-b684-091814d45d54\") " pod="openshift-console-operator/console-operator-9d4b6777b-dzhrv"
Apr 23 17:55:49.187885 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.187768 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg2rq\" (UniqueName: \"kubernetes.io/projected/3f006f82-b551-4a19-b684-091814d45d54-kube-api-access-xg2rq\") pod \"console-operator-9d4b6777b-dzhrv\" (UID: \"3f006f82-b551-4a19-b684-091814d45d54\") " pod="openshift-console-operator/console-operator-9d4b6777b-dzhrv"
Apr 23 17:55:49.187885 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.187790 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/dcab8057-5f29-4508-9628-e8ee8882286b-telemetry-config\") pod \"cluster-monitoring-operator-75587bd455-7hdwh\" (UID: \"dcab8057-5f29-4508-9628-e8ee8882286b\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-7hdwh"
Apr 23 17:55:49.187885 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.187824 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d78b9e26-7f79-4f68-ad87-4e237752c338-config\") pod \"service-ca-operator-d6fc45fc5-9t7hk\" (UID: \"d78b9e26-7f79-4f68-ad87-4e237752c338\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-9t7hk"
Apr 23 17:55:49.187885 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.187863 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f9384d4c-b454-4102-b989-7bd167cee9f4-stats-auth\") pod \"router-default-64b76f7768-4tktg\" (UID: \"f9384d4c-b454-4102-b989-7bd167cee9f4\") " pod="openshift-ingress/router-default-64b76f7768-4tktg"
Apr 23 17:55:49.188079 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.187885 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dcab8057-5f29-4508-9628-e8ee8882286b-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-75587bd455-7hdwh\" (UID: \"dcab8057-5f29-4508-9628-e8ee8882286b\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-7hdwh"
Apr 23 17:55:49.188079 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.187914 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz9db\" (UniqueName: \"kubernetes.io/projected/dcab8057-5f29-4508-9628-e8ee8882286b-kube-api-access-dz9db\") pod \"cluster-monitoring-operator-75587bd455-7hdwh\" (UID: \"dcab8057-5f29-4508-9628-e8ee8882286b\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-7hdwh"
Apr 23 17:55:49.193304 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.193287 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-585dfdc468-hd2z2"]
Apr 23 17:55:49.193359 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.193316 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-vtfgg"
Apr 23 17:55:49.195660 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.195644 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\""
Apr 23 17:55:49.195753 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.195672 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-7vbf9\""
Apr 23 17:55:49.195810 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.195758 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Apr 23 17:55:49.195994 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.195976 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Apr 23 17:55:49.196087 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.195979 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\""
Apr 23 17:55:49.201136 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.201121 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-69557c4bff-g4kkh"]
Apr 23 17:55:49.201263 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.201247 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-585dfdc468-hd2z2"
Apr 23 17:55:49.203065 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.203044 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"openshift-insights-serving-cert\""
Apr 23 17:55:49.203155 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.203050 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"openshift-service-ca.crt\""
Apr 23 17:55:49.203155 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.203080 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"operator-dockercfg-pcvh7\""
Apr 23 17:55:49.203155 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.203127 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-root-ca.crt\""
Apr 23 17:55:49.203311 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.203196 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"service-ca-bundle\""
Apr 23 17:55:49.209896 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.209874 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"trusted-ca-bundle\""
Apr 23 17:55:49.211218 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.211202 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-d6fc45fc5-9t7hk"]
Apr 23 17:55:49.211280 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.211223 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-75587bd455-7hdwh"]
Apr 23 17:55:49.211280 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.211230 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-9d4b6777b-dzhrv"]
Apr 23 17:55:49.211280 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.211239 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-vtfgg"]
Apr 23 17:55:49.211280 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.211250 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-585dfdc468-hd2z2"]
Apr 23 17:55:49.211280 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.211260 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-2t8xp"]
Apr 23 17:55:49.211280 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.211272 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress/router-default-64b76f7768-4tktg"]
Apr 23 17:55:49.211519 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.211283 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-w65xt"]
Apr 23 17:55:49.211519 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.211294 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-69557c4bff-g4kkh"]
Apr 23 17:55:49.211519 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.211307 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-cb95c66f6-2xz9v"]
Apr 23 17:55:49.211519 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.211323 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-69557c4bff-g4kkh"
Apr 23 17:55:49.212966 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.212950 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-private-configuration\""
Apr 23 17:55:49.213098 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.213082 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Apr 23 17:55:49.213165 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.213127 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-f7p8x\""
Apr 23 17:55:49.213165 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.213151 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Apr 23 17:55:49.218510 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.218491 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-c94n2"]
Apr 23 17:55:49.218994 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.218962 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-cb95c66f6-2xz9v"
Apr 23 17:55:49.222533 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.222510 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Apr 23 17:55:49.222629 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.222601 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Apr 23 17:55:49.223056 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.223038 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Apr 23 17:55:49.223224 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.223104 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"default-dockercfg-gqj2q\""
Apr 23 17:55:49.232967 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.232949 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-8894fc9bd-b5c9q"]
Apr 23 17:55:49.233296 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.233276 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-c94n2"
Apr 23 17:55:49.235207 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.235187 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-wh8vz\""
Apr 23 17:55:49.235325 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.235207 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Apr 23 17:55:49.235599 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.235582 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Apr 23 17:55:49.235669 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.235639 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Apr 23 17:55:49.248595 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.248575 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-cb95c66f6-2xz9v"]
Apr 23 17:55:49.248595 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.248598 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-c94n2"]
Apr 23 17:55:49.248727 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.248610 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-8894fc9bd-b5c9q"]
Apr 23 17:55:49.248727 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.248627 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-l82lj"]
Apr 23 17:55:49.248727 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.248705 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-8894fc9bd-b5c9q"
Apr 23 17:55:49.250384 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.250364 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"network-diagnostics-dockercfg-jsb7w\""
Apr 23 17:55:49.260500 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.260482 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-mfdbd"]
Apr 23 17:55:49.260615 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.260597 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-l82lj"
Apr 23 17:55:49.262368 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.262345 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\""
Apr 23 17:55:49.273319 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.273301 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mfdbd"]
Apr 23 17:55:49.273433 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.273385 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-mfdbd"
Apr 23 17:55:49.278287 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.278269 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Apr 23 17:55:49.278562 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.278545 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Apr 23 17:55:49.278650 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.278566 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\""
Apr 23 17:55:49.278650 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.278595 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\""
Apr 23 17:55:49.278769 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.278685 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-4gw5f\""
Apr 23 17:55:49.288899 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.288851 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dz9db\" (UniqueName: \"kubernetes.io/projected/dcab8057-5f29-4508-9628-e8ee8882286b-kube-api-access-dz9db\") pod \"cluster-monitoring-operator-75587bd455-7hdwh\" (UID: \"dcab8057-5f29-4508-9628-e8ee8882286b\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-7hdwh"
Apr 23 17:55:49.288899 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.288892 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37d5c9de-eb28-492f-8280-8ed7c5e432be-trusted-ca\") pod \"image-registry-69557c4bff-g4kkh\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") " pod="openshift-image-registry/image-registry-69557c4bff-g4kkh"
Apr 23 17:55:49.289048 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.288916 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/daa6f738-c430-4a35-9826-ea29f862b6fe-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-l82lj\" (UID: \"daa6f738-c430-4a35-9826-ea29f862b6fe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l82lj"
Apr 23 17:55:49.289048 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.288945 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7pbm\" (UniqueName: \"kubernetes.io/projected/d39ff941-5050-456f-862b-ec6962d9c97c-kube-api-access-l7pbm\") pod \"cluster-samples-operator-6dc5bdb6b4-w65xt\" (UID: \"d39ff941-5050-456f-862b-ec6962d9c97c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-w65xt"
Apr 23 17:55:49.289048 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.288972 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/72bc8f52-06f5-4c26-b9dc-1db461cbb3cd-networking-console-plugin-cert\") pod \"networking-console-plugin-cb95c66f6-2xz9v\" (UID: \"72bc8f52-06f5-4c26-b9dc-1db461cbb3cd\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-2xz9v"
Apr 23 17:55:49.289048 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.288999 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-bound-sa-token\") pod \"image-registry-69557c4bff-g4kkh\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") " pod="openshift-image-registry/image-registry-69557c4bff-g4kkh"
Apr 23 17:55:49.289048 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.289034 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f006f82-b551-4a19-b684-091814d45d54-serving-cert\") pod \"console-operator-9d4b6777b-dzhrv\" (UID: \"3f006f82-b551-4a19-b684-091814d45d54\") " pod="openshift-console-operator/console-operator-9d4b6777b-dzhrv"
Apr 23 17:55:49.289293 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.289080 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9384d4c-b454-4102-b989-7bd167cee9f4-service-ca-bundle\") pod \"router-default-64b76f7768-4tktg\" (UID: \"f9384d4c-b454-4102-b989-7bd167cee9f4\") " pod="openshift-ingress/router-default-64b76f7768-4tktg"
Apr 23 17:55:49.289293 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.289119 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9384d4c-b454-4102-b989-7bd167cee9f4-metrics-certs\") pod \"router-default-64b76f7768-4tktg\" (UID: \"f9384d4c-b454-4102-b989-7bd167cee9f4\") " pod="openshift-ingress/router-default-64b76f7768-4tktg"
Apr 23 17:55:49.289293 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.289150 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba-snapshots\") pod \"insights-operator-585dfdc468-hd2z2\" (UID: \"1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba\") " pod="openshift-insights/insights-operator-585dfdc468-hd2z2"
Apr 23 17:55:49.289293 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.289175 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f006f82-b551-4a19-b684-091814d45d54-trusted-ca\") pod \"console-operator-9d4b6777b-dzhrv\" (UID: \"3f006f82-b551-4a19-b684-091814d45d54\") " pod="openshift-console-operator/console-operator-9d4b6777b-dzhrv"
Apr 23 17:55:49.289293 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:49.289212 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9384d4c-b454-4102-b989-7bd167cee9f4-service-ca-bundle podName:f9384d4c-b454-4102-b989-7bd167cee9f4 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:49.789195738 +0000 UTC m=+205.729209867 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/f9384d4c-b454-4102-b989-7bd167cee9f4-service-ca-bundle") pod "router-default-64b76f7768-4tktg" (UID: "f9384d4c-b454-4102-b989-7bd167cee9f4") : configmap references non-existent config key: service-ca.crt
Apr 23 17:55:49.289293 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.289232 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d78b9e26-7f79-4f68-ad87-4e237752c338-serving-cert\") pod \"service-ca-operator-d6fc45fc5-9t7hk\" (UID: \"d78b9e26-7f79-4f68-ad87-4e237752c338\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-9t7hk"
Apr 23 17:55:49.289293 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.289259 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xqg6r\" (UniqueName: \"kubernetes.io/projected/d78b9e26-7f79-4f68-ad87-4e237752c338-kube-api-access-xqg6r\") pod \"service-ca-operator-d6fc45fc5-9t7hk\" (UID: \"d78b9e26-7f79-4f68-ad87-4e237752c338\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-9t7hk"
Apr 23 17:55:49.289293 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.289279 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xg2rq\" (UniqueName: \"kubernetes.io/projected/3f006f82-b551-4a19-b684-091814d45d54-kube-api-access-xg2rq\") pod \"console-operator-9d4b6777b-dzhrv\" (UID: \"3f006f82-b551-4a19-b684-091814d45d54\") " 
pod="openshift-console-operator/console-operator-9d4b6777b-dzhrv" Apr 23 17:55:49.289293 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.289299 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba-trusted-ca-bundle\") pod \"insights-operator-585dfdc468-hd2z2\" (UID: \"1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba\") " pod="openshift-insights/insights-operator-585dfdc468-hd2z2" Apr 23 17:55:49.289742 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.289318 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp77k\" (UniqueName: \"kubernetes.io/projected/25d8ee46-11e2-460e-9417-93a2def9b519-kube-api-access-vp77k\") pod \"kube-storage-version-migrator-operator-6769c5d45-vtfgg\" (UID: \"25d8ee46-11e2-460e-9417-93a2def9b519\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-vtfgg" Apr 23 17:55:49.289742 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.289335 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4w5z\" (UniqueName: \"kubernetes.io/projected/b7a4b38b-7cea-453f-b021-f118b3260fb4-kube-api-access-k4w5z\") pod \"network-check-source-8894fc9bd-b5c9q\" (UID: \"b7a4b38b-7cea-453f-b021-f118b3260fb4\") " pod="openshift-network-diagnostics/network-check-source-8894fc9bd-b5c9q" Apr 23 17:55:49.289742 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.289359 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba-service-ca-bundle\") pod \"insights-operator-585dfdc468-hd2z2\" (UID: \"1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba\") " pod="openshift-insights/insights-operator-585dfdc468-hd2z2" Apr 23 
17:55:49.289742 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.289452 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/37d5c9de-eb28-492f-8280-8ed7c5e432be-registry-certificates\") pod \"image-registry-69557c4bff-g4kkh\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") " pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" Apr 23 17:55:49.289742 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.289484 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tt5q\" (UniqueName: \"kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-kube-api-access-7tt5q\") pod \"image-registry-69557c4bff-g4kkh\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") " pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" Apr 23 17:55:49.289742 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.289518 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25d8ee46-11e2-460e-9417-93a2def9b519-serving-cert\") pod \"kube-storage-version-migrator-operator-6769c5d45-vtfgg\" (UID: \"25d8ee46-11e2-460e-9417-93a2def9b519\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-vtfgg" Apr 23 17:55:49.289742 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:49.289633 2570 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found Apr 23 17:55:49.290104 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.289908 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f9384d4c-b454-4102-b989-7bd167cee9f4-stats-auth\") pod \"router-default-64b76f7768-4tktg\" (UID: \"f9384d4c-b454-4102-b989-7bd167cee9f4\") 
" pod="openshift-ingress/router-default-64b76f7768-4tktg" Apr 23 17:55:49.290158 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.290115 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f006f82-b551-4a19-b684-091814d45d54-trusted-ca\") pod \"console-operator-9d4b6777b-dzhrv\" (UID: \"3f006f82-b551-4a19-b684-091814d45d54\") " pod="openshift-console-operator/console-operator-9d4b6777b-dzhrv" Apr 23 17:55:49.290284 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:49.290262 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9384d4c-b454-4102-b989-7bd167cee9f4-metrics-certs podName:f9384d4c-b454-4102-b989-7bd167cee9f4 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:49.790240216 +0000 UTC m=+205.730254347 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f9384d4c-b454-4102-b989-7bd167cee9f4-metrics-certs") pod "router-default-64b76f7768-4tktg" (UID: "f9384d4c-b454-4102-b989-7bd167cee9f4") : secret "router-metrics-certs-default" not found Apr 23 17:55:49.290398 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.290339 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dcab8057-5f29-4508-9628-e8ee8882286b-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-75587bd455-7hdwh\" (UID: \"dcab8057-5f29-4508-9628-e8ee8882286b\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-7hdwh" Apr 23 17:55:49.290398 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.290379 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/37d5c9de-eb28-492f-8280-8ed7c5e432be-image-registry-private-configuration\") pod 
\"image-registry-69557c4bff-g4kkh\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") " pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" Apr 23 17:55:49.290636 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:49.290618 2570 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Apr 23 17:55:49.290749 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.290644 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/37d5c9de-eb28-492f-8280-8ed7c5e432be-ca-trust-extracted\") pod \"image-registry-69557c4bff-g4kkh\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") " pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" Apr 23 17:55:49.290899 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.290880 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrwm4\" (UniqueName: \"kubernetes.io/projected/1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba-kube-api-access-zrwm4\") pod \"insights-operator-585dfdc468-hd2z2\" (UID: \"1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba\") " pod="openshift-insights/insights-operator-585dfdc468-hd2z2" Apr 23 17:55:49.291008 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:49.290994 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dcab8057-5f29-4508-9628-e8ee8882286b-cluster-monitoring-operator-tls podName:dcab8057-5f29-4508-9628-e8ee8882286b nodeName:}" failed. No retries permitted until 2026-04-23 17:55:49.790968845 +0000 UTC m=+205.730982979 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/dcab8057-5f29-4508-9628-e8ee8882286b-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-75587bd455-7hdwh" (UID: "dcab8057-5f29-4508-9628-e8ee8882286b") : secret "cluster-monitoring-operator-tls" not found Apr 23 17:55:49.291154 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.291125 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7wdfl\" (UniqueName: \"kubernetes.io/projected/f9384d4c-b454-4102-b989-7bd167cee9f4-kube-api-access-7wdfl\") pod \"router-default-64b76f7768-4tktg\" (UID: \"f9384d4c-b454-4102-b989-7bd167cee9f4\") " pod="openshift-ingress/router-default-64b76f7768-4tktg" Apr 23 17:55:49.291243 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.291165 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/37d5c9de-eb28-492f-8280-8ed7c5e432be-installation-pull-secrets\") pod \"image-registry-69557c4bff-g4kkh\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") " pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" Apr 23 17:55:49.291243 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.291193 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/daa6f738-c430-4a35-9826-ea29f862b6fe-ready\") pod \"cni-sysctl-allowlist-ds-l82lj\" (UID: \"daa6f738-c430-4a35-9826-ea29f862b6fe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l82lj" Apr 23 17:55:49.291243 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.291220 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0da25b59-22b1-4319-b795-3e7d2bc7db04-cert\") pod \"ingress-canary-c94n2\" (UID: 
\"0da25b59-22b1-4319-b795-3e7d2bc7db04\") " pod="openshift-ingress-canary/ingress-canary-c94n2" Apr 23 17:55:49.292170 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.291245 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hv6h\" (UniqueName: \"kubernetes.io/projected/0da25b59-22b1-4319-b795-3e7d2bc7db04-kube-api-access-6hv6h\") pod \"ingress-canary-c94n2\" (UID: \"0da25b59-22b1-4319-b795-3e7d2bc7db04\") " pod="openshift-ingress-canary/ingress-canary-c94n2" Apr 23 17:55:49.292170 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.291270 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/72bc8f52-06f5-4c26-b9dc-1db461cbb3cd-nginx-conf\") pod \"networking-console-plugin-cb95c66f6-2xz9v\" (UID: \"72bc8f52-06f5-4c26-b9dc-1db461cbb3cd\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-2xz9v" Apr 23 17:55:49.292170 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.291312 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpwv4\" (UniqueName: \"kubernetes.io/projected/daa6f738-c430-4a35-9826-ea29f862b6fe-kube-api-access-jpwv4\") pod \"cni-sysctl-allowlist-ds-l82lj\" (UID: \"daa6f738-c430-4a35-9826-ea29f862b6fe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l82lj" Apr 23 17:55:49.292170 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.291342 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d39ff941-5050-456f-862b-ec6962d9c97c-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-w65xt\" (UID: \"d39ff941-5050-456f-862b-ec6962d9c97c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-w65xt" Apr 23 17:55:49.292170 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.291379 
2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f006f82-b551-4a19-b684-091814d45d54-config\") pod \"console-operator-9d4b6777b-dzhrv\" (UID: \"3f006f82-b551-4a19-b684-091814d45d54\") " pod="openshift-console-operator/console-operator-9d4b6777b-dzhrv" Apr 23 17:55:49.292170 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:49.291444 2570 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 23 17:55:49.292170 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.291472 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-registry-tls\") pod \"image-registry-69557c4bff-g4kkh\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") " pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" Apr 23 17:55:49.292170 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:49.291490 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d39ff941-5050-456f-862b-ec6962d9c97c-samples-operator-tls podName:d39ff941-5050-456f-862b-ec6962d9c97c nodeName:}" failed. No retries permitted until 2026-04-23 17:55:49.791475064 +0000 UTC m=+205.731489182 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/d39ff941-5050-456f-862b-ec6962d9c97c-samples-operator-tls") pod "cluster-samples-operator-6dc5bdb6b4-w65xt" (UID: "d39ff941-5050-456f-862b-ec6962d9c97c") : secret "samples-operator-tls" not found Apr 23 17:55:49.292170 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.291556 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f9384d4c-b454-4102-b989-7bd167cee9f4-default-certificate\") pod \"router-default-64b76f7768-4tktg\" (UID: \"f9384d4c-b454-4102-b989-7bd167cee9f4\") " pod="openshift-ingress/router-default-64b76f7768-4tktg" Apr 23 17:55:49.292170 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.291589 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/daa6f738-c430-4a35-9826-ea29f862b6fe-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-l82lj\" (UID: \"daa6f738-c430-4a35-9826-ea29f862b6fe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l82lj" Apr 23 17:55:49.292170 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.291615 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba-tmp\") pod \"insights-operator-585dfdc468-hd2z2\" (UID: \"1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba\") " pod="openshift-insights/insights-operator-585dfdc468-hd2z2" Apr 23 17:55:49.292170 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.291646 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/dcab8057-5f29-4508-9628-e8ee8882286b-telemetry-config\") pod \"cluster-monitoring-operator-75587bd455-7hdwh\" (UID: \"dcab8057-5f29-4508-9628-e8ee8882286b\") " 
pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-7hdwh" Apr 23 17:55:49.292170 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.291671 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba-serving-cert\") pod \"insights-operator-585dfdc468-hd2z2\" (UID: \"1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba\") " pod="openshift-insights/insights-operator-585dfdc468-hd2z2" Apr 23 17:55:49.292170 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.291698 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d78b9e26-7f79-4f68-ad87-4e237752c338-config\") pod \"service-ca-operator-d6fc45fc5-9t7hk\" (UID: \"d78b9e26-7f79-4f68-ad87-4e237752c338\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-9t7hk" Apr 23 17:55:49.292170 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.291725 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nzxn\" (UniqueName: \"kubernetes.io/projected/128cf630-366d-4caf-a72e-dec2e74c92ba-kube-api-access-9nzxn\") pod \"volume-data-source-validator-7c6cbb6c87-2t8xp\" (UID: \"128cf630-366d-4caf-a72e-dec2e74c92ba\") " pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-2t8xp" Apr 23 17:55:49.292980 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.291753 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25d8ee46-11e2-460e-9417-93a2def9b519-config\") pod \"kube-storage-version-migrator-operator-6769c5d45-vtfgg\" (UID: \"25d8ee46-11e2-460e-9417-93a2def9b519\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-vtfgg" Apr 23 17:55:49.292980 ip-10-0-141-209 
kubenswrapper[2570]: I0423 17:55:49.292305 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f006f82-b551-4a19-b684-091814d45d54-serving-cert\") pod \"console-operator-9d4b6777b-dzhrv\" (UID: \"3f006f82-b551-4a19-b684-091814d45d54\") " pod="openshift-console-operator/console-operator-9d4b6777b-dzhrv" Apr 23 17:55:49.292980 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.292375 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d78b9e26-7f79-4f68-ad87-4e237752c338-config\") pod \"service-ca-operator-d6fc45fc5-9t7hk\" (UID: \"d78b9e26-7f79-4f68-ad87-4e237752c338\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-9t7hk" Apr 23 17:55:49.292980 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.292567 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/dcab8057-5f29-4508-9628-e8ee8882286b-telemetry-config\") pod \"cluster-monitoring-operator-75587bd455-7hdwh\" (UID: \"dcab8057-5f29-4508-9628-e8ee8882286b\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-7hdwh" Apr 23 17:55:49.292980 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.292656 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f9384d4c-b454-4102-b989-7bd167cee9f4-stats-auth\") pod \"router-default-64b76f7768-4tktg\" (UID: \"f9384d4c-b454-4102-b989-7bd167cee9f4\") " pod="openshift-ingress/router-default-64b76f7768-4tktg" Apr 23 17:55:49.293230 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.293104 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f006f82-b551-4a19-b684-091814d45d54-config\") pod \"console-operator-9d4b6777b-dzhrv\" (UID: \"3f006f82-b551-4a19-b684-091814d45d54\") " 
pod="openshift-console-operator/console-operator-9d4b6777b-dzhrv" Apr 23 17:55:49.293481 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.293463 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d78b9e26-7f79-4f68-ad87-4e237752c338-serving-cert\") pod \"service-ca-operator-d6fc45fc5-9t7hk\" (UID: \"d78b9e26-7f79-4f68-ad87-4e237752c338\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-9t7hk" Apr 23 17:55:49.294683 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.294663 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f9384d4c-b454-4102-b989-7bd167cee9f4-default-certificate\") pod \"router-default-64b76f7768-4tktg\" (UID: \"f9384d4c-b454-4102-b989-7bd167cee9f4\") " pod="openshift-ingress/router-default-64b76f7768-4tktg" Apr 23 17:55:49.301247 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.301181 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg2rq\" (UniqueName: \"kubernetes.io/projected/3f006f82-b551-4a19-b684-091814d45d54-kube-api-access-xg2rq\") pod \"console-operator-9d4b6777b-dzhrv\" (UID: \"3f006f82-b551-4a19-b684-091814d45d54\") " pod="openshift-console-operator/console-operator-9d4b6777b-dzhrv" Apr 23 17:55:49.301525 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.301508 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7pbm\" (UniqueName: \"kubernetes.io/projected/d39ff941-5050-456f-862b-ec6962d9c97c-kube-api-access-l7pbm\") pod \"cluster-samples-operator-6dc5bdb6b4-w65xt\" (UID: \"d39ff941-5050-456f-862b-ec6962d9c97c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-w65xt" Apr 23 17:55:49.302771 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.302748 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz9db\" 
(UniqueName: \"kubernetes.io/projected/dcab8057-5f29-4508-9628-e8ee8882286b-kube-api-access-dz9db\") pod \"cluster-monitoring-operator-75587bd455-7hdwh\" (UID: \"dcab8057-5f29-4508-9628-e8ee8882286b\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-7hdwh" Apr 23 17:55:49.303319 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.303302 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wdfl\" (UniqueName: \"kubernetes.io/projected/f9384d4c-b454-4102-b989-7bd167cee9f4-kube-api-access-7wdfl\") pod \"router-default-64b76f7768-4tktg\" (UID: \"f9384d4c-b454-4102-b989-7bd167cee9f4\") " pod="openshift-ingress/router-default-64b76f7768-4tktg" Apr 23 17:55:49.303459 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.303441 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqg6r\" (UniqueName: \"kubernetes.io/projected/d78b9e26-7f79-4f68-ad87-4e237752c338-kube-api-access-xqg6r\") pod \"service-ca-operator-d6fc45fc5-9t7hk\" (UID: \"d78b9e26-7f79-4f68-ad87-4e237752c338\") " pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-9t7hk" Apr 23 17:55:49.393087 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.393063 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba-service-ca-bundle\") pod \"insights-operator-585dfdc468-hd2z2\" (UID: \"1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba\") " pod="openshift-insights/insights-operator-585dfdc468-hd2z2" Apr 23 17:55:49.393261 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.393102 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/912b083f-5a59-4119-91fd-47ba37c5ed53-tmp-dir\") pod \"dns-default-mfdbd\" (UID: \"912b083f-5a59-4119-91fd-47ba37c5ed53\") " pod="openshift-dns/dns-default-mfdbd" Apr 23 
17:55:49.393261 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.393133 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/37d5c9de-eb28-492f-8280-8ed7c5e432be-registry-certificates\") pod \"image-registry-69557c4bff-g4kkh\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") " pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" Apr 23 17:55:49.393261 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.393161 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7tt5q\" (UniqueName: \"kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-kube-api-access-7tt5q\") pod \"image-registry-69557c4bff-g4kkh\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") " pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" Apr 23 17:55:49.393261 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.393187 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25d8ee46-11e2-460e-9417-93a2def9b519-serving-cert\") pod \"kube-storage-version-migrator-operator-6769c5d45-vtfgg\" (UID: \"25d8ee46-11e2-460e-9417-93a2def9b519\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-vtfgg" Apr 23 17:55:49.393261 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.393250 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/37d5c9de-eb28-492f-8280-8ed7c5e432be-image-registry-private-configuration\") pod \"image-registry-69557c4bff-g4kkh\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") " pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" Apr 23 17:55:49.393612 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.393278 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/37d5c9de-eb28-492f-8280-8ed7c5e432be-ca-trust-extracted\") pod \"image-registry-69557c4bff-g4kkh\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") " pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" Apr 23 17:55:49.393612 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.393304 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zrwm4\" (UniqueName: \"kubernetes.io/projected/1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba-kube-api-access-zrwm4\") pod \"insights-operator-585dfdc468-hd2z2\" (UID: \"1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba\") " pod="openshift-insights/insights-operator-585dfdc468-hd2z2" Apr 23 17:55:49.393612 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.393336 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/37d5c9de-eb28-492f-8280-8ed7c5e432be-installation-pull-secrets\") pod \"image-registry-69557c4bff-g4kkh\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") " pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" Apr 23 17:55:49.393612 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.393361 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/daa6f738-c430-4a35-9826-ea29f862b6fe-ready\") pod \"cni-sysctl-allowlist-ds-l82lj\" (UID: \"daa6f738-c430-4a35-9826-ea29f862b6fe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l82lj" Apr 23 17:55:49.393612 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.393434 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0da25b59-22b1-4319-b795-3e7d2bc7db04-cert\") pod \"ingress-canary-c94n2\" (UID: \"0da25b59-22b1-4319-b795-3e7d2bc7db04\") " pod="openshift-ingress-canary/ingress-canary-c94n2" Apr 23 17:55:49.393612 ip-10-0-141-209 
kubenswrapper[2570]: I0423 17:55:49.393464 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6hv6h\" (UniqueName: \"kubernetes.io/projected/0da25b59-22b1-4319-b795-3e7d2bc7db04-kube-api-access-6hv6h\") pod \"ingress-canary-c94n2\" (UID: \"0da25b59-22b1-4319-b795-3e7d2bc7db04\") " pod="openshift-ingress-canary/ingress-canary-c94n2" Apr 23 17:55:49.393612 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.393492 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/72bc8f52-06f5-4c26-b9dc-1db461cbb3cd-nginx-conf\") pod \"networking-console-plugin-cb95c66f6-2xz9v\" (UID: \"72bc8f52-06f5-4c26-b9dc-1db461cbb3cd\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-2xz9v" Apr 23 17:55:49.393612 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.393519 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jpwv4\" (UniqueName: \"kubernetes.io/projected/daa6f738-c430-4a35-9826-ea29f862b6fe-kube-api-access-jpwv4\") pod \"cni-sysctl-allowlist-ds-l82lj\" (UID: \"daa6f738-c430-4a35-9826-ea29f862b6fe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l82lj" Apr 23 17:55:49.393612 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.393562 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgk2w\" (UniqueName: \"kubernetes.io/projected/912b083f-5a59-4119-91fd-47ba37c5ed53-kube-api-access-wgk2w\") pod \"dns-default-mfdbd\" (UID: \"912b083f-5a59-4119-91fd-47ba37c5ed53\") " pod="openshift-dns/dns-default-mfdbd" Apr 23 17:55:49.393612 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.393590 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-registry-tls\") pod \"image-registry-69557c4bff-g4kkh\" (UID: 
\"37d5c9de-eb28-492f-8280-8ed7c5e432be\") " pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" Apr 23 17:55:49.394108 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.393621 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/daa6f738-c430-4a35-9826-ea29f862b6fe-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-l82lj\" (UID: \"daa6f738-c430-4a35-9826-ea29f862b6fe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l82lj" Apr 23 17:55:49.394108 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.393646 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/912b083f-5a59-4119-91fd-47ba37c5ed53-metrics-tls\") pod \"dns-default-mfdbd\" (UID: \"912b083f-5a59-4119-91fd-47ba37c5ed53\") " pod="openshift-dns/dns-default-mfdbd" Apr 23 17:55:49.394108 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.393673 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba-tmp\") pod \"insights-operator-585dfdc468-hd2z2\" (UID: \"1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba\") " pod="openshift-insights/insights-operator-585dfdc468-hd2z2" Apr 23 17:55:49.394108 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.393704 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba-serving-cert\") pod \"insights-operator-585dfdc468-hd2z2\" (UID: \"1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba\") " pod="openshift-insights/insights-operator-585dfdc468-hd2z2" Apr 23 17:55:49.394108 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.393723 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba-service-ca-bundle\") pod \"insights-operator-585dfdc468-hd2z2\" (UID: \"1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba\") " pod="openshift-insights/insights-operator-585dfdc468-hd2z2" Apr 23 17:55:49.394108 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.393732 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9nzxn\" (UniqueName: \"kubernetes.io/projected/128cf630-366d-4caf-a72e-dec2e74c92ba-kube-api-access-9nzxn\") pod \"volume-data-source-validator-7c6cbb6c87-2t8xp\" (UID: \"128cf630-366d-4caf-a72e-dec2e74c92ba\") " pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-2t8xp" Apr 23 17:55:49.394108 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.393748 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/37d5c9de-eb28-492f-8280-8ed7c5e432be-registry-certificates\") pod \"image-registry-69557c4bff-g4kkh\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") " pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" Apr 23 17:55:49.394108 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.393757 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25d8ee46-11e2-460e-9417-93a2def9b519-config\") pod \"kube-storage-version-migrator-operator-6769c5d45-vtfgg\" (UID: \"25d8ee46-11e2-460e-9417-93a2def9b519\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-vtfgg" Apr 23 17:55:49.394108 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.393798 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37d5c9de-eb28-492f-8280-8ed7c5e432be-trusted-ca\") pod \"image-registry-69557c4bff-g4kkh\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") " 
pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" Apr 23 17:55:49.394108 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.393823 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/daa6f738-c430-4a35-9826-ea29f862b6fe-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-l82lj\" (UID: \"daa6f738-c430-4a35-9826-ea29f862b6fe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l82lj" Apr 23 17:55:49.394108 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:49.393847 2570 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 23 17:55:49.394108 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:49.393894 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0da25b59-22b1-4319-b795-3e7d2bc7db04-cert podName:0da25b59-22b1-4319-b795-3e7d2bc7db04 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:49.893878275 +0000 UTC m=+205.833892412 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0da25b59-22b1-4319-b795-3e7d2bc7db04-cert") pod "ingress-canary-c94n2" (UID: "0da25b59-22b1-4319-b795-3e7d2bc7db04") : secret "canary-serving-cert" not found Apr 23 17:55:49.394108 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.393891 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/daa6f738-c430-4a35-9826-ea29f862b6fe-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-l82lj\" (UID: \"daa6f738-c430-4a35-9826-ea29f862b6fe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l82lj" Apr 23 17:55:49.394799 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.394238 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/37d5c9de-eb28-492f-8280-8ed7c5e432be-ca-trust-extracted\") pod \"image-registry-69557c4bff-g4kkh\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") " pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" Apr 23 17:55:49.394799 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.394328 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/daa6f738-c430-4a35-9826-ea29f862b6fe-ready\") pod \"cni-sysctl-allowlist-ds-l82lj\" (UID: \"daa6f738-c430-4a35-9826-ea29f862b6fe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l82lj" Apr 23 17:55:49.394799 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.394378 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/72bc8f52-06f5-4c26-b9dc-1db461cbb3cd-networking-console-plugin-cert\") pod \"networking-console-plugin-cb95c66f6-2xz9v\" (UID: \"72bc8f52-06f5-4c26-b9dc-1db461cbb3cd\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-2xz9v" Apr 23 17:55:49.394799 ip-10-0-141-209 
kubenswrapper[2570]: I0423 17:55:49.394434 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/912b083f-5a59-4119-91fd-47ba37c5ed53-config-volume\") pod \"dns-default-mfdbd\" (UID: \"912b083f-5a59-4119-91fd-47ba37c5ed53\") " pod="openshift-dns/dns-default-mfdbd" Apr 23 17:55:49.394799 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.394468 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-bound-sa-token\") pod \"image-registry-69557c4bff-g4kkh\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") " pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" Apr 23 17:55:49.394799 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.394518 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25d8ee46-11e2-460e-9417-93a2def9b519-config\") pod \"kube-storage-version-migrator-operator-6769c5d45-vtfgg\" (UID: \"25d8ee46-11e2-460e-9417-93a2def9b519\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-vtfgg" Apr 23 17:55:49.394799 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.394546 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba-snapshots\") pod \"insights-operator-585dfdc468-hd2z2\" (UID: \"1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba\") " pod="openshift-insights/insights-operator-585dfdc468-hd2z2" Apr 23 17:55:49.394799 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:49.394572 2570 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 23 17:55:49.394799 ip-10-0-141-209 kubenswrapper[2570]: I0423 
17:55:49.394589 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba-trusted-ca-bundle\") pod \"insights-operator-585dfdc468-hd2z2\" (UID: \"1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba\") " pod="openshift-insights/insights-operator-585dfdc468-hd2z2" Apr 23 17:55:49.394799 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.394609 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/daa6f738-c430-4a35-9826-ea29f862b6fe-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-l82lj\" (UID: \"daa6f738-c430-4a35-9826-ea29f862b6fe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l82lj" Apr 23 17:55:49.394799 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.394628 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vp77k\" (UniqueName: \"kubernetes.io/projected/25d8ee46-11e2-460e-9417-93a2def9b519-kube-api-access-vp77k\") pod \"kube-storage-version-migrator-operator-6769c5d45-vtfgg\" (UID: \"25d8ee46-11e2-460e-9417-93a2def9b519\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-vtfgg" Apr 23 17:55:49.394799 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:49.394656 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72bc8f52-06f5-4c26-b9dc-1db461cbb3cd-networking-console-plugin-cert podName:72bc8f52-06f5-4c26-b9dc-1db461cbb3cd nodeName:}" failed. No retries permitted until 2026-04-23 17:55:49.89463099 +0000 UTC m=+205.834645126 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/72bc8f52-06f5-4c26-b9dc-1db461cbb3cd-networking-console-plugin-cert") pod "networking-console-plugin-cb95c66f6-2xz9v" (UID: "72bc8f52-06f5-4c26-b9dc-1db461cbb3cd") : secret "networking-console-plugin-cert" not found Apr 23 17:55:49.394799 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:49.394692 2570 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 23 17:55:49.394799 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.394699 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k4w5z\" (UniqueName: \"kubernetes.io/projected/b7a4b38b-7cea-453f-b021-f118b3260fb4-kube-api-access-k4w5z\") pod \"network-check-source-8894fc9bd-b5c9q\" (UID: \"b7a4b38b-7cea-453f-b021-f118b3260fb4\") " pod="openshift-network-diagnostics/network-check-source-8894fc9bd-b5c9q" Apr 23 17:55:49.394799 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:49.394704 2570 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-69557c4bff-g4kkh: secret "image-registry-tls" not found Apr 23 17:55:49.394799 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:49.394784 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-registry-tls podName:37d5c9de-eb28-492f-8280-8ed7c5e432be nodeName:}" failed. No retries permitted until 2026-04-23 17:55:49.89477166 +0000 UTC m=+205.834785807 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-registry-tls") pod "image-registry-69557c4bff-g4kkh" (UID: "37d5c9de-eb28-492f-8280-8ed7c5e432be") : secret "image-registry-tls" not found Apr 23 17:55:49.395628 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.394946 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba-tmp\") pod \"insights-operator-585dfdc468-hd2z2\" (UID: \"1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba\") " pod="openshift-insights/insights-operator-585dfdc468-hd2z2" Apr 23 17:55:49.395628 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.395203 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37d5c9de-eb28-492f-8280-8ed7c5e432be-trusted-ca\") pod \"image-registry-69557c4bff-g4kkh\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") " pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" Apr 23 17:55:49.395628 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.395492 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba-snapshots\") pod \"insights-operator-585dfdc468-hd2z2\" (UID: \"1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba\") " pod="openshift-insights/insights-operator-585dfdc468-hd2z2" Apr 23 17:55:49.395776 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.395694 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/72bc8f52-06f5-4c26-b9dc-1db461cbb3cd-nginx-conf\") pod \"networking-console-plugin-cb95c66f6-2xz9v\" (UID: \"72bc8f52-06f5-4c26-b9dc-1db461cbb3cd\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-2xz9v" Apr 23 17:55:49.395828 ip-10-0-141-209 kubenswrapper[2570]: I0423 
17:55:49.395779 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba-trusted-ca-bundle\") pod \"insights-operator-585dfdc468-hd2z2\" (UID: \"1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba\") " pod="openshift-insights/insights-operator-585dfdc468-hd2z2" Apr 23 17:55:49.396237 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.396219 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25d8ee46-11e2-460e-9417-93a2def9b519-serving-cert\") pod \"kube-storage-version-migrator-operator-6769c5d45-vtfgg\" (UID: \"25d8ee46-11e2-460e-9417-93a2def9b519\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-vtfgg" Apr 23 17:55:49.396856 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.396819 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/37d5c9de-eb28-492f-8280-8ed7c5e432be-installation-pull-secrets\") pod \"image-registry-69557c4bff-g4kkh\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") " pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" Apr 23 17:55:49.397066 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.397049 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba-serving-cert\") pod \"insights-operator-585dfdc468-hd2z2\" (UID: \"1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba\") " pod="openshift-insights/insights-operator-585dfdc468-hd2z2" Apr 23 17:55:49.397647 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.397625 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-private-configuration\" (UniqueName: 
\"kubernetes.io/secret/37d5c9de-eb28-492f-8280-8ed7c5e432be-image-registry-private-configuration\") pod \"image-registry-69557c4bff-g4kkh\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") " pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" Apr 23 17:55:49.406973 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.406954 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpwv4\" (UniqueName: \"kubernetes.io/projected/daa6f738-c430-4a35-9826-ea29f862b6fe-kube-api-access-jpwv4\") pod \"cni-sysctl-allowlist-ds-l82lj\" (UID: \"daa6f738-c430-4a35-9826-ea29f862b6fe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l82lj" Apr 23 17:55:49.407068 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.407034 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hv6h\" (UniqueName: \"kubernetes.io/projected/0da25b59-22b1-4319-b795-3e7d2bc7db04-kube-api-access-6hv6h\") pod \"ingress-canary-c94n2\" (UID: \"0da25b59-22b1-4319-b795-3e7d2bc7db04\") " pod="openshift-ingress-canary/ingress-canary-c94n2" Apr 23 17:55:49.410929 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.410911 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrwm4\" (UniqueName: \"kubernetes.io/projected/1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba-kube-api-access-zrwm4\") pod \"insights-operator-585dfdc468-hd2z2\" (UID: \"1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba\") " pod="openshift-insights/insights-operator-585dfdc468-hd2z2" Apr 23 17:55:49.416055 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.416029 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tt5q\" (UniqueName: \"kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-kube-api-access-7tt5q\") pod \"image-registry-69557c4bff-g4kkh\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") " pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" Apr 23 17:55:49.417128 ip-10-0-141-209 
kubenswrapper[2570]: I0423 17:55:49.417091 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nzxn\" (UniqueName: \"kubernetes.io/projected/128cf630-366d-4caf-a72e-dec2e74c92ba-kube-api-access-9nzxn\") pod \"volume-data-source-validator-7c6cbb6c87-2t8xp\" (UID: \"128cf630-366d-4caf-a72e-dec2e74c92ba\") " pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-2t8xp" Apr 23 17:55:49.418252 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.418228 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4w5z\" (UniqueName: \"kubernetes.io/projected/b7a4b38b-7cea-453f-b021-f118b3260fb4-kube-api-access-k4w5z\") pod \"network-check-source-8894fc9bd-b5c9q\" (UID: \"b7a4b38b-7cea-453f-b021-f118b3260fb4\") " pod="openshift-network-diagnostics/network-check-source-8894fc9bd-b5c9q" Apr 23 17:55:49.421711 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.421695 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-9t7hk" Apr 23 17:55:49.422650 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.422632 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp77k\" (UniqueName: \"kubernetes.io/projected/25d8ee46-11e2-460e-9417-93a2def9b519-kube-api-access-vp77k\") pod \"kube-storage-version-migrator-operator-6769c5d45-vtfgg\" (UID: \"25d8ee46-11e2-460e-9417-93a2def9b519\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-vtfgg" Apr 23 17:55:49.428432 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.428399 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-bound-sa-token\") pod \"image-registry-69557c4bff-g4kkh\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") " pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" Apr 23 17:55:49.456785 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.456761 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-9d4b6777b-dzhrv" Apr 23 17:55:49.483824 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.483438 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-2t8xp" Apr 23 17:55:49.496140 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.495895 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/912b083f-5a59-4119-91fd-47ba37c5ed53-tmp-dir\") pod \"dns-default-mfdbd\" (UID: \"912b083f-5a59-4119-91fd-47ba37c5ed53\") " pod="openshift-dns/dns-default-mfdbd" Apr 23 17:55:49.496140 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.496022 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wgk2w\" (UniqueName: \"kubernetes.io/projected/912b083f-5a59-4119-91fd-47ba37c5ed53-kube-api-access-wgk2w\") pod \"dns-default-mfdbd\" (UID: \"912b083f-5a59-4119-91fd-47ba37c5ed53\") " pod="openshift-dns/dns-default-mfdbd" Apr 23 17:55:49.496140 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.496061 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/912b083f-5a59-4119-91fd-47ba37c5ed53-metrics-tls\") pod \"dns-default-mfdbd\" (UID: \"912b083f-5a59-4119-91fd-47ba37c5ed53\") " pod="openshift-dns/dns-default-mfdbd" Apr 23 17:55:49.496140 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.496113 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/912b083f-5a59-4119-91fd-47ba37c5ed53-config-volume\") pod \"dns-default-mfdbd\" (UID: \"912b083f-5a59-4119-91fd-47ba37c5ed53\") " pod="openshift-dns/dns-default-mfdbd" Apr 23 17:55:49.496429 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.496384 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/912b083f-5a59-4119-91fd-47ba37c5ed53-tmp-dir\") pod \"dns-default-mfdbd\" (UID: \"912b083f-5a59-4119-91fd-47ba37c5ed53\") " 
pod="openshift-dns/dns-default-mfdbd" Apr 23 17:55:49.496634 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:49.496611 2570 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 23 17:55:49.496732 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.496634 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/912b083f-5a59-4119-91fd-47ba37c5ed53-config-volume\") pod \"dns-default-mfdbd\" (UID: \"912b083f-5a59-4119-91fd-47ba37c5ed53\") " pod="openshift-dns/dns-default-mfdbd" Apr 23 17:55:49.496732 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:49.496685 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/912b083f-5a59-4119-91fd-47ba37c5ed53-metrics-tls podName:912b083f-5a59-4119-91fd-47ba37c5ed53 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:49.996664767 +0000 UTC m=+205.936678888 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/912b083f-5a59-4119-91fd-47ba37c5ed53-metrics-tls") pod "dns-default-mfdbd" (UID: "912b083f-5a59-4119-91fd-47ba37c5ed53") : secret "dns-default-metrics-tls" not found Apr 23 17:55:49.503780 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.503749 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-vtfgg" Apr 23 17:55:49.511988 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.511612 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-585dfdc468-hd2z2" Apr 23 17:55:49.513903 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.513878 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgk2w\" (UniqueName: \"kubernetes.io/projected/912b083f-5a59-4119-91fd-47ba37c5ed53-kube-api-access-wgk2w\") pod \"dns-default-mfdbd\" (UID: \"912b083f-5a59-4119-91fd-47ba37c5ed53\") " pod="openshift-dns/dns-default-mfdbd" Apr 23 17:55:49.558624 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.558302 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-8894fc9bd-b5c9q" Apr 23 17:55:49.573490 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.573440 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-d6fc45fc5-9t7hk"] Apr 23 17:55:49.574225 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.573788 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-l82lj" Apr 23 17:55:49.641409 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.641303 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-9d4b6777b-dzhrv"] Apr 23 17:55:49.650314 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:55:49.650281 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f006f82_b551_4a19_b684_091814d45d54.slice/crio-dd2758002ca85d634739d4462a96a9d164dbb147db4633835479483090cd571d WatchSource:0}: Error finding container dd2758002ca85d634739d4462a96a9d164dbb147db4633835479483090cd571d: Status 404 returned error can't find the container with id dd2758002ca85d634739d4462a96a9d164dbb147db4633835479483090cd571d Apr 23 17:55:49.708490 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.702427 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-ktkpp"] Apr 23 17:55:49.729201 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.729003 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-2t8xp"] Apr 23 17:55:49.729364 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.729185 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-ktkpp" Apr 23 17:55:49.736761 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.735839 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-qcv89\"" Apr 23 17:55:49.764120 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.764069 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-585dfdc468-hd2z2"] Apr 23 17:55:49.766589 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.766554 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-vtfgg"] Apr 23 17:55:49.773215 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:55:49.773189 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25d8ee46_11e2_460e_9417_93a2def9b519.slice/crio-79b9812fdb8d9b73ad08a9fb7524cbf71d19201085fa2a83b6d073b3d7f80034 WatchSource:0}: Error finding container 79b9812fdb8d9b73ad08a9fb7524cbf71d19201085fa2a83b6d073b3d7f80034: Status 404 returned error can't find the container with id 79b9812fdb8d9b73ad08a9fb7524cbf71d19201085fa2a83b6d073b3d7f80034 Apr 23 17:55:49.803323 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.803222 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4a1d3fd4-24e2-48b9-866d-797b76b07e9e-tmp-dir\") pod \"node-resolver-ktkpp\" (UID: \"4a1d3fd4-24e2-48b9-866d-797b76b07e9e\") " pod="openshift-dns/node-resolver-ktkpp" Apr 23 17:55:49.803323 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.803301 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9384d4c-b454-4102-b989-7bd167cee9f4-service-ca-bundle\") pod 
\"router-default-64b76f7768-4tktg\" (UID: \"f9384d4c-b454-4102-b989-7bd167cee9f4\") " pod="openshift-ingress/router-default-64b76f7768-4tktg" Apr 23 17:55:49.803504 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.803346 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9384d4c-b454-4102-b989-7bd167cee9f4-metrics-certs\") pod \"router-default-64b76f7768-4tktg\" (UID: \"f9384d4c-b454-4102-b989-7bd167cee9f4\") " pod="openshift-ingress/router-default-64b76f7768-4tktg" Apr 23 17:55:49.803504 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.803404 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvsr5\" (UniqueName: \"kubernetes.io/projected/4a1d3fd4-24e2-48b9-866d-797b76b07e9e-kube-api-access-jvsr5\") pod \"node-resolver-ktkpp\" (UID: \"4a1d3fd4-24e2-48b9-866d-797b76b07e9e\") " pod="openshift-dns/node-resolver-ktkpp" Apr 23 17:55:49.803504 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:49.803463 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9384d4c-b454-4102-b989-7bd167cee9f4-service-ca-bundle podName:f9384d4c-b454-4102-b989-7bd167cee9f4 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:50.803439137 +0000 UTC m=+206.743453269 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/f9384d4c-b454-4102-b989-7bd167cee9f4-service-ca-bundle") pod "router-default-64b76f7768-4tktg" (UID: "f9384d4c-b454-4102-b989-7bd167cee9f4") : configmap references non-existent config key: service-ca.crt Apr 23 17:55:49.803673 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:49.803505 2570 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found Apr 23 17:55:49.803673 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:49.803545 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9384d4c-b454-4102-b989-7bd167cee9f4-metrics-certs podName:f9384d4c-b454-4102-b989-7bd167cee9f4 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:50.803534015 +0000 UTC m=+206.743548134 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f9384d4c-b454-4102-b989-7bd167cee9f4-metrics-certs") pod "router-default-64b76f7768-4tktg" (UID: "f9384d4c-b454-4102-b989-7bd167cee9f4") : secret "router-metrics-certs-default" not found Apr 23 17:55:49.803673 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.803544 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-8894fc9bd-b5c9q"] Apr 23 17:55:49.803673 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.803579 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dcab8057-5f29-4508-9628-e8ee8882286b-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-75587bd455-7hdwh\" (UID: \"dcab8057-5f29-4508-9628-e8ee8882286b\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-7hdwh" Apr 23 17:55:49.803673 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.803642 2570 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d39ff941-5050-456f-862b-ec6962d9c97c-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-w65xt\" (UID: \"d39ff941-5050-456f-862b-ec6962d9c97c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-w65xt" Apr 23 17:55:49.803917 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.803698 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4a1d3fd4-24e2-48b9-866d-797b76b07e9e-hosts-file\") pod \"node-resolver-ktkpp\" (UID: \"4a1d3fd4-24e2-48b9-866d-797b76b07e9e\") " pod="openshift-dns/node-resolver-ktkpp" Apr 23 17:55:49.803917 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:49.803713 2570 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Apr 23 17:55:49.803917 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:49.803774 2570 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 23 17:55:49.803917 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:49.803782 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dcab8057-5f29-4508-9628-e8ee8882286b-cluster-monitoring-operator-tls podName:dcab8057-5f29-4508-9628-e8ee8882286b nodeName:}" failed. No retries permitted until 2026-04-23 17:55:50.803756957 +0000 UTC m=+206.743771076 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/dcab8057-5f29-4508-9628-e8ee8882286b-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-75587bd455-7hdwh" (UID: "dcab8057-5f29-4508-9628-e8ee8882286b") : secret "cluster-monitoring-operator-tls" not found Apr 23 17:55:49.803917 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:49.803817 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d39ff941-5050-456f-862b-ec6962d9c97c-samples-operator-tls podName:d39ff941-5050-456f-862b-ec6962d9c97c nodeName:}" failed. No retries permitted until 2026-04-23 17:55:50.803809284 +0000 UTC m=+206.743823408 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/d39ff941-5050-456f-862b-ec6962d9c97c-samples-operator-tls") pod "cluster-samples-operator-6dc5bdb6b4-w65xt" (UID: "d39ff941-5050-456f-862b-ec6962d9c97c") : secret "samples-operator-tls" not found Apr 23 17:55:49.806362 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:55:49.806339 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7a4b38b_7cea_453f_b021_f118b3260fb4.slice/crio-845ed8f45c0352901cc76acbe96e49c81e40635baf2e465a2f99d25072b92b56 WatchSource:0}: Error finding container 845ed8f45c0352901cc76acbe96e49c81e40635baf2e465a2f99d25072b92b56: Status 404 returned error can't find the container with id 845ed8f45c0352901cc76acbe96e49c81e40635baf2e465a2f99d25072b92b56 Apr 23 17:55:49.904643 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.904608 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-registry-tls\") pod \"image-registry-69557c4bff-g4kkh\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") " 
pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" Apr 23 17:55:49.904777 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.904658 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4a1d3fd4-24e2-48b9-866d-797b76b07e9e-hosts-file\") pod \"node-resolver-ktkpp\" (UID: \"4a1d3fd4-24e2-48b9-866d-797b76b07e9e\") " pod="openshift-dns/node-resolver-ktkpp" Apr 23 17:55:49.904777 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.904700 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/72bc8f52-06f5-4c26-b9dc-1db461cbb3cd-networking-console-plugin-cert\") pod \"networking-console-plugin-cb95c66f6-2xz9v\" (UID: \"72bc8f52-06f5-4c26-b9dc-1db461cbb3cd\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-2xz9v" Apr 23 17:55:49.904777 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.904730 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4a1d3fd4-24e2-48b9-866d-797b76b07e9e-tmp-dir\") pod \"node-resolver-ktkpp\" (UID: \"4a1d3fd4-24e2-48b9-866d-797b76b07e9e\") " pod="openshift-dns/node-resolver-ktkpp" Apr 23 17:55:49.904777 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:49.904759 2570 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 23 17:55:49.904974 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:49.904779 2570 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-69557c4bff-g4kkh: secret "image-registry-tls" not found Apr 23 17:55:49.904974 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.904805 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jvsr5\" (UniqueName: 
\"kubernetes.io/projected/4a1d3fd4-24e2-48b9-866d-797b76b07e9e-kube-api-access-jvsr5\") pod \"node-resolver-ktkpp\" (UID: \"4a1d3fd4-24e2-48b9-866d-797b76b07e9e\") " pod="openshift-dns/node-resolver-ktkpp" Apr 23 17:55:49.904974 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.904804 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4a1d3fd4-24e2-48b9-866d-797b76b07e9e-hosts-file\") pod \"node-resolver-ktkpp\" (UID: \"4a1d3fd4-24e2-48b9-866d-797b76b07e9e\") " pod="openshift-dns/node-resolver-ktkpp" Apr 23 17:55:49.904974 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:49.904830 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-registry-tls podName:37d5c9de-eb28-492f-8280-8ed7c5e432be nodeName:}" failed. No retries permitted until 2026-04-23 17:55:50.904810516 +0000 UTC m=+206.844824647 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-registry-tls") pod "image-registry-69557c4bff-g4kkh" (UID: "37d5c9de-eb28-492f-8280-8ed7c5e432be") : secret "image-registry-tls" not found Apr 23 17:55:49.904974 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:49.904839 2570 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 23 17:55:49.904974 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:49.904889 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72bc8f52-06f5-4c26-b9dc-1db461cbb3cd-networking-console-plugin-cert podName:72bc8f52-06f5-4c26-b9dc-1db461cbb3cd nodeName:}" failed. No retries permitted until 2026-04-23 17:55:50.904873453 +0000 UTC m=+206.844887574 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/72bc8f52-06f5-4c26-b9dc-1db461cbb3cd-networking-console-plugin-cert") pod "networking-console-plugin-cb95c66f6-2xz9v" (UID: "72bc8f52-06f5-4c26-b9dc-1db461cbb3cd") : secret "networking-console-plugin-cert" not found Apr 23 17:55:49.904974 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.904971 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0da25b59-22b1-4319-b795-3e7d2bc7db04-cert\") pod \"ingress-canary-c94n2\" (UID: \"0da25b59-22b1-4319-b795-3e7d2bc7db04\") " pod="openshift-ingress-canary/ingress-canary-c94n2" Apr 23 17:55:49.905283 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.905086 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4a1d3fd4-24e2-48b9-866d-797b76b07e9e-tmp-dir\") pod \"node-resolver-ktkpp\" (UID: \"4a1d3fd4-24e2-48b9-866d-797b76b07e9e\") " pod="openshift-dns/node-resolver-ktkpp" Apr 23 17:55:49.905283 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:49.905098 2570 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 23 17:55:49.905283 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:49.905171 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0da25b59-22b1-4319-b795-3e7d2bc7db04-cert podName:0da25b59-22b1-4319-b795-3e7d2bc7db04 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:50.905157343 +0000 UTC m=+206.845171474 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0da25b59-22b1-4319-b795-3e7d2bc7db04-cert") pod "ingress-canary-c94n2" (UID: "0da25b59-22b1-4319-b795-3e7d2bc7db04") : secret "canary-serving-cert" not found Apr 23 17:55:49.916685 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:49.916656 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvsr5\" (UniqueName: \"kubernetes.io/projected/4a1d3fd4-24e2-48b9-866d-797b76b07e9e-kube-api-access-jvsr5\") pod \"node-resolver-ktkpp\" (UID: \"4a1d3fd4-24e2-48b9-866d-797b76b07e9e\") " pod="openshift-dns/node-resolver-ktkpp" Apr 23 17:55:50.006456 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:50.006426 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/912b083f-5a59-4119-91fd-47ba37c5ed53-metrics-tls\") pod \"dns-default-mfdbd\" (UID: \"912b083f-5a59-4119-91fd-47ba37c5ed53\") " pod="openshift-dns/dns-default-mfdbd" Apr 23 17:55:50.006617 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:50.006600 2570 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 23 17:55:50.006699 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:50.006664 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/912b083f-5a59-4119-91fd-47ba37c5ed53-metrics-tls podName:912b083f-5a59-4119-91fd-47ba37c5ed53 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:51.006645849 +0000 UTC m=+206.946659981 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/912b083f-5a59-4119-91fd-47ba37c5ed53-metrics-tls") pod "dns-default-mfdbd" (UID: "912b083f-5a59-4119-91fd-47ba37c5ed53") : secret "dns-default-metrics-tls" not found Apr 23 17:55:50.053055 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:50.053032 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-ktkpp" Apr 23 17:55:50.060733 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:50.060695 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-vtfgg" event={"ID":"25d8ee46-11e2-460e-9417-93a2def9b519","Type":"ContainerStarted","Data":"79b9812fdb8d9b73ad08a9fb7524cbf71d19201085fa2a83b6d073b3d7f80034"} Apr 23 17:55:50.062285 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:50.062209 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-2t8xp" event={"ID":"128cf630-366d-4caf-a72e-dec2e74c92ba","Type":"ContainerStarted","Data":"84c6e6d8e59e4cbe20e58cfab30253297e527a712f762aef0d1beda5dfd2832d"} Apr 23 17:55:50.065436 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:50.065207 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-l82lj" event={"ID":"daa6f738-c430-4a35-9826-ea29f862b6fe","Type":"ContainerStarted","Data":"9075b191fc259927d28743bd959947151acec2ce67c1e2aa5508429f1dc1b5c0"} Apr 23 17:55:50.065436 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:50.065240 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-l82lj" event={"ID":"daa6f738-c430-4a35-9826-ea29f862b6fe","Type":"ContainerStarted","Data":"f7b19f369a5e0d1717413f8f8ac957cbe35b09b119e676c1bd42df60991b8c4f"} Apr 23 17:55:50.065764 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:50.065613 
2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-l82lj" Apr 23 17:55:50.068170 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:50.068088 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-9d4b6777b-dzhrv" event={"ID":"3f006f82-b551-4a19-b684-091814d45d54","Type":"ContainerStarted","Data":"dd2758002ca85d634739d4462a96a9d164dbb147db4633835479483090cd571d"} Apr 23 17:55:50.070352 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:50.070322 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-9t7hk" event={"ID":"d78b9e26-7f79-4f68-ad87-4e237752c338","Type":"ContainerStarted","Data":"ccf9f2bd11b4226fcc8baecf6731a1f7704587395b2dc8a39e52d90054959216"} Apr 23 17:55:50.073467 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:50.072839 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-8894fc9bd-b5c9q" event={"ID":"b7a4b38b-7cea-453f-b021-f118b3260fb4","Type":"ContainerStarted","Data":"845ed8f45c0352901cc76acbe96e49c81e40635baf2e465a2f99d25072b92b56"} Apr 23 17:55:50.073884 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:55:50.073849 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a1d3fd4_24e2_48b9_866d_797b76b07e9e.slice/crio-614c7adb78b34df5d49d03b1781cb36bd39fd5da995913967263cd8d95e2bb19 WatchSource:0}: Error finding container 614c7adb78b34df5d49d03b1781cb36bd39fd5da995913967263cd8d95e2bb19: Status 404 returned error can't find the container with id 614c7adb78b34df5d49d03b1781cb36bd39fd5da995913967263cd8d95e2bb19 Apr 23 17:55:50.074038 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:50.074003 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-585dfdc468-hd2z2" 
event={"ID":"1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba","Type":"ContainerStarted","Data":"a12936f8be5d22bcd6144e9953db81cf5987082f0e3d13ae3a5b92db3bb7b6d6"} Apr 23 17:55:50.078402 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:50.078241 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-l82lj" Apr 23 17:55:50.095533 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:50.095480 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-l82lj" podStartSLOduration=1.095467229 podStartE2EDuration="1.095467229s" podCreationTimestamp="2026-04-23 17:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:55:50.095122588 +0000 UTC m=+206.035136730" watchObservedRunningTime="2026-04-23 17:55:50.095467229 +0000 UTC m=+206.035481361" Apr 23 17:55:50.815452 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:50.814730 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9384d4c-b454-4102-b989-7bd167cee9f4-service-ca-bundle\") pod \"router-default-64b76f7768-4tktg\" (UID: \"f9384d4c-b454-4102-b989-7bd167cee9f4\") " pod="openshift-ingress/router-default-64b76f7768-4tktg" Apr 23 17:55:50.815452 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:50.814776 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9384d4c-b454-4102-b989-7bd167cee9f4-metrics-certs\") pod \"router-default-64b76f7768-4tktg\" (UID: \"f9384d4c-b454-4102-b989-7bd167cee9f4\") " pod="openshift-ingress/router-default-64b76f7768-4tktg" Apr 23 17:55:50.815452 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:50.814850 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/dcab8057-5f29-4508-9628-e8ee8882286b-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-75587bd455-7hdwh\" (UID: \"dcab8057-5f29-4508-9628-e8ee8882286b\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-7hdwh" Apr 23 17:55:50.815452 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:50.814903 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d39ff941-5050-456f-862b-ec6962d9c97c-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-w65xt\" (UID: \"d39ff941-5050-456f-862b-ec6962d9c97c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-w65xt" Apr 23 17:55:50.815452 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:50.815064 2570 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 23 17:55:50.815452 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:50.815125 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d39ff941-5050-456f-862b-ec6962d9c97c-samples-operator-tls podName:d39ff941-5050-456f-862b-ec6962d9c97c nodeName:}" failed. No retries permitted until 2026-04-23 17:55:52.815106328 +0000 UTC m=+208.755120453 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/d39ff941-5050-456f-862b-ec6962d9c97c-samples-operator-tls") pod "cluster-samples-operator-6dc5bdb6b4-w65xt" (UID: "d39ff941-5050-456f-862b-ec6962d9c97c") : secret "samples-operator-tls" not found Apr 23 17:55:50.816251 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:50.815558 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9384d4c-b454-4102-b989-7bd167cee9f4-service-ca-bundle podName:f9384d4c-b454-4102-b989-7bd167cee9f4 nodeName:}" failed. 
No retries permitted until 2026-04-23 17:55:52.815539616 +0000 UTC m=+208.755553748 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/f9384d4c-b454-4102-b989-7bd167cee9f4-service-ca-bundle") pod "router-default-64b76f7768-4tktg" (UID: "f9384d4c-b454-4102-b989-7bd167cee9f4") : configmap references non-existent config key: service-ca.crt Apr 23 17:55:50.816251 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:50.815630 2570 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found Apr 23 17:55:50.816251 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:50.815664 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9384d4c-b454-4102-b989-7bd167cee9f4-metrics-certs podName:f9384d4c-b454-4102-b989-7bd167cee9f4 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:52.815653084 +0000 UTC m=+208.755667208 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f9384d4c-b454-4102-b989-7bd167cee9f4-metrics-certs") pod "router-default-64b76f7768-4tktg" (UID: "f9384d4c-b454-4102-b989-7bd167cee9f4") : secret "router-metrics-certs-default" not found Apr 23 17:55:50.816251 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:50.815714 2570 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Apr 23 17:55:50.816251 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:50.815748 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dcab8057-5f29-4508-9628-e8ee8882286b-cluster-monitoring-operator-tls podName:dcab8057-5f29-4508-9628-e8ee8882286b nodeName:}" failed. No retries permitted until 2026-04-23 17:55:52.815739098 +0000 UTC m=+208.755753216 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/dcab8057-5f29-4508-9628-e8ee8882286b-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-75587bd455-7hdwh" (UID: "dcab8057-5f29-4508-9628-e8ee8882286b") : secret "cluster-monitoring-operator-tls" not found Apr 23 17:55:50.916367 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:50.916204 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-registry-tls\") pod \"image-registry-69557c4bff-g4kkh\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") " pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" Apr 23 17:55:50.916367 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:50.916279 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/72bc8f52-06f5-4c26-b9dc-1db461cbb3cd-networking-console-plugin-cert\") pod \"networking-console-plugin-cb95c66f6-2xz9v\" (UID: \"72bc8f52-06f5-4c26-b9dc-1db461cbb3cd\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-2xz9v" Apr 23 17:55:50.916606 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:50.916408 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0da25b59-22b1-4319-b795-3e7d2bc7db04-cert\") pod \"ingress-canary-c94n2\" (UID: \"0da25b59-22b1-4319-b795-3e7d2bc7db04\") " pod="openshift-ingress-canary/ingress-canary-c94n2" Apr 23 17:55:50.916606 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:50.916529 2570 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 23 17:55:50.916606 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:50.916581 2570 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/0da25b59-22b1-4319-b795-3e7d2bc7db04-cert podName:0da25b59-22b1-4319-b795-3e7d2bc7db04 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:52.916563935 +0000 UTC m=+208.856578053 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0da25b59-22b1-4319-b795-3e7d2bc7db04-cert") pod "ingress-canary-c94n2" (UID: "0da25b59-22b1-4319-b795-3e7d2bc7db04") : secret "canary-serving-cert" not found Apr 23 17:55:50.917153 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:50.916911 2570 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 23 17:55:50.917153 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:50.916928 2570 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-69557c4bff-g4kkh: secret "image-registry-tls" not found Apr 23 17:55:50.917153 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:50.916968 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-registry-tls podName:37d5c9de-eb28-492f-8280-8ed7c5e432be nodeName:}" failed. No retries permitted until 2026-04-23 17:55:52.91695479 +0000 UTC m=+208.856968914 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-registry-tls") pod "image-registry-69557c4bff-g4kkh" (UID: "37d5c9de-eb28-492f-8280-8ed7c5e432be") : secret "image-registry-tls" not found Apr 23 17:55:50.917153 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:50.917021 2570 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 23 17:55:50.917153 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:50.917055 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72bc8f52-06f5-4c26-b9dc-1db461cbb3cd-networking-console-plugin-cert podName:72bc8f52-06f5-4c26-b9dc-1db461cbb3cd nodeName:}" failed. No retries permitted until 2026-04-23 17:55:52.917044322 +0000 UTC m=+208.857058447 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/72bc8f52-06f5-4c26-b9dc-1db461cbb3cd-networking-console-plugin-cert") pod "networking-console-plugin-cb95c66f6-2xz9v" (UID: "72bc8f52-06f5-4c26-b9dc-1db461cbb3cd") : secret "networking-console-plugin-cert" not found Apr 23 17:55:51.018084 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:51.017464 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/912b083f-5a59-4119-91fd-47ba37c5ed53-metrics-tls\") pod \"dns-default-mfdbd\" (UID: \"912b083f-5a59-4119-91fd-47ba37c5ed53\") " pod="openshift-dns/dns-default-mfdbd" Apr 23 17:55:51.018084 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:51.017753 2570 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 23 17:55:51.018084 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:51.017804 2570 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/912b083f-5a59-4119-91fd-47ba37c5ed53-metrics-tls podName:912b083f-5a59-4119-91fd-47ba37c5ed53 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:53.017788429 +0000 UTC m=+208.957802553 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/912b083f-5a59-4119-91fd-47ba37c5ed53-metrics-tls") pod "dns-default-mfdbd" (UID: "912b083f-5a59-4119-91fd-47ba37c5ed53") : secret "dns-default-metrics-tls" not found Apr 23 17:55:51.079899 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:51.079767 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-ktkpp" event={"ID":"4a1d3fd4-24e2-48b9-866d-797b76b07e9e","Type":"ContainerStarted","Data":"e436997e34f6b22267f9dfe4c3009a1d2ef093849145a29bb61eefb432b61cde"} Apr 23 17:55:51.079899 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:51.079803 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-ktkpp" event={"ID":"4a1d3fd4-24e2-48b9-866d-797b76b07e9e","Type":"ContainerStarted","Data":"614c7adb78b34df5d49d03b1781cb36bd39fd5da995913967263cd8d95e2bb19"} Apr 23 17:55:51.094988 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:51.093965 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-ktkpp" podStartSLOduration=2.09394829 podStartE2EDuration="2.09394829s" podCreationTimestamp="2026-04-23 17:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:55:51.093902591 +0000 UTC m=+207.033916731" watchObservedRunningTime="2026-04-23 17:55:51.09394829 +0000 UTC m=+207.033962430" Apr 23 17:55:52.016263 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:52.016230 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-l82lj"] Apr 23 17:55:52.837095 ip-10-0-141-209 
kubenswrapper[2570]: I0423 17:55:52.837057 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9384d4c-b454-4102-b989-7bd167cee9f4-service-ca-bundle\") pod \"router-default-64b76f7768-4tktg\" (UID: \"f9384d4c-b454-4102-b989-7bd167cee9f4\") " pod="openshift-ingress/router-default-64b76f7768-4tktg" Apr 23 17:55:52.837262 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:52.837110 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9384d4c-b454-4102-b989-7bd167cee9f4-metrics-certs\") pod \"router-default-64b76f7768-4tktg\" (UID: \"f9384d4c-b454-4102-b989-7bd167cee9f4\") " pod="openshift-ingress/router-default-64b76f7768-4tktg" Apr 23 17:55:52.837262 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:52.837188 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dcab8057-5f29-4508-9628-e8ee8882286b-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-75587bd455-7hdwh\" (UID: \"dcab8057-5f29-4508-9628-e8ee8882286b\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-7hdwh" Apr 23 17:55:52.837262 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:52.837215 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9384d4c-b454-4102-b989-7bd167cee9f4-service-ca-bundle podName:f9384d4c-b454-4102-b989-7bd167cee9f4 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:56.837193096 +0000 UTC m=+212.777207237 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/f9384d4c-b454-4102-b989-7bd167cee9f4-service-ca-bundle") pod "router-default-64b76f7768-4tktg" (UID: "f9384d4c-b454-4102-b989-7bd167cee9f4") : configmap references non-existent config key: service-ca.crt
Apr 23 17:55:52.837400 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:52.837276 2570 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Apr 23 17:55:52.837400 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:52.837278 2570 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found
Apr 23 17:55:52.837400 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:52.837284 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d39ff941-5050-456f-862b-ec6962d9c97c-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-w65xt\" (UID: \"d39ff941-5050-456f-862b-ec6962d9c97c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-w65xt"
Apr 23 17:55:52.837400 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:52.837330 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dcab8057-5f29-4508-9628-e8ee8882286b-cluster-monitoring-operator-tls podName:dcab8057-5f29-4508-9628-e8ee8882286b nodeName:}" failed. No retries permitted until 2026-04-23 17:55:56.837314415 +0000 UTC m=+212.777328534 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/dcab8057-5f29-4508-9628-e8ee8882286b-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-75587bd455-7hdwh" (UID: "dcab8057-5f29-4508-9628-e8ee8882286b") : secret "cluster-monitoring-operator-tls" not found
Apr 23 17:55:52.837400 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:52.837344 2570 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found
Apr 23 17:55:52.837400 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:52.837363 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9384d4c-b454-4102-b989-7bd167cee9f4-metrics-certs podName:f9384d4c-b454-4102-b989-7bd167cee9f4 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:56.837351918 +0000 UTC m=+212.777366038 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f9384d4c-b454-4102-b989-7bd167cee9f4-metrics-certs") pod "router-default-64b76f7768-4tktg" (UID: "f9384d4c-b454-4102-b989-7bd167cee9f4") : secret "router-metrics-certs-default" not found
Apr 23 17:55:52.837400 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:52.837382 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d39ff941-5050-456f-862b-ec6962d9c97c-samples-operator-tls podName:d39ff941-5050-456f-862b-ec6962d9c97c nodeName:}" failed. No retries permitted until 2026-04-23 17:55:56.837370414 +0000 UTC m=+212.777384535 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/d39ff941-5050-456f-862b-ec6962d9c97c-samples-operator-tls") pod "cluster-samples-operator-6dc5bdb6b4-w65xt" (UID: "d39ff941-5050-456f-862b-ec6962d9c97c") : secret "samples-operator-tls" not found
Apr 23 17:55:52.938760 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:52.938728 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0da25b59-22b1-4319-b795-3e7d2bc7db04-cert\") pod \"ingress-canary-c94n2\" (UID: \"0da25b59-22b1-4319-b795-3e7d2bc7db04\") " pod="openshift-ingress-canary/ingress-canary-c94n2"
Apr 23 17:55:52.938923 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:52.938772 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-registry-tls\") pod \"image-registry-69557c4bff-g4kkh\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") " pod="openshift-image-registry/image-registry-69557c4bff-g4kkh"
Apr 23 17:55:52.938923 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:52.938808 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/72bc8f52-06f5-4c26-b9dc-1db461cbb3cd-networking-console-plugin-cert\") pod \"networking-console-plugin-cb95c66f6-2xz9v\" (UID: \"72bc8f52-06f5-4c26-b9dc-1db461cbb3cd\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-2xz9v"
Apr 23 17:55:52.938923 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:52.938880 2570 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Apr 23 17:55:52.938923 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:52.938914 2570 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found
Apr 23 17:55:52.939119 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:52.938928 2570 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Apr 23 17:55:52.939119 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:52.938947 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0da25b59-22b1-4319-b795-3e7d2bc7db04-cert podName:0da25b59-22b1-4319-b795-3e7d2bc7db04 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:56.938927036 +0000 UTC m=+212.878941157 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0da25b59-22b1-4319-b795-3e7d2bc7db04-cert") pod "ingress-canary-c94n2" (UID: "0da25b59-22b1-4319-b795-3e7d2bc7db04") : secret "canary-serving-cert" not found
Apr 23 17:55:52.939119 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:52.938983 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72bc8f52-06f5-4c26-b9dc-1db461cbb3cd-networking-console-plugin-cert podName:72bc8f52-06f5-4c26-b9dc-1db461cbb3cd nodeName:}" failed. No retries permitted until 2026-04-23 17:55:56.938959781 +0000 UTC m=+212.878973900 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/72bc8f52-06f5-4c26-b9dc-1db461cbb3cd-networking-console-plugin-cert") pod "networking-console-plugin-cb95c66f6-2xz9v" (UID: "72bc8f52-06f5-4c26-b9dc-1db461cbb3cd") : secret "networking-console-plugin-cert" not found
Apr 23 17:55:52.939119 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:52.938930 2570 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-69557c4bff-g4kkh: secret "image-registry-tls" not found
Apr 23 17:55:52.939119 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:52.939019 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-registry-tls podName:37d5c9de-eb28-492f-8280-8ed7c5e432be nodeName:}" failed. No retries permitted until 2026-04-23 17:55:56.939009488 +0000 UTC m=+212.879023606 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-registry-tls") pod "image-registry-69557c4bff-g4kkh" (UID: "37d5c9de-eb28-492f-8280-8ed7c5e432be") : secret "image-registry-tls" not found
Apr 23 17:55:53.040093 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:53.040071 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/912b083f-5a59-4119-91fd-47ba37c5ed53-metrics-tls\") pod \"dns-default-mfdbd\" (UID: \"912b083f-5a59-4119-91fd-47ba37c5ed53\") " pod="openshift-dns/dns-default-mfdbd"
Apr 23 17:55:53.040426 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:53.040209 2570 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found
Apr 23 17:55:53.040426 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:53.040271 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/912b083f-5a59-4119-91fd-47ba37c5ed53-metrics-tls podName:912b083f-5a59-4119-91fd-47ba37c5ed53 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:57.040252323 +0000 UTC m=+212.980266450 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/912b083f-5a59-4119-91fd-47ba37c5ed53-metrics-tls") pod "dns-default-mfdbd" (UID: "912b083f-5a59-4119-91fd-47ba37c5ed53") : secret "dns-default-metrics-tls" not found
Apr 23 17:55:53.084156 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:53.084126 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-l82lj" podUID="daa6f738-c430-4a35-9826-ea29f862b6fe" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://9075b191fc259927d28743bd959947151acec2ce67c1e2aa5508429f1dc1b5c0" gracePeriod=30
Apr 23 17:55:56.091747 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:56.091713 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-jhssk" event={"ID":"2618335a-7a85-4193-a71d-15eaab4cb7f1","Type":"ContainerStarted","Data":"c346bf851e5b1aad5a9818025c6e7e779a971c657bddfde55c526d7e1a3579da"}
Apr 23 17:55:56.092123 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:56.091911 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-jhssk"
Apr 23 17:55:56.093678 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:56.093657 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-dzhrv_3f006f82-b551-4a19-b684-091814d45d54/console-operator/0.log"
Apr 23 17:55:56.093792 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:56.093695 2570 generic.go:358] "Generic (PLEG): container finished" podID="3f006f82-b551-4a19-b684-091814d45d54" containerID="0924d57fe7cd92933bfde2bf4451af5ac4097c126e8bd03dbd32f5865626bba2" exitCode=255
Apr 23 17:55:56.093792 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:56.093762 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-9d4b6777b-dzhrv" event={"ID":"3f006f82-b551-4a19-b684-091814d45d54","Type":"ContainerDied","Data":"0924d57fe7cd92933bfde2bf4451af5ac4097c126e8bd03dbd32f5865626bba2"}
Apr 23 17:55:56.093954 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:56.093938 2570 scope.go:117] "RemoveContainer" containerID="0924d57fe7cd92933bfde2bf4451af5ac4097c126e8bd03dbd32f5865626bba2"
Apr 23 17:55:56.095483 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:56.095104 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-9t7hk" event={"ID":"d78b9e26-7f79-4f68-ad87-4e237752c338","Type":"ContainerStarted","Data":"fb860307adc429b12dd3cdfc0eedace3ad8ae76d9f79ed260c7ee7bf35706bed"}
Apr 23 17:55:56.096526 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:56.096495 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-8894fc9bd-b5c9q" event={"ID":"b7a4b38b-7cea-453f-b021-f118b3260fb4","Type":"ContainerStarted","Data":"6b30e7d9275c435df98e2658b49d23237ec6857621d5dc1b27fe22480e269e6f"}
Apr 23 17:55:56.098099 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:56.098061 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-585dfdc468-hd2z2" event={"ID":"1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba","Type":"ContainerStarted","Data":"cf19b10d7cd3b5a2e34ae0e79360c258afea6e89cf7ebb2d8f348a794397dfb9"}
Apr 23 17:55:56.099865 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:56.099677 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-vtfgg" event={"ID":"25d8ee46-11e2-460e-9417-93a2def9b519","Type":"ContainerStarted","Data":"17c11b91eb75e6ce71488922b5f6450795a7d37b4c552ebf73e32b932de85680"}
Apr 23 17:55:56.101477 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:56.101450 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-2t8xp" event={"ID":"128cf630-366d-4caf-a72e-dec2e74c92ba","Type":"ContainerStarted","Data":"7d0c93b0dbf3f6b4d0c8873aa1fa96626c0f385de7d96ce46e4ce9f99a5fde1d"}
Apr 23 17:55:56.120672 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:56.120279 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-target-jhssk" podStartSLOduration=33.113734993 podStartE2EDuration="40.12026346s" podCreationTimestamp="2026-04-23 17:55:16 +0000 UTC" firstStartedPulling="2026-04-23 17:55:48.75855909 +0000 UTC m=+204.698573220" lastFinishedPulling="2026-04-23 17:55:55.765087559 +0000 UTC m=+211.705101687" observedRunningTime="2026-04-23 17:55:56.119598533 +0000 UTC m=+212.059612674" watchObservedRunningTime="2026-04-23 17:55:56.12026346 +0000 UTC m=+212.060277601"
Apr 23 17:55:56.199854 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:56.199800 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-585dfdc468-hd2z2" podStartSLOduration=219.207249793 podStartE2EDuration="3m45.19978283s" podCreationTimestamp="2026-04-23 17:52:11 +0000 UTC" firstStartedPulling="2026-04-23 17:55:49.772967003 +0000 UTC m=+205.712981126" lastFinishedPulling="2026-04-23 17:55:55.765500042 +0000 UTC m=+211.705514163" observedRunningTime="2026-04-23 17:55:56.197870395 +0000 UTC m=+212.137884537" watchObservedRunningTime="2026-04-23 17:55:56.19978283 +0000 UTC m=+212.139796971"
Apr 23 17:55:56.217063 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:56.217014 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-9t7hk" podStartSLOduration=219.037703876 podStartE2EDuration="3m45.21699774s" podCreationTimestamp="2026-04-23 17:52:11 +0000 UTC" firstStartedPulling="2026-04-23 17:55:49.585506912 +0000 UTC m=+205.525521029" lastFinishedPulling="2026-04-23 17:55:55.764800773 +0000 UTC m=+211.704814893" observedRunningTime="2026-04-23 17:55:56.216467251 +0000 UTC m=+212.156481393" watchObservedRunningTime="2026-04-23 17:55:56.21699774 +0000 UTC m=+212.157011875"
Apr 23 17:55:56.240087 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:56.240043 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-2t8xp" podStartSLOduration=219.202145512 podStartE2EDuration="3m45.240031099s" podCreationTimestamp="2026-04-23 17:52:11 +0000 UTC" firstStartedPulling="2026-04-23 17:55:49.726755352 +0000 UTC m=+205.666769482" lastFinishedPulling="2026-04-23 17:55:55.764640935 +0000 UTC m=+211.704655069" observedRunningTime="2026-04-23 17:55:56.239109319 +0000 UTC m=+212.179123482" watchObservedRunningTime="2026-04-23 17:55:56.240031099 +0000 UTC m=+212.180045293"
Apr 23 17:55:56.265022 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:56.264982 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-vtfgg" podStartSLOduration=219.27615152 podStartE2EDuration="3m45.264970507s" podCreationTimestamp="2026-04-23 17:52:11 +0000 UTC" firstStartedPulling="2026-04-23 17:55:49.776092426 +0000 UTC m=+205.716106544" lastFinishedPulling="2026-04-23 17:55:55.764911407 +0000 UTC m=+211.704925531" observedRunningTime="2026-04-23 17:55:56.264145107 +0000 UTC m=+212.204159248" watchObservedRunningTime="2026-04-23 17:55:56.264970507 +0000 UTC m=+212.204984646"
Apr 23 17:55:56.295676 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:56.295626 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-source-8894fc9bd-b5c9q" podStartSLOduration=35.329780976 podStartE2EDuration="41.295615361s" podCreationTimestamp="2026-04-23 17:55:15 +0000 UTC" firstStartedPulling="2026-04-23 17:55:49.80855889 +0000 UTC m=+205.748573009" lastFinishedPulling="2026-04-23 17:55:55.774393262 +0000 UTC m=+211.714407394" observedRunningTime="2026-04-23 17:55:56.295167893 +0000 UTC m=+212.235182034" watchObservedRunningTime="2026-04-23 17:55:56.295615361 +0000 UTC m=+212.235629503"
Apr 23 17:55:56.875663 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:56.875636 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9384d4c-b454-4102-b989-7bd167cee9f4-service-ca-bundle\") pod \"router-default-64b76f7768-4tktg\" (UID: \"f9384d4c-b454-4102-b989-7bd167cee9f4\") " pod="openshift-ingress/router-default-64b76f7768-4tktg"
Apr 23 17:55:56.875833 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:56.875672 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9384d4c-b454-4102-b989-7bd167cee9f4-metrics-certs\") pod \"router-default-64b76f7768-4tktg\" (UID: \"f9384d4c-b454-4102-b989-7bd167cee9f4\") " pod="openshift-ingress/router-default-64b76f7768-4tktg"
Apr 23 17:55:56.875833 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:56.875726 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dcab8057-5f29-4508-9628-e8ee8882286b-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-75587bd455-7hdwh\" (UID: \"dcab8057-5f29-4508-9628-e8ee8882286b\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-7hdwh"
Apr 23 17:55:56.875833 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:56.875761 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d39ff941-5050-456f-862b-ec6962d9c97c-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-w65xt\" (UID: \"d39ff941-5050-456f-862b-ec6962d9c97c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-w65xt"
Apr 23 17:55:56.875833 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:56.875822 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9384d4c-b454-4102-b989-7bd167cee9f4-service-ca-bundle podName:f9384d4c-b454-4102-b989-7bd167cee9f4 nodeName:}" failed. No retries permitted until 2026-04-23 17:56:04.875799021 +0000 UTC m=+220.815813146 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/f9384d4c-b454-4102-b989-7bd167cee9f4-service-ca-bundle") pod "router-default-64b76f7768-4tktg" (UID: "f9384d4c-b454-4102-b989-7bd167cee9f4") : configmap references non-existent config key: service-ca.crt
Apr 23 17:55:56.876052 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:56.875872 2570 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found
Apr 23 17:55:56.876052 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:56.875918 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d39ff941-5050-456f-862b-ec6962d9c97c-samples-operator-tls podName:d39ff941-5050-456f-862b-ec6962d9c97c nodeName:}" failed. No retries permitted until 2026-04-23 17:56:04.875904227 +0000 UTC m=+220.815918345 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/d39ff941-5050-456f-862b-ec6962d9c97c-samples-operator-tls") pod "cluster-samples-operator-6dc5bdb6b4-w65xt" (UID: "d39ff941-5050-456f-862b-ec6962d9c97c") : secret "samples-operator-tls" not found
Apr 23 17:55:56.876052 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:56.875872 2570 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Apr 23 17:55:56.876052 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:56.875954 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dcab8057-5f29-4508-9628-e8ee8882286b-cluster-monitoring-operator-tls podName:dcab8057-5f29-4508-9628-e8ee8882286b nodeName:}" failed. No retries permitted until 2026-04-23 17:56:04.875946133 +0000 UTC m=+220.815960256 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/dcab8057-5f29-4508-9628-e8ee8882286b-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-75587bd455-7hdwh" (UID: "dcab8057-5f29-4508-9628-e8ee8882286b") : secret "cluster-monitoring-operator-tls" not found
Apr 23 17:55:56.876052 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:56.876032 2570 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found
Apr 23 17:55:56.876307 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:56.876064 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9384d4c-b454-4102-b989-7bd167cee9f4-metrics-certs podName:f9384d4c-b454-4102-b989-7bd167cee9f4 nodeName:}" failed. No retries permitted until 2026-04-23 17:56:04.876054305 +0000 UTC m=+220.816068423 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f9384d4c-b454-4102-b989-7bd167cee9f4-metrics-certs") pod "router-default-64b76f7768-4tktg" (UID: "f9384d4c-b454-4102-b989-7bd167cee9f4") : secret "router-metrics-certs-default" not found
Apr 23 17:55:56.976788 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:56.976760 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0da25b59-22b1-4319-b795-3e7d2bc7db04-cert\") pod \"ingress-canary-c94n2\" (UID: \"0da25b59-22b1-4319-b795-3e7d2bc7db04\") " pod="openshift-ingress-canary/ingress-canary-c94n2"
Apr 23 17:55:56.976949 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:56.976801 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-registry-tls\") pod \"image-registry-69557c4bff-g4kkh\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") " pod="openshift-image-registry/image-registry-69557c4bff-g4kkh"
Apr 23 17:55:56.976949 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:56.976836 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/72bc8f52-06f5-4c26-b9dc-1db461cbb3cd-networking-console-plugin-cert\") pod \"networking-console-plugin-cb95c66f6-2xz9v\" (UID: \"72bc8f52-06f5-4c26-b9dc-1db461cbb3cd\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-2xz9v"
Apr 23 17:55:56.976949 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:56.976927 2570 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found
Apr 23 17:55:56.976949 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:56.976947 2570 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-69557c4bff-g4kkh: secret "image-registry-tls" not found
Apr 23 17:55:56.977142 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:56.976982 2570 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Apr 23 17:55:56.977142 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:56.976998 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-registry-tls podName:37d5c9de-eb28-492f-8280-8ed7c5e432be nodeName:}" failed. No retries permitted until 2026-04-23 17:56:04.976980939 +0000 UTC m=+220.916995062 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-registry-tls") pod "image-registry-69557c4bff-g4kkh" (UID: "37d5c9de-eb28-492f-8280-8ed7c5e432be") : secret "image-registry-tls" not found
Apr 23 17:55:56.977142 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:56.977027 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72bc8f52-06f5-4c26-b9dc-1db461cbb3cd-networking-console-plugin-cert podName:72bc8f52-06f5-4c26-b9dc-1db461cbb3cd nodeName:}" failed. No retries permitted until 2026-04-23 17:56:04.97701205 +0000 UTC m=+220.917026179 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/72bc8f52-06f5-4c26-b9dc-1db461cbb3cd-networking-console-plugin-cert") pod "networking-console-plugin-cb95c66f6-2xz9v" (UID: "72bc8f52-06f5-4c26-b9dc-1db461cbb3cd") : secret "networking-console-plugin-cert" not found
Apr 23 17:55:56.977348 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:56.977332 2570 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Apr 23 17:55:56.977395 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:56.977375 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0da25b59-22b1-4319-b795-3e7d2bc7db04-cert podName:0da25b59-22b1-4319-b795-3e7d2bc7db04 nodeName:}" failed. No retries permitted until 2026-04-23 17:56:04.977363471 +0000 UTC m=+220.917377596 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0da25b59-22b1-4319-b795-3e7d2bc7db04-cert") pod "ingress-canary-c94n2" (UID: "0da25b59-22b1-4319-b795-3e7d2bc7db04") : secret "canary-serving-cert" not found
Apr 23 17:55:57.077925 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:57.077852 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/912b083f-5a59-4119-91fd-47ba37c5ed53-metrics-tls\") pod \"dns-default-mfdbd\" (UID: \"912b083f-5a59-4119-91fd-47ba37c5ed53\") " pod="openshift-dns/dns-default-mfdbd"
Apr 23 17:55:57.078054 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:57.078005 2570 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found
Apr 23 17:55:57.078115 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:57.078067 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/912b083f-5a59-4119-91fd-47ba37c5ed53-metrics-tls podName:912b083f-5a59-4119-91fd-47ba37c5ed53 nodeName:}" failed. No retries permitted until 2026-04-23 17:56:05.07804906 +0000 UTC m=+221.018063183 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/912b083f-5a59-4119-91fd-47ba37c5ed53-metrics-tls") pod "dns-default-mfdbd" (UID: "912b083f-5a59-4119-91fd-47ba37c5ed53") : secret "dns-default-metrics-tls" not found
Apr 23 17:55:57.104961 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:57.104939 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-dzhrv_3f006f82-b551-4a19-b684-091814d45d54/console-operator/1.log"
Apr 23 17:55:57.105514 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:57.105497 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-dzhrv_3f006f82-b551-4a19-b684-091814d45d54/console-operator/0.log"
Apr 23 17:55:57.105633 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:57.105534 2570 generic.go:358] "Generic (PLEG): container finished" podID="3f006f82-b551-4a19-b684-091814d45d54" containerID="bca074dfd2592c3181d53593a2a5b3aed72744998c15f6b5438c8b0fb744a74c" exitCode=255
Apr 23 17:55:57.105693 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:57.105627 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-9d4b6777b-dzhrv" event={"ID":"3f006f82-b551-4a19-b684-091814d45d54","Type":"ContainerDied","Data":"bca074dfd2592c3181d53593a2a5b3aed72744998c15f6b5438c8b0fb744a74c"}
Apr 23 17:55:57.105693 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:57.105660 2570 scope.go:117] "RemoveContainer" containerID="0924d57fe7cd92933bfde2bf4451af5ac4097c126e8bd03dbd32f5865626bba2"
Apr 23 17:55:57.105877 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:57.105861 2570 scope.go:117] "RemoveContainer" containerID="bca074dfd2592c3181d53593a2a5b3aed72744998c15f6b5438c8b0fb744a74c"
Apr 23 17:55:57.106203 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:57.106180 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator pod=console-operator-9d4b6777b-dzhrv_openshift-console-operator(3f006f82-b551-4a19-b684-091814d45d54)\"" pod="openshift-console-operator/console-operator-9d4b6777b-dzhrv" podUID="3f006f82-b551-4a19-b684-091814d45d54"
Apr 23 17:55:58.110185 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:58.110159 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-dzhrv_3f006f82-b551-4a19-b684-091814d45d54/console-operator/1.log"
Apr 23 17:55:58.110641 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:58.110621 2570 scope.go:117] "RemoveContainer" containerID="bca074dfd2592c3181d53593a2a5b3aed72744998c15f6b5438c8b0fb744a74c"
Apr 23 17:55:58.110834 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:58.110816 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator pod=console-operator-9d4b6777b-dzhrv_openshift-console-operator(3f006f82-b551-4a19-b684-091814d45d54)\"" pod="openshift-console-operator/console-operator-9d4b6777b-dzhrv" podUID="3f006f82-b551-4a19-b684-091814d45d54"
Apr 23 17:55:59.457054 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:59.457025 2570 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-9d4b6777b-dzhrv"
Apr 23 17:55:59.457054 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:59.457055 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-9d4b6777b-dzhrv"
Apr 23 17:55:59.457457 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:59.457354 2570 scope.go:117] "RemoveContainer" containerID="bca074dfd2592c3181d53593a2a5b3aed72744998c15f6b5438c8b0fb744a74c"
Apr 23 17:55:59.457530 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:59.457514 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator pod=console-operator-9d4b6777b-dzhrv_openshift-console-operator(3f006f82-b551-4a19-b684-091814d45d54)\"" pod="openshift-console-operator/console-operator-9d4b6777b-dzhrv" podUID="3f006f82-b551-4a19-b684-091814d45d54"
Apr 23 17:55:59.550017 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:59.549986 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-runtime-extractor-6np7g"]
Apr 23 17:55:59.554637 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:59.554616 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-runtime-extractor-6np7g"
Apr 23 17:55:59.557344 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:59.557323 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-rbac-proxy\""
Apr 23 17:55:59.557615 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:59.557597 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-tls\""
Apr 23 17:55:59.557710 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:59.557637 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-sa-dockercfg-5jzrb\""
Apr 23 17:55:59.564180 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:59.564160 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-6np7g"]
Apr 23 17:55:59.595734 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:59.595712 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-ktkpp_4a1d3fd4-24e2-48b9-866d-797b76b07e9e/dns-node-resolver/0.log"
Apr 23 17:55:59.703098 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:59.703070 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/39d1dfb2-3f50-4093-890b-69aef045ebb2-crio-socket\") pod \"insights-runtime-extractor-6np7g\" (UID: \"39d1dfb2-3f50-4093-890b-69aef045ebb2\") " pod="openshift-insights/insights-runtime-extractor-6np7g"
Apr 23 17:55:59.703206 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:59.703105 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/39d1dfb2-3f50-4093-890b-69aef045ebb2-data-volume\") pod \"insights-runtime-extractor-6np7g\" (UID: \"39d1dfb2-3f50-4093-890b-69aef045ebb2\") " pod="openshift-insights/insights-runtime-extractor-6np7g"
Apr 23 17:55:59.703252 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:59.703230 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsxjb\" (UniqueName: \"kubernetes.io/projected/39d1dfb2-3f50-4093-890b-69aef045ebb2-kube-api-access-nsxjb\") pod \"insights-runtime-extractor-6np7g\" (UID: \"39d1dfb2-3f50-4093-890b-69aef045ebb2\") " pod="openshift-insights/insights-runtime-extractor-6np7g"
Apr 23 17:55:59.703313 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:59.703300 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/39d1dfb2-3f50-4093-890b-69aef045ebb2-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-6np7g\" (UID: \"39d1dfb2-3f50-4093-890b-69aef045ebb2\") " pod="openshift-insights/insights-runtime-extractor-6np7g"
Apr 23 17:55:59.703364 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:59.703321 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/39d1dfb2-3f50-4093-890b-69aef045ebb2-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-6np7g\" (UID: \"39d1dfb2-3f50-4093-890b-69aef045ebb2\") " pod="openshift-insights/insights-runtime-extractor-6np7g"
Apr 23 17:55:59.804255 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:59.804197 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/39d1dfb2-3f50-4093-890b-69aef045ebb2-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-6np7g\" (UID: \"39d1dfb2-3f50-4093-890b-69aef045ebb2\") " pod="openshift-insights/insights-runtime-extractor-6np7g"
Apr 23 17:55:59.804255 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:59.804225 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/39d1dfb2-3f50-4093-890b-69aef045ebb2-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-6np7g\" (UID: \"39d1dfb2-3f50-4093-890b-69aef045ebb2\") " pod="openshift-insights/insights-runtime-extractor-6np7g"
Apr 23 17:55:59.804255 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:59.804250 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/39d1dfb2-3f50-4093-890b-69aef045ebb2-crio-socket\") pod \"insights-runtime-extractor-6np7g\" (UID: \"39d1dfb2-3f50-4093-890b-69aef045ebb2\") " pod="openshift-insights/insights-runtime-extractor-6np7g"
Apr 23 17:55:59.804537 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:59.804266 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/39d1dfb2-3f50-4093-890b-69aef045ebb2-data-volume\") pod \"insights-runtime-extractor-6np7g\" (UID: \"39d1dfb2-3f50-4093-890b-69aef045ebb2\") " pod="openshift-insights/insights-runtime-extractor-6np7g"
Apr 23 17:55:59.804537 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:59.804335 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/39d1dfb2-3f50-4093-890b-69aef045ebb2-crio-socket\") pod \"insights-runtime-extractor-6np7g\" (UID: \"39d1dfb2-3f50-4093-890b-69aef045ebb2\") " pod="openshift-insights/insights-runtime-extractor-6np7g"
Apr 23 17:55:59.804537 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:59.804347 2570 secret.go:189] Couldn't get secret openshift-insights/insights-runtime-extractor-tls: secret "insights-runtime-extractor-tls" not found
Apr 23 17:55:59.804537 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:59.804344 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nsxjb\" (UniqueName: \"kubernetes.io/projected/39d1dfb2-3f50-4093-890b-69aef045ebb2-kube-api-access-nsxjb\") pod \"insights-runtime-extractor-6np7g\" (UID: \"39d1dfb2-3f50-4093-890b-69aef045ebb2\") " pod="openshift-insights/insights-runtime-extractor-6np7g"
Apr 23 17:55:59.804537 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:55:59.804439 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39d1dfb2-3f50-4093-890b-69aef045ebb2-insights-runtime-extractor-tls podName:39d1dfb2-3f50-4093-890b-69aef045ebb2 nodeName:}" failed. No retries permitted until 2026-04-23 17:56:00.304396488 +0000 UTC m=+216.244410625 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "insights-runtime-extractor-tls" (UniqueName: "kubernetes.io/secret/39d1dfb2-3f50-4093-890b-69aef045ebb2-insights-runtime-extractor-tls") pod "insights-runtime-extractor-6np7g" (UID: "39d1dfb2-3f50-4093-890b-69aef045ebb2") : secret "insights-runtime-extractor-tls" not found Apr 23 17:55:59.804851 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:59.804835 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/39d1dfb2-3f50-4093-890b-69aef045ebb2-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-6np7g\" (UID: \"39d1dfb2-3f50-4093-890b-69aef045ebb2\") " pod="openshift-insights/insights-runtime-extractor-6np7g" Apr 23 17:55:59.805095 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:59.805081 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/39d1dfb2-3f50-4093-890b-69aef045ebb2-data-volume\") pod \"insights-runtime-extractor-6np7g\" (UID: \"39d1dfb2-3f50-4093-890b-69aef045ebb2\") " pod="openshift-insights/insights-runtime-extractor-6np7g" Apr 23 17:55:59.816352 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:55:59.816332 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsxjb\" (UniqueName: \"kubernetes.io/projected/39d1dfb2-3f50-4093-890b-69aef045ebb2-kube-api-access-nsxjb\") pod \"insights-runtime-extractor-6np7g\" (UID: \"39d1dfb2-3f50-4093-890b-69aef045ebb2\") " pod="openshift-insights/insights-runtime-extractor-6np7g" Apr 23 17:56:00.019708 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:00.019671 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-865cb79987-lvh7c"] Apr 23 17:56:00.022987 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:00.022957 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-865cb79987-lvh7c" Apr 23 17:56:00.025332 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:00.025311 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-t9jgf\"" Apr 23 17:56:00.025607 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:00.025584 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Apr 23 17:56:00.025852 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:00.025836 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Apr 23 17:56:00.025950 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:00.025935 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Apr 23 17:56:00.026005 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:00.025942 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Apr 23 17:56:00.032449 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:00.032428 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-865cb79987-lvh7c"] Apr 23 17:56:00.067379 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:00.067320 2570 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9075b191fc259927d28743bd959947151acec2ce67c1e2aa5508429f1dc1b5c0" cmd=["/bin/bash","-c","test -f /ready/ready"] Apr 23 17:56:00.068174 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:00.068147 2570 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: 
, exit code -1" containerID="9075b191fc259927d28743bd959947151acec2ce67c1e2aa5508429f1dc1b5c0" cmd=["/bin/bash","-c","test -f /ready/ready"] Apr 23 17:56:00.068963 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:00.068942 2570 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9075b191fc259927d28743bd959947151acec2ce67c1e2aa5508429f1dc1b5c0" cmd=["/bin/bash","-c","test -f /ready/ready"] Apr 23 17:56:00.069038 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:00.068973 2570 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-l82lj" podUID="daa6f738-c430-4a35-9826-ea29f862b6fe" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Apr 23 17:56:00.107554 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:00.107533 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9psf4\" (UniqueName: \"kubernetes.io/projected/fcbc2a5c-4700-4022-bbfa-d8ca31436697-kube-api-access-9psf4\") pod \"service-ca-865cb79987-lvh7c\" (UID: \"fcbc2a5c-4700-4022-bbfa-d8ca31436697\") " pod="openshift-service-ca/service-ca-865cb79987-lvh7c" Apr 23 17:56:00.107656 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:00.107571 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fcbc2a5c-4700-4022-bbfa-d8ca31436697-signing-cabundle\") pod \"service-ca-865cb79987-lvh7c\" (UID: \"fcbc2a5c-4700-4022-bbfa-d8ca31436697\") " pod="openshift-service-ca/service-ca-865cb79987-lvh7c" Apr 23 17:56:00.107718 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:00.107678 2570 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fcbc2a5c-4700-4022-bbfa-d8ca31436697-signing-key\") pod \"service-ca-865cb79987-lvh7c\" (UID: \"fcbc2a5c-4700-4022-bbfa-d8ca31436697\") " pod="openshift-service-ca/service-ca-865cb79987-lvh7c" Apr 23 17:56:00.208162 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:00.208140 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9psf4\" (UniqueName: \"kubernetes.io/projected/fcbc2a5c-4700-4022-bbfa-d8ca31436697-kube-api-access-9psf4\") pod \"service-ca-865cb79987-lvh7c\" (UID: \"fcbc2a5c-4700-4022-bbfa-d8ca31436697\") " pod="openshift-service-ca/service-ca-865cb79987-lvh7c" Apr 23 17:56:00.208271 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:00.208176 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fcbc2a5c-4700-4022-bbfa-d8ca31436697-signing-cabundle\") pod \"service-ca-865cb79987-lvh7c\" (UID: \"fcbc2a5c-4700-4022-bbfa-d8ca31436697\") " pod="openshift-service-ca/service-ca-865cb79987-lvh7c" Apr 23 17:56:00.208356 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:00.208332 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fcbc2a5c-4700-4022-bbfa-d8ca31436697-signing-key\") pod \"service-ca-865cb79987-lvh7c\" (UID: \"fcbc2a5c-4700-4022-bbfa-d8ca31436697\") " pod="openshift-service-ca/service-ca-865cb79987-lvh7c" Apr 23 17:56:00.208875 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:00.208850 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fcbc2a5c-4700-4022-bbfa-d8ca31436697-signing-cabundle\") pod \"service-ca-865cb79987-lvh7c\" (UID: \"fcbc2a5c-4700-4022-bbfa-d8ca31436697\") " pod="openshift-service-ca/service-ca-865cb79987-lvh7c" Apr 23 
17:56:00.210595 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:00.210567 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fcbc2a5c-4700-4022-bbfa-d8ca31436697-signing-key\") pod \"service-ca-865cb79987-lvh7c\" (UID: \"fcbc2a5c-4700-4022-bbfa-d8ca31436697\") " pod="openshift-service-ca/service-ca-865cb79987-lvh7c" Apr 23 17:56:00.217771 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:00.217752 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9psf4\" (UniqueName: \"kubernetes.io/projected/fcbc2a5c-4700-4022-bbfa-d8ca31436697-kube-api-access-9psf4\") pod \"service-ca-865cb79987-lvh7c\" (UID: \"fcbc2a5c-4700-4022-bbfa-d8ca31436697\") " pod="openshift-service-ca/service-ca-865cb79987-lvh7c" Apr 23 17:56:00.309597 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:00.309557 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/39d1dfb2-3f50-4093-890b-69aef045ebb2-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-6np7g\" (UID: \"39d1dfb2-3f50-4093-890b-69aef045ebb2\") " pod="openshift-insights/insights-runtime-extractor-6np7g" Apr 23 17:56:00.309718 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:00.309681 2570 secret.go:189] Couldn't get secret openshift-insights/insights-runtime-extractor-tls: secret "insights-runtime-extractor-tls" not found Apr 23 17:56:00.309765 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:00.309736 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39d1dfb2-3f50-4093-890b-69aef045ebb2-insights-runtime-extractor-tls podName:39d1dfb2-3f50-4093-890b-69aef045ebb2 nodeName:}" failed. No retries permitted until 2026-04-23 17:56:01.309721731 +0000 UTC m=+217.249735853 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "insights-runtime-extractor-tls" (UniqueName: "kubernetes.io/secret/39d1dfb2-3f50-4093-890b-69aef045ebb2-insights-runtime-extractor-tls") pod "insights-runtime-extractor-6np7g" (UID: "39d1dfb2-3f50-4093-890b-69aef045ebb2") : secret "insights-runtime-extractor-tls" not found Apr 23 17:56:00.331607 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:00.331562 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-865cb79987-lvh7c" Apr 23 17:56:00.394748 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:00.394724 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-7nkkq_65fafb1b-d34c-41dc-86f9-c06f2cf0487e/node-ca/0.log" Apr 23 17:56:00.442363 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:00.442336 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-865cb79987-lvh7c"] Apr 23 17:56:00.445523 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:56:00.445497 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcbc2a5c_4700_4022_bbfa_d8ca31436697.slice/crio-6a61e6672115ff1429a2642050021deb116703854c0ec7ac9efbf9c07c198d60 WatchSource:0}: Error finding container 6a61e6672115ff1429a2642050021deb116703854c0ec7ac9efbf9c07c198d60: Status 404 returned error can't find the container with id 6a61e6672115ff1429a2642050021deb116703854c0ec7ac9efbf9c07c198d60 Apr 23 17:56:01.119875 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:01.119840 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-865cb79987-lvh7c" event={"ID":"fcbc2a5c-4700-4022-bbfa-d8ca31436697","Type":"ContainerStarted","Data":"9dc5abb8fffd5ddda422db64e23d2cae0db90c843d59b711bd9915e1913a21a6"} Apr 23 17:56:01.119875 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:01.119874 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca/service-ca-865cb79987-lvh7c" event={"ID":"fcbc2a5c-4700-4022-bbfa-d8ca31436697","Type":"ContainerStarted","Data":"6a61e6672115ff1429a2642050021deb116703854c0ec7ac9efbf9c07c198d60"} Apr 23 17:56:01.133866 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:01.133824 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-865cb79987-lvh7c" podStartSLOduration=1.133808167 podStartE2EDuration="1.133808167s" podCreationTimestamp="2026-04-23 17:56:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:56:01.132666864 +0000 UTC m=+217.072681004" watchObservedRunningTime="2026-04-23 17:56:01.133808167 +0000 UTC m=+217.073822308" Apr 23 17:56:01.319169 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:01.319139 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/39d1dfb2-3f50-4093-890b-69aef045ebb2-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-6np7g\" (UID: \"39d1dfb2-3f50-4093-890b-69aef045ebb2\") " pod="openshift-insights/insights-runtime-extractor-6np7g" Apr 23 17:56:01.319312 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:01.319273 2570 secret.go:189] Couldn't get secret openshift-insights/insights-runtime-extractor-tls: secret "insights-runtime-extractor-tls" not found Apr 23 17:56:01.319355 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:01.319332 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39d1dfb2-3f50-4093-890b-69aef045ebb2-insights-runtime-extractor-tls podName:39d1dfb2-3f50-4093-890b-69aef045ebb2 nodeName:}" failed. No retries permitted until 2026-04-23 17:56:03.319317682 +0000 UTC m=+219.259331800 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "insights-runtime-extractor-tls" (UniqueName: "kubernetes.io/secret/39d1dfb2-3f50-4093-890b-69aef045ebb2-insights-runtime-extractor-tls") pod "insights-runtime-extractor-6np7g" (UID: "39d1dfb2-3f50-4093-890b-69aef045ebb2") : secret "insights-runtime-extractor-tls" not found Apr 23 17:56:03.337153 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:03.337116 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/39d1dfb2-3f50-4093-890b-69aef045ebb2-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-6np7g\" (UID: \"39d1dfb2-3f50-4093-890b-69aef045ebb2\") " pod="openshift-insights/insights-runtime-extractor-6np7g" Apr 23 17:56:03.337574 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:03.337282 2570 secret.go:189] Couldn't get secret openshift-insights/insights-runtime-extractor-tls: secret "insights-runtime-extractor-tls" not found Apr 23 17:56:03.337574 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:03.337365 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39d1dfb2-3f50-4093-890b-69aef045ebb2-insights-runtime-extractor-tls podName:39d1dfb2-3f50-4093-890b-69aef045ebb2 nodeName:}" failed. No retries permitted until 2026-04-23 17:56:07.337344551 +0000 UTC m=+223.277358674 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "insights-runtime-extractor-tls" (UniqueName: "kubernetes.io/secret/39d1dfb2-3f50-4093-890b-69aef045ebb2-insights-runtime-extractor-tls") pod "insights-runtime-extractor-6np7g" (UID: "39d1dfb2-3f50-4093-890b-69aef045ebb2") : secret "insights-runtime-extractor-tls" not found Apr 23 17:56:04.952546 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:04.952513 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dcab8057-5f29-4508-9628-e8ee8882286b-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-75587bd455-7hdwh\" (UID: \"dcab8057-5f29-4508-9628-e8ee8882286b\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-7hdwh" Apr 23 17:56:04.952971 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:04.952568 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d39ff941-5050-456f-862b-ec6962d9c97c-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-w65xt\" (UID: \"d39ff941-5050-456f-862b-ec6962d9c97c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-w65xt" Apr 23 17:56:04.952971 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:04.952685 2570 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Apr 23 17:56:04.952971 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:04.952734 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9384d4c-b454-4102-b989-7bd167cee9f4-service-ca-bundle\") pod \"router-default-64b76f7768-4tktg\" (UID: \"f9384d4c-b454-4102-b989-7bd167cee9f4\") " pod="openshift-ingress/router-default-64b76f7768-4tktg" Apr 23 17:56:04.952971 ip-10-0-141-209 kubenswrapper[2570]: E0423 
17:56:04.952755 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dcab8057-5f29-4508-9628-e8ee8882286b-cluster-monitoring-operator-tls podName:dcab8057-5f29-4508-9628-e8ee8882286b nodeName:}" failed. No retries permitted until 2026-04-23 17:56:20.952734825 +0000 UTC m=+236.892748946 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/dcab8057-5f29-4508-9628-e8ee8882286b-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-75587bd455-7hdwh" (UID: "dcab8057-5f29-4508-9628-e8ee8882286b") : secret "cluster-monitoring-operator-tls" not found Apr 23 17:56:04.952971 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:04.952794 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9384d4c-b454-4102-b989-7bd167cee9f4-metrics-certs\") pod \"router-default-64b76f7768-4tktg\" (UID: \"f9384d4c-b454-4102-b989-7bd167cee9f4\") " pod="openshift-ingress/router-default-64b76f7768-4tktg" Apr 23 17:56:04.952971 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:04.952821 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9384d4c-b454-4102-b989-7bd167cee9f4-service-ca-bundle podName:f9384d4c-b454-4102-b989-7bd167cee9f4 nodeName:}" failed. No retries permitted until 2026-04-23 17:56:20.952804501 +0000 UTC m=+236.892818621 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/f9384d4c-b454-4102-b989-7bd167cee9f4-service-ca-bundle") pod "router-default-64b76f7768-4tktg" (UID: "f9384d4c-b454-4102-b989-7bd167cee9f4") : configmap references non-existent config key: service-ca.crt Apr 23 17:56:04.952971 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:04.952852 2570 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found Apr 23 17:56:04.952971 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:04.952895 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9384d4c-b454-4102-b989-7bd167cee9f4-metrics-certs podName:f9384d4c-b454-4102-b989-7bd167cee9f4 nodeName:}" failed. No retries permitted until 2026-04-23 17:56:20.952883112 +0000 UTC m=+236.892897244 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f9384d4c-b454-4102-b989-7bd167cee9f4-metrics-certs") pod "router-default-64b76f7768-4tktg" (UID: "f9384d4c-b454-4102-b989-7bd167cee9f4") : secret "router-metrics-certs-default" not found Apr 23 17:56:04.955001 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:04.954971 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d39ff941-5050-456f-862b-ec6962d9c97c-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-w65xt\" (UID: \"d39ff941-5050-456f-862b-ec6962d9c97c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-w65xt" Apr 23 17:56:05.045802 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:05.045776 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-w65xt" Apr 23 17:56:05.054140 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:05.054112 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0da25b59-22b1-4319-b795-3e7d2bc7db04-cert\") pod \"ingress-canary-c94n2\" (UID: \"0da25b59-22b1-4319-b795-3e7d2bc7db04\") " pod="openshift-ingress-canary/ingress-canary-c94n2" Apr 23 17:56:05.054264 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:05.054169 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-registry-tls\") pod \"image-registry-69557c4bff-g4kkh\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") " pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" Apr 23 17:56:05.054264 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:05.054233 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/72bc8f52-06f5-4c26-b9dc-1db461cbb3cd-networking-console-plugin-cert\") pod \"networking-console-plugin-cb95c66f6-2xz9v\" (UID: \"72bc8f52-06f5-4c26-b9dc-1db461cbb3cd\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-2xz9v" Apr 23 17:56:05.054372 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:05.054275 2570 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 23 17:56:05.054372 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:05.054337 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0da25b59-22b1-4319-b795-3e7d2bc7db04-cert podName:0da25b59-22b1-4319-b795-3e7d2bc7db04 nodeName:}" failed. No retries permitted until 2026-04-23 17:56:21.054318038 +0000 UTC m=+236.994332164 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0da25b59-22b1-4319-b795-3e7d2bc7db04-cert") pod "ingress-canary-c94n2" (UID: "0da25b59-22b1-4319-b795-3e7d2bc7db04") : secret "canary-serving-cert" not found Apr 23 17:56:05.054372 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:05.054337 2570 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 23 17:56:05.054372 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:05.054358 2570 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-69557c4bff-g4kkh: secret "image-registry-tls" not found Apr 23 17:56:05.054591 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:05.054385 2570 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 23 17:56:05.054591 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:05.054405 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-registry-tls podName:37d5c9de-eb28-492f-8280-8ed7c5e432be nodeName:}" failed. No retries permitted until 2026-04-23 17:56:21.054389406 +0000 UTC m=+236.994403531 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-registry-tls") pod "image-registry-69557c4bff-g4kkh" (UID: "37d5c9de-eb28-492f-8280-8ed7c5e432be") : secret "image-registry-tls" not found Apr 23 17:56:05.054591 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:05.054450 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72bc8f52-06f5-4c26-b9dc-1db461cbb3cd-networking-console-plugin-cert podName:72bc8f52-06f5-4c26-b9dc-1db461cbb3cd nodeName:}" failed. 
No retries permitted until 2026-04-23 17:56:21.054436266 +0000 UTC m=+236.994450387 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/72bc8f52-06f5-4c26-b9dc-1db461cbb3cd-networking-console-plugin-cert") pod "networking-console-plugin-cb95c66f6-2xz9v" (UID: "72bc8f52-06f5-4c26-b9dc-1db461cbb3cd") : secret "networking-console-plugin-cert" not found Apr 23 17:56:05.155673 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:05.155645 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/912b083f-5a59-4119-91fd-47ba37c5ed53-metrics-tls\") pod \"dns-default-mfdbd\" (UID: \"912b083f-5a59-4119-91fd-47ba37c5ed53\") " pod="openshift-dns/dns-default-mfdbd" Apr 23 17:56:05.155831 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:05.155765 2570 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 23 17:56:05.155891 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:05.155831 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/912b083f-5a59-4119-91fd-47ba37c5ed53-metrics-tls podName:912b083f-5a59-4119-91fd-47ba37c5ed53 nodeName:}" failed. No retries permitted until 2026-04-23 17:56:21.155812824 +0000 UTC m=+237.095826962 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/912b083f-5a59-4119-91fd-47ba37c5ed53-metrics-tls") pod "dns-default-mfdbd" (UID: "912b083f-5a59-4119-91fd-47ba37c5ed53") : secret "dns-default-metrics-tls" not found Apr 23 17:56:05.162725 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:05.162701 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-w65xt"] Apr 23 17:56:06.135657 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:06.135611 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-w65xt" event={"ID":"d39ff941-5050-456f-862b-ec6962d9c97c","Type":"ContainerStarted","Data":"66dc7e70659c10783448985670bd69ce45b9be62170c5b7717ea660be8c338a9"} Apr 23 17:56:07.140075 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:07.140042 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-w65xt" event={"ID":"d39ff941-5050-456f-862b-ec6962d9c97c","Type":"ContainerStarted","Data":"8468da35350e4dab445c97b8c243ebf8d13121a336002127e7f6041df165e92e"} Apr 23 17:56:07.140075 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:07.140077 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-w65xt" event={"ID":"d39ff941-5050-456f-862b-ec6962d9c97c","Type":"ContainerStarted","Data":"d1052a04e75f3fe10418a9cd98f7627bbb1cf0a6c4aabaeb88bba97a17605c29"} Apr 23 17:56:07.153533 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:07.153489 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-w65xt" podStartSLOduration=234.781899132 podStartE2EDuration="3m56.153477411s" podCreationTimestamp="2026-04-23 17:52:11 +0000 UTC" firstStartedPulling="2026-04-23 
17:56:05.205347775 +0000 UTC m=+221.145361893" lastFinishedPulling="2026-04-23 17:56:06.576926054 +0000 UTC m=+222.516940172" observedRunningTime="2026-04-23 17:56:07.152798337 +0000 UTC m=+223.092812477" watchObservedRunningTime="2026-04-23 17:56:07.153477411 +0000 UTC m=+223.093491550" Apr 23 17:56:07.375098 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:07.375061 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/39d1dfb2-3f50-4093-890b-69aef045ebb2-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-6np7g\" (UID: \"39d1dfb2-3f50-4093-890b-69aef045ebb2\") " pod="openshift-insights/insights-runtime-extractor-6np7g" Apr 23 17:56:07.375270 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:07.375253 2570 secret.go:189] Couldn't get secret openshift-insights/insights-runtime-extractor-tls: secret "insights-runtime-extractor-tls" not found Apr 23 17:56:07.375340 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:07.375324 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39d1dfb2-3f50-4093-890b-69aef045ebb2-insights-runtime-extractor-tls podName:39d1dfb2-3f50-4093-890b-69aef045ebb2 nodeName:}" failed. No retries permitted until 2026-04-23 17:56:15.375302346 +0000 UTC m=+231.315316470 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "insights-runtime-extractor-tls" (UniqueName: "kubernetes.io/secret/39d1dfb2-3f50-4093-890b-69aef045ebb2-insights-runtime-extractor-tls") pod "insights-runtime-extractor-6np7g" (UID: "39d1dfb2-3f50-4093-890b-69aef045ebb2") : secret "insights-runtime-extractor-tls" not found Apr 23 17:56:07.779053 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:07.779010 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c339fa02-2445-42c2-b7ee-5388fb338129-metrics-certs\") pod \"network-metrics-daemon-757d7\" (UID: \"c339fa02-2445-42c2-b7ee-5388fb338129\") " pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:56:07.779218 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:07.779148 2570 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Apr 23 17:56:07.779218 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:07.779209 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c339fa02-2445-42c2-b7ee-5388fb338129-metrics-certs podName:c339fa02-2445-42c2-b7ee-5388fb338129 nodeName:}" failed. No retries permitted until 2026-04-23 17:57:11.779190869 +0000 UTC m=+287.719204987 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c339fa02-2445-42c2-b7ee-5388fb338129-metrics-certs") pod "network-metrics-daemon-757d7" (UID: "c339fa02-2445-42c2-b7ee-5388fb338129") : secret "metrics-daemon-secret" not found Apr 23 17:56:10.066833 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:10.066796 2570 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9075b191fc259927d28743bd959947151acec2ce67c1e2aa5508429f1dc1b5c0" cmd=["/bin/bash","-c","test -f /ready/ready"] Apr 23 17:56:10.067676 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:10.067646 2570 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9075b191fc259927d28743bd959947151acec2ce67c1e2aa5508429f1dc1b5c0" cmd=["/bin/bash","-c","test -f /ready/ready"] Apr 23 17:56:10.068492 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:10.068463 2570 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9075b191fc259927d28743bd959947151acec2ce67c1e2aa5508429f1dc1b5c0" cmd=["/bin/bash","-c","test -f /ready/ready"] Apr 23 17:56:10.068575 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:10.068504 2570 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-l82lj" podUID="daa6f738-c430-4a35-9826-ea29f862b6fe" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Apr 23 17:56:13.673066 ip-10-0-141-209 kubenswrapper[2570]: 
I0423 17:56:13.673033 2570 scope.go:117] "RemoveContainer" containerID="bca074dfd2592c3181d53593a2a5b3aed72744998c15f6b5438c8b0fb744a74c" Apr 23 17:56:14.161275 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:14.161209 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-dzhrv_3f006f82-b551-4a19-b684-091814d45d54/console-operator/2.log" Apr 23 17:56:14.161664 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:14.161647 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-dzhrv_3f006f82-b551-4a19-b684-091814d45d54/console-operator/1.log" Apr 23 17:56:14.161726 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:14.161684 2570 generic.go:358] "Generic (PLEG): container finished" podID="3f006f82-b551-4a19-b684-091814d45d54" containerID="d84ef3aa6e31df2ac9a04163496a43fd8cc0315e77b6582b8ff7e97917abd859" exitCode=255 Apr 23 17:56:14.161726 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:14.161715 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-9d4b6777b-dzhrv" event={"ID":"3f006f82-b551-4a19-b684-091814d45d54","Type":"ContainerDied","Data":"d84ef3aa6e31df2ac9a04163496a43fd8cc0315e77b6582b8ff7e97917abd859"} Apr 23 17:56:14.161789 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:14.161744 2570 scope.go:117] "RemoveContainer" containerID="bca074dfd2592c3181d53593a2a5b3aed72744998c15f6b5438c8b0fb744a74c" Apr 23 17:56:14.162134 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:14.162114 2570 scope.go:117] "RemoveContainer" containerID="d84ef3aa6e31df2ac9a04163496a43fd8cc0315e77b6582b8ff7e97917abd859" Apr 23 17:56:14.162317 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:14.162297 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=console-operator 
pod=console-operator-9d4b6777b-dzhrv_openshift-console-operator(3f006f82-b551-4a19-b684-091814d45d54)\"" pod="openshift-console-operator/console-operator-9d4b6777b-dzhrv" podUID="3f006f82-b551-4a19-b684-091814d45d54" Apr 23 17:56:15.165912 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:15.165882 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-dzhrv_3f006f82-b551-4a19-b684-091814d45d54/console-operator/2.log" Apr 23 17:56:15.441079 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:15.441053 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/39d1dfb2-3f50-4093-890b-69aef045ebb2-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-6np7g\" (UID: \"39d1dfb2-3f50-4093-890b-69aef045ebb2\") " pod="openshift-insights/insights-runtime-extractor-6np7g" Apr 23 17:56:15.443393 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:15.443371 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/39d1dfb2-3f50-4093-890b-69aef045ebb2-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-6np7g\" (UID: \"39d1dfb2-3f50-4093-890b-69aef045ebb2\") " pod="openshift-insights/insights-runtime-extractor-6np7g" Apr 23 17:56:15.463386 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:15.463361 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-runtime-extractor-6np7g" Apr 23 17:56:15.581507 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:15.581481 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-6np7g"] Apr 23 17:56:15.584451 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:56:15.584402 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39d1dfb2_3f50_4093_890b_69aef045ebb2.slice/crio-fa6b6e4fc8d844d83624daadd1f6e46ccb8b3130eceb496ee067a7de2e20f18c WatchSource:0}: Error finding container fa6b6e4fc8d844d83624daadd1f6e46ccb8b3130eceb496ee067a7de2e20f18c: Status 404 returned error can't find the container with id fa6b6e4fc8d844d83624daadd1f6e46ccb8b3130eceb496ee067a7de2e20f18c Apr 23 17:56:16.169605 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:16.169569 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-6np7g" event={"ID":"39d1dfb2-3f50-4093-890b-69aef045ebb2","Type":"ContainerStarted","Data":"a20c501170e2ce657e73337694b322bc9b923193860ecfd6194ab5c898e5e203"} Apr 23 17:56:16.169970 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:16.169612 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-6np7g" event={"ID":"39d1dfb2-3f50-4093-890b-69aef045ebb2","Type":"ContainerStarted","Data":"fa6b6e4fc8d844d83624daadd1f6e46ccb8b3130eceb496ee067a7de2e20f18c"} Apr 23 17:56:17.173825 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:17.173781 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-6np7g" event={"ID":"39d1dfb2-3f50-4093-890b-69aef045ebb2","Type":"ContainerStarted","Data":"836dc67badf07ce206a24901fe2b3f2236c829b0945a82e5fd600fdafe7c9fa0"} Apr 23 17:56:18.064478 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:18.064449 2570 kubelet.go:2658] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8tv94" Apr 23 17:56:18.177508 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:18.177479 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-6np7g" event={"ID":"39d1dfb2-3f50-4093-890b-69aef045ebb2","Type":"ContainerStarted","Data":"845e371325d69be5a93becaeb2eeaa783b5e180db38577d630a360980d87b092"} Apr 23 17:56:18.194764 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:18.194724 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-runtime-extractor-6np7g" podStartSLOduration=17.241323383 podStartE2EDuration="19.194710594s" podCreationTimestamp="2026-04-23 17:55:59 +0000 UTC" firstStartedPulling="2026-04-23 17:56:15.635504209 +0000 UTC m=+231.575518326" lastFinishedPulling="2026-04-23 17:56:17.588891403 +0000 UTC m=+233.528905537" observedRunningTime="2026-04-23 17:56:18.193944356 +0000 UTC m=+234.133958497" watchObservedRunningTime="2026-04-23 17:56:18.194710594 +0000 UTC m=+234.134724734" Apr 23 17:56:19.457430 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:19.457392 2570 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-9d4b6777b-dzhrv" Apr 23 17:56:19.457919 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:19.457446 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-9d4b6777b-dzhrv" Apr 23 17:56:19.457919 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:19.457744 2570 scope.go:117] "RemoveContainer" containerID="d84ef3aa6e31df2ac9a04163496a43fd8cc0315e77b6582b8ff7e97917abd859" Apr 23 17:56:19.458030 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:19.457926 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed 
container=console-operator pod=console-operator-9d4b6777b-dzhrv_openshift-console-operator(3f006f82-b551-4a19-b684-091814d45d54)\"" pod="openshift-console-operator/console-operator-9d4b6777b-dzhrv" podUID="3f006f82-b551-4a19-b684-091814d45d54" Apr 23 17:56:20.067442 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:20.067378 2570 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9075b191fc259927d28743bd959947151acec2ce67c1e2aa5508429f1dc1b5c0" cmd=["/bin/bash","-c","test -f /ready/ready"] Apr 23 17:56:20.068535 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:20.068486 2570 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9075b191fc259927d28743bd959947151acec2ce67c1e2aa5508429f1dc1b5c0" cmd=["/bin/bash","-c","test -f /ready/ready"] Apr 23 17:56:20.069504 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:20.069474 2570 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9075b191fc259927d28743bd959947151acec2ce67c1e2aa5508429f1dc1b5c0" cmd=["/bin/bash","-c","test -f /ready/ready"] Apr 23 17:56:20.069579 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:20.069517 2570 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-l82lj" podUID="daa6f738-c430-4a35-9826-ea29f862b6fe" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Apr 23 17:56:20.981992 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:20.981953 2570 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9384d4c-b454-4102-b989-7bd167cee9f4-service-ca-bundle\") pod \"router-default-64b76f7768-4tktg\" (UID: \"f9384d4c-b454-4102-b989-7bd167cee9f4\") " pod="openshift-ingress/router-default-64b76f7768-4tktg" Apr 23 17:56:20.981992 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:20.981993 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9384d4c-b454-4102-b989-7bd167cee9f4-metrics-certs\") pod \"router-default-64b76f7768-4tktg\" (UID: \"f9384d4c-b454-4102-b989-7bd167cee9f4\") " pod="openshift-ingress/router-default-64b76f7768-4tktg" Apr 23 17:56:20.982572 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:20.982041 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dcab8057-5f29-4508-9628-e8ee8882286b-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-75587bd455-7hdwh\" (UID: \"dcab8057-5f29-4508-9628-e8ee8882286b\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-7hdwh" Apr 23 17:56:20.982613 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:20.982570 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9384d4c-b454-4102-b989-7bd167cee9f4-service-ca-bundle\") pod \"router-default-64b76f7768-4tktg\" (UID: \"f9384d4c-b454-4102-b989-7bd167cee9f4\") " pod="openshift-ingress/router-default-64b76f7768-4tktg" Apr 23 17:56:20.984786 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:20.984755 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dcab8057-5f29-4508-9628-e8ee8882286b-cluster-monitoring-operator-tls\") pod 
\"cluster-monitoring-operator-75587bd455-7hdwh\" (UID: \"dcab8057-5f29-4508-9628-e8ee8882286b\") " pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-7hdwh" Apr 23 17:56:20.984910 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:20.984818 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9384d4c-b454-4102-b989-7bd167cee9f4-metrics-certs\") pod \"router-default-64b76f7768-4tktg\" (UID: \"f9384d4c-b454-4102-b989-7bd167cee9f4\") " pod="openshift-ingress/router-default-64b76f7768-4tktg" Apr 23 17:56:21.082481 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:21.082451 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0da25b59-22b1-4319-b795-3e7d2bc7db04-cert\") pod \"ingress-canary-c94n2\" (UID: \"0da25b59-22b1-4319-b795-3e7d2bc7db04\") " pod="openshift-ingress-canary/ingress-canary-c94n2" Apr 23 17:56:21.082602 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:21.082491 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-registry-tls\") pod \"image-registry-69557c4bff-g4kkh\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") " pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" Apr 23 17:56:21.082602 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:21.082531 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/72bc8f52-06f5-4c26-b9dc-1db461cbb3cd-networking-console-plugin-cert\") pod \"networking-console-plugin-cb95c66f6-2xz9v\" (UID: \"72bc8f52-06f5-4c26-b9dc-1db461cbb3cd\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-2xz9v" Apr 23 17:56:21.085007 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:21.084977 2570 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-registry-tls\") pod \"image-registry-69557c4bff-g4kkh\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") " pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" Apr 23 17:56:21.085108 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:21.084977 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0da25b59-22b1-4319-b795-3e7d2bc7db04-cert\") pod \"ingress-canary-c94n2\" (UID: \"0da25b59-22b1-4319-b795-3e7d2bc7db04\") " pod="openshift-ingress-canary/ingress-canary-c94n2" Apr 23 17:56:21.085108 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:21.085041 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/72bc8f52-06f5-4c26-b9dc-1db461cbb3cd-networking-console-plugin-cert\") pod \"networking-console-plugin-cb95c66f6-2xz9v\" (UID: \"72bc8f52-06f5-4c26-b9dc-1db461cbb3cd\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-2xz9v" Apr 23 17:56:21.182879 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:21.182853 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/912b083f-5a59-4119-91fd-47ba37c5ed53-metrics-tls\") pod \"dns-default-mfdbd\" (UID: \"912b083f-5a59-4119-91fd-47ba37c5ed53\") " pod="openshift-dns/dns-default-mfdbd" Apr 23 17:56:21.185031 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:21.185005 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/912b083f-5a59-4119-91fd-47ba37c5ed53-metrics-tls\") pod \"dns-default-mfdbd\" (UID: \"912b083f-5a59-4119-91fd-47ba37c5ed53\") " pod="openshift-dns/dns-default-mfdbd" Apr 23 17:56:21.232402 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:21.232353 2570 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-7hdwh" Apr 23 17:56:21.268661 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:21.268631 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-64b76f7768-4tktg" Apr 23 17:56:21.323323 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:21.323292 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" Apr 23 17:56:21.331211 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:21.331142 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-cb95c66f6-2xz9v" Apr 23 17:56:21.343143 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:21.343120 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-c94n2" Apr 23 17:56:21.357162 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:21.357106 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-75587bd455-7hdwh"] Apr 23 17:56:21.361712 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:56:21.361658 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcab8057_5f29_4508_9628_e8ee8882286b.slice/crio-62c5f2067cc47ce6523ae2e3118568c4070bd7d688b79688a588e7154264a933 WatchSource:0}: Error finding container 62c5f2067cc47ce6523ae2e3118568c4070bd7d688b79688a588e7154264a933: Status 404 returned error can't find the container with id 62c5f2067cc47ce6523ae2e3118568c4070bd7d688b79688a588e7154264a933 Apr 23 17:56:21.384345 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:21.383255 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-mfdbd" Apr 23 17:56:21.413848 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:21.413783 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress/router-default-64b76f7768-4tktg"] Apr 23 17:56:21.420153 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:56:21.420113 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9384d4c_b454_4102_b989_7bd167cee9f4.slice/crio-2274210a5e7e8f0dacca79dc967a623247587afaa21bb05233e248d6d141d2dc WatchSource:0}: Error finding container 2274210a5e7e8f0dacca79dc967a623247587afaa21bb05233e248d6d141d2dc: Status 404 returned error can't find the container with id 2274210a5e7e8f0dacca79dc967a623247587afaa21bb05233e248d6d141d2dc Apr 23 17:56:21.498765 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:21.495541 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-cb95c66f6-2xz9v"] Apr 23 17:56:21.505430 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:21.502330 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-69557c4bff-g4kkh"] Apr 23 17:56:21.540597 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:21.540556 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-c94n2"] Apr 23 17:56:21.544979 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:56:21.544871 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0da25b59_22b1_4319_b795_3e7d2bc7db04.slice/crio-ceaf1127e477caae4b51e60123c0f7745b729775131ef4c16737f2c548aaf12b WatchSource:0}: Error finding container ceaf1127e477caae4b51e60123c0f7745b729775131ef4c16737f2c548aaf12b: Status 404 returned error can't find the container with id ceaf1127e477caae4b51e60123c0f7745b729775131ef4c16737f2c548aaf12b Apr 23 17:56:21.574073 
ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:21.574046 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mfdbd"] Apr 23 17:56:21.576764 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:56:21.576736 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod912b083f_5a59_4119_91fd_47ba37c5ed53.slice/crio-4737eefc4c0664838b68700173b99006339c100cf6104e9c866af41579cbad07 WatchSource:0}: Error finding container 4737eefc4c0664838b68700173b99006339c100cf6104e9c866af41579cbad07: Status 404 returned error can't find the container with id 4737eefc4c0664838b68700173b99006339c100cf6104e9c866af41579cbad07 Apr 23 17:56:22.190439 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:22.190077 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" event={"ID":"37d5c9de-eb28-492f-8280-8ed7c5e432be","Type":"ContainerStarted","Data":"a460dd9f0d029a59c5a1c5a50b14992e0f0ca5cce1034fc6613ee4be0d7963e9"} Apr 23 17:56:22.190439 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:22.190118 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" event={"ID":"37d5c9de-eb28-492f-8280-8ed7c5e432be","Type":"ContainerStarted","Data":"44f0a5feb01841e3ab9b11f8134d65724c55044948dc458a1b3bce283cd0c11b"} Apr 23 17:56:22.191855 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:22.191069 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" Apr 23 17:56:22.195111 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:22.195064 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mfdbd" event={"ID":"912b083f-5a59-4119-91fd-47ba37c5ed53","Type":"ContainerStarted","Data":"4737eefc4c0664838b68700173b99006339c100cf6104e9c866af41579cbad07"} Apr 23 17:56:22.199262 
ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:22.199205 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-c94n2" event={"ID":"0da25b59-22b1-4319-b795-3e7d2bc7db04","Type":"ContainerStarted","Data":"ceaf1127e477caae4b51e60123c0f7745b729775131ef4c16737f2c548aaf12b"} Apr 23 17:56:22.200885 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:22.200835 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-64b76f7768-4tktg" event={"ID":"f9384d4c-b454-4102-b989-7bd167cee9f4","Type":"ContainerStarted","Data":"7a903f8d4952726d8157d614ae6c2e7e0776a241a688220fbfde617ca2fb6c32"} Apr 23 17:56:22.200885 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:22.200866 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-64b76f7768-4tktg" event={"ID":"f9384d4c-b454-4102-b989-7bd167cee9f4","Type":"ContainerStarted","Data":"2274210a5e7e8f0dacca79dc967a623247587afaa21bb05233e248d6d141d2dc"} Apr 23 17:56:22.202642 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:22.202615 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-cb95c66f6-2xz9v" event={"ID":"72bc8f52-06f5-4c26-b9dc-1db461cbb3cd","Type":"ContainerStarted","Data":"46c73e4c5bb3d35d65ddcb49adcece8cf224eade826c6e1d2ffec7ef80a2d247"} Apr 23 17:56:22.205107 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:22.205067 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-7hdwh" event={"ID":"dcab8057-5f29-4508-9628-e8ee8882286b","Type":"ContainerStarted","Data":"62c5f2067cc47ce6523ae2e3118568c4070bd7d688b79688a588e7154264a933"} Apr 23 17:56:22.229493 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:22.229448 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" podStartSLOduration=251.229433517 
podStartE2EDuration="4m11.229433517s" podCreationTimestamp="2026-04-23 17:52:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:56:22.228297403 +0000 UTC m=+238.168311569" watchObservedRunningTime="2026-04-23 17:56:22.229433517 +0000 UTC m=+238.169447662" Apr 23 17:56:22.240861 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:22.240821 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-69557c4bff-g4kkh"] Apr 23 17:56:22.253672 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:22.253457 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-64b76f7768-4tktg" podStartSLOduration=251.253447141 podStartE2EDuration="4m11.253447141s" podCreationTimestamp="2026-04-23 17:52:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:56:22.253338815 +0000 UTC m=+238.193352956" watchObservedRunningTime="2026-04-23 17:56:22.253447141 +0000 UTC m=+238.193461286" Apr 23 17:56:22.269533 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:22.269318 2570 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-64b76f7768-4tktg" Apr 23 17:56:22.272905 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:22.272685 2570 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-64b76f7768-4tktg" Apr 23 17:56:23.208899 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:23.208869 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-64b76f7768-4tktg" Apr 23 17:56:23.210208 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:23.210184 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ingress/router-default-64b76f7768-4tktg"
Apr 23 17:56:24.080092 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:24.080067 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-l82lj_daa6f738-c430-4a35-9826-ea29f862b6fe/kube-multus-additional-cni-plugins/0.log"
Apr 23 17:56:24.080185 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:24.080138 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-l82lj"
Apr 23 17:56:24.111981 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:24.111736 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/daa6f738-c430-4a35-9826-ea29f862b6fe-tuning-conf-dir\") pod \"daa6f738-c430-4a35-9826-ea29f862b6fe\" (UID: \"daa6f738-c430-4a35-9826-ea29f862b6fe\") "
Apr 23 17:56:24.111981 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:24.111782 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/daa6f738-c430-4a35-9826-ea29f862b6fe-cni-sysctl-allowlist\") pod \"daa6f738-c430-4a35-9826-ea29f862b6fe\" (UID: \"daa6f738-c430-4a35-9826-ea29f862b6fe\") "
Apr 23 17:56:24.111981 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:24.111786 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/daa6f738-c430-4a35-9826-ea29f862b6fe-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "daa6f738-c430-4a35-9826-ea29f862b6fe" (UID: "daa6f738-c430-4a35-9826-ea29f862b6fe"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 23 17:56:24.111981 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:24.111857 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpwv4\" (UniqueName: \"kubernetes.io/projected/daa6f738-c430-4a35-9826-ea29f862b6fe-kube-api-access-jpwv4\") pod \"daa6f738-c430-4a35-9826-ea29f862b6fe\" (UID: \"daa6f738-c430-4a35-9826-ea29f862b6fe\") "
Apr 23 17:56:24.111981 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:24.111921 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/daa6f738-c430-4a35-9826-ea29f862b6fe-ready\") pod \"daa6f738-c430-4a35-9826-ea29f862b6fe\" (UID: \"daa6f738-c430-4a35-9826-ea29f862b6fe\") "
Apr 23 17:56:24.112277 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:24.112134 2570 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/daa6f738-c430-4a35-9826-ea29f862b6fe-tuning-conf-dir\") on node \"ip-10-0-141-209.ec2.internal\" DevicePath \"\""
Apr 23 17:56:24.112519 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:24.112376 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/daa6f738-c430-4a35-9826-ea29f862b6fe-ready" (OuterVolumeSpecName: "ready") pod "daa6f738-c430-4a35-9826-ea29f862b6fe" (UID: "daa6f738-c430-4a35-9826-ea29f862b6fe"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Apr 23 17:56:24.112519 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:24.112489 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/daa6f738-c430-4a35-9826-ea29f862b6fe-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "daa6f738-c430-4a35-9826-ea29f862b6fe" (UID: "daa6f738-c430-4a35-9826-ea29f862b6fe"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 17:56:24.114637 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:24.114613 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daa6f738-c430-4a35-9826-ea29f862b6fe-kube-api-access-jpwv4" (OuterVolumeSpecName: "kube-api-access-jpwv4") pod "daa6f738-c430-4a35-9826-ea29f862b6fe" (UID: "daa6f738-c430-4a35-9826-ea29f862b6fe"). InnerVolumeSpecName "kube-api-access-jpwv4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 23 17:56:24.212741 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:24.212720 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-l82lj_daa6f738-c430-4a35-9826-ea29f862b6fe/kube-multus-additional-cni-plugins/0.log"
Apr 23 17:56:24.213088 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:24.212759 2570 generic.go:358] "Generic (PLEG): container finished" podID="daa6f738-c430-4a35-9826-ea29f862b6fe" containerID="9075b191fc259927d28743bd959947151acec2ce67c1e2aa5508429f1dc1b5c0" exitCode=137
Apr 23 17:56:24.213088 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:24.212852 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-l82lj" event={"ID":"daa6f738-c430-4a35-9826-ea29f862b6fe","Type":"ContainerDied","Data":"9075b191fc259927d28743bd959947151acec2ce67c1e2aa5508429f1dc1b5c0"}
Apr 23 17:56:24.213088 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:24.212893 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-l82lj" event={"ID":"daa6f738-c430-4a35-9826-ea29f862b6fe","Type":"ContainerDied","Data":"f7b19f369a5e0d1717413f8f8ac957cbe35b09b119e676c1bd42df60991b8c4f"}
Apr 23 17:56:24.213088 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:24.212907 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-l82lj"
Apr 23 17:56:24.213088 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:24.212915 2570 scope.go:117] "RemoveContainer" containerID="9075b191fc259927d28743bd959947151acec2ce67c1e2aa5508429f1dc1b5c0"
Apr 23 17:56:24.213088 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:24.213004 2570 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jpwv4\" (UniqueName: \"kubernetes.io/projected/daa6f738-c430-4a35-9826-ea29f862b6fe-kube-api-access-jpwv4\") on node \"ip-10-0-141-209.ec2.internal\" DevicePath \"\""
Apr 23 17:56:24.213377 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:24.213170 2570 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/daa6f738-c430-4a35-9826-ea29f862b6fe-ready\") on node \"ip-10-0-141-209.ec2.internal\" DevicePath \"\""
Apr 23 17:56:24.213377 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:24.213222 2570 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/daa6f738-c430-4a35-9826-ea29f862b6fe-cni-sysctl-allowlist\") on node \"ip-10-0-141-209.ec2.internal\" DevicePath \"\""
Apr 23 17:56:24.215772 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:24.215701 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-cb95c66f6-2xz9v" event={"ID":"72bc8f52-06f5-4c26-b9dc-1db461cbb3cd","Type":"ContainerStarted","Data":"5f807a8686c87515baebe294cb05184114830abcb6e7d7a6685f70ad2c0535aa"}
Apr 23 17:56:24.218140 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:24.218115 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-7hdwh" event={"ID":"dcab8057-5f29-4508-9628-e8ee8882286b","Type":"ContainerStarted","Data":"42650f4874191bf6ee6d18ea70f07225b4ec1244fbfc3c60c92bbc2065e9a59f"}
Apr 23 17:56:24.220307 ip-10-0-141-209
kubenswrapper[2570]: I0423 17:56:24.219347 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mfdbd" event={"ID":"912b083f-5a59-4119-91fd-47ba37c5ed53","Type":"ContainerStarted","Data":"b5ea1080e58634df2ec4d65d0d0e0ca4371cb205c133e14c713058347a5945b0"}
Apr 23 17:56:24.222973 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:24.222520 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-c94n2" event={"ID":"0da25b59-22b1-4319-b795-3e7d2bc7db04","Type":"ContainerStarted","Data":"68dee27b49d48b6d738e1fb0d9d76a5480deadee602c431fa183ef20b87e609e"}
Apr 23 17:56:24.226470 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:24.225914 2570 scope.go:117] "RemoveContainer" containerID="9075b191fc259927d28743bd959947151acec2ce67c1e2aa5508429f1dc1b5c0"
Apr 23 17:56:24.226470 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:24.226200 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9075b191fc259927d28743bd959947151acec2ce67c1e2aa5508429f1dc1b5c0\": container with ID starting with 9075b191fc259927d28743bd959947151acec2ce67c1e2aa5508429f1dc1b5c0 not found: ID does not exist" containerID="9075b191fc259927d28743bd959947151acec2ce67c1e2aa5508429f1dc1b5c0"
Apr 23 17:56:24.226470 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:24.226237 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9075b191fc259927d28743bd959947151acec2ce67c1e2aa5508429f1dc1b5c0"} err="failed to get container status \"9075b191fc259927d28743bd959947151acec2ce67c1e2aa5508429f1dc1b5c0\": rpc error: code = NotFound desc = could not find container \"9075b191fc259927d28743bd959947151acec2ce67c1e2aa5508429f1dc1b5c0\": container with ID starting with 9075b191fc259927d28743bd959947151acec2ce67c1e2aa5508429f1dc1b5c0 not found: ID does not exist"
Apr 23 17:56:24.232895 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:24.232641 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-cb95c66f6-2xz9v" podStartSLOduration=61.661115291 podStartE2EDuration="1m4.232627891s" podCreationTimestamp="2026-04-23 17:55:20 +0000 UTC" firstStartedPulling="2026-04-23 17:56:21.491529929 +0000 UTC m=+237.431544061" lastFinishedPulling="2026-04-23 17:56:24.063042528 +0000 UTC m=+240.003056661" observedRunningTime="2026-04-23 17:56:24.231618578 +0000 UTC m=+240.171632718" watchObservedRunningTime="2026-04-23 17:56:24.232627891 +0000 UTC m=+240.172642029"
Apr 23 17:56:24.248054 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:24.248006 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-75587bd455-7hdwh" podStartSLOduration=250.543143468 podStartE2EDuration="4m13.247988389s" podCreationTimestamp="2026-04-23 17:52:11 +0000 UTC" firstStartedPulling="2026-04-23 17:56:21.364145812 +0000 UTC m=+237.304159942" lastFinishedPulling="2026-04-23 17:56:24.068990742 +0000 UTC m=+240.009004863" observedRunningTime="2026-04-23 17:56:24.246138787 +0000 UTC m=+240.186152964" watchObservedRunningTime="2026-04-23 17:56:24.247988389 +0000 UTC m=+240.188002532"
Apr 23 17:56:24.282247 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:24.282175 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-l82lj"]
Apr 23 17:56:24.293019 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:24.292995 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-l82lj"]
Apr 23 17:56:24.302926 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:24.302880 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-c94n2" podStartSLOduration=32.775681997 podStartE2EDuration="35.302864661s" podCreationTimestamp="2026-04-23 17:55:49 +0000 UTC" firstStartedPulling="2026-04-23 17:56:21.547247382 +0000 UTC m=+237.487261500" lastFinishedPulling="2026-04-23 17:56:24.074430047 +0000 UTC m=+240.014444164" observedRunningTime="2026-04-23 17:56:24.302747867 +0000 UTC m=+240.242762208" watchObservedRunningTime="2026-04-23 17:56:24.302864661 +0000 UTC m=+240.242878803"
Apr 23 17:56:24.678061 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:24.678035 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="daa6f738-c430-4a35-9826-ea29f862b6fe" path="/var/lib/kubelet/pods/daa6f738-c430-4a35-9826-ea29f862b6fe/volumes"
Apr 23 17:56:25.226947 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:25.226896 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mfdbd" event={"ID":"912b083f-5a59-4119-91fd-47ba37c5ed53","Type":"ContainerStarted","Data":"7cd0cbb8d2ee61bf088d61a7be8eef69d882a5043edce0c3a97486136d59ed20"}
Apr 23 17:56:25.247783 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:25.247733 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-mfdbd" podStartSLOduration=33.758557503 podStartE2EDuration="36.247719595s" podCreationTimestamp="2026-04-23 17:55:49 +0000 UTC" firstStartedPulling="2026-04-23 17:56:21.578437914 +0000 UTC m=+237.518452031" lastFinishedPulling="2026-04-23 17:56:24.067600001 +0000 UTC m=+240.007614123" observedRunningTime="2026-04-23 17:56:25.246666012 +0000 UTC m=+241.186680152" watchObservedRunningTime="2026-04-23 17:56:25.247719595 +0000 UTC m=+241.187733735"
Apr 23 17:56:26.229867 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:26.229840 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-mfdbd"
Apr 23 17:56:27.109405 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:27.109372 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-jhssk"
Apr 23 17:56:32.060141
ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.060107 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-725tl"]
Apr 23 17:56:32.060515 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.060365 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="daa6f738-c430-4a35-9826-ea29f862b6fe" containerName="kube-multus-additional-cni-plugins"
Apr 23 17:56:32.060515 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.060380 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="daa6f738-c430-4a35-9826-ea29f862b6fe" containerName="kube-multus-additional-cni-plugins"
Apr 23 17:56:32.060515 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.060460 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="daa6f738-c430-4a35-9826-ea29f862b6fe" containerName="kube-multus-additional-cni-plugins"
Apr 23 17:56:32.066967 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.066947 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-725tl"
Apr 23 17:56:32.069109 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.069091 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-tls\""
Apr 23 17:56:32.069695 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.069675 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-dockercfg-qddvv\""
Apr 23 17:56:32.070044 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.070028 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"metrics-client-ca\""
Apr 23 17:56:32.070191 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.070169 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"node-exporter-accelerators-collector-config\""
Apr 23 17:56:32.071053 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.071036 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-kube-rbac-proxy-config\""
Apr 23 17:56:32.172360 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.172333 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/8e4641e7-2db7-4e32-bcc3-f0998ca0f38e-node-exporter-tls\") pod \"node-exporter-725tl\" (UID: \"8e4641e7-2db7-4e32-bcc3-f0998ca0f38e\") " pod="openshift-monitoring/node-exporter-725tl"
Apr 23 17:56:32.172476 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.172379 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/8e4641e7-2db7-4e32-bcc3-f0998ca0f38e-node-exporter-wtmp\") pod \"node-exporter-725tl\" (UID: \"8e4641e7-2db7-4e32-bcc3-f0998ca0f38e\") " pod="openshift-monitoring/node-exporter-725tl"
Apr 23 17:56:32.172476 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.172407 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8e4641e7-2db7-4e32-bcc3-f0998ca0f38e-sys\") pod \"node-exporter-725tl\" (UID: \"8e4641e7-2db7-4e32-bcc3-f0998ca0f38e\") " pod="openshift-monitoring/node-exporter-725tl"
Apr 23 17:56:32.172476 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.172454 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/8e4641e7-2db7-4e32-bcc3-f0998ca0f38e-node-exporter-textfile\") pod \"node-exporter-725tl\" (UID: \"8e4641e7-2db7-4e32-bcc3-f0998ca0f38e\") " pod="openshift-monitoring/node-exporter-725tl"
Apr 23 17:56:32.172592 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.172477 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/8e4641e7-2db7-4e32-bcc3-f0998ca0f38e-root\") pod \"node-exporter-725tl\" (UID: \"8e4641e7-2db7-4e32-bcc3-f0998ca0f38e\") " pod="openshift-monitoring/node-exporter-725tl"
Apr 23 17:56:32.172592 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.172540 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/8e4641e7-2db7-4e32-bcc3-f0998ca0f38e-node-exporter-accelerators-collector-config\") pod \"node-exporter-725tl\" (UID: \"8e4641e7-2db7-4e32-bcc3-f0998ca0f38e\") " pod="openshift-monitoring/node-exporter-725tl"
Apr 23 17:56:32.172592 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.172571 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8e4641e7-2db7-4e32-bcc3-f0998ca0f38e-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-725tl\" (UID: \"8e4641e7-2db7-4e32-bcc3-f0998ca0f38e\") " pod="openshift-monitoring/node-exporter-725tl"
Apr 23 17:56:32.172702 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.172600 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8e4641e7-2db7-4e32-bcc3-f0998ca0f38e-metrics-client-ca\") pod \"node-exporter-725tl\" (UID: \"8e4641e7-2db7-4e32-bcc3-f0998ca0f38e\") " pod="openshift-monitoring/node-exporter-725tl"
Apr 23 17:56:32.172702 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.172624 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vkp7\" (UniqueName: \"kubernetes.io/projected/8e4641e7-2db7-4e32-bcc3-f0998ca0f38e-kube-api-access-7vkp7\") pod \"node-exporter-725tl\" (UID: \"8e4641e7-2db7-4e32-bcc3-f0998ca0f38e\") " pod="openshift-monitoring/node-exporter-725tl"
Apr 23 17:56:32.273423 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.273383 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/8e4641e7-2db7-4e32-bcc3-f0998ca0f38e-node-exporter-tls\") pod \"node-exporter-725tl\" (UID: \"8e4641e7-2db7-4e32-bcc3-f0998ca0f38e\") " pod="openshift-monitoring/node-exporter-725tl"
Apr 23 17:56:32.273541 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.273437 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/8e4641e7-2db7-4e32-bcc3-f0998ca0f38e-node-exporter-wtmp\") pod \"node-exporter-725tl\" (UID: \"8e4641e7-2db7-4e32-bcc3-f0998ca0f38e\") " pod="openshift-monitoring/node-exporter-725tl"
Apr 23 17:56:32.273541 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.273457 2570
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8e4641e7-2db7-4e32-bcc3-f0998ca0f38e-sys\") pod \"node-exporter-725tl\" (UID: \"8e4641e7-2db7-4e32-bcc3-f0998ca0f38e\") " pod="openshift-monitoring/node-exporter-725tl"
Apr 23 17:56:32.273541 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.273482 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/8e4641e7-2db7-4e32-bcc3-f0998ca0f38e-node-exporter-textfile\") pod \"node-exporter-725tl\" (UID: \"8e4641e7-2db7-4e32-bcc3-f0998ca0f38e\") " pod="openshift-monitoring/node-exporter-725tl"
Apr 23 17:56:32.273541 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.273512 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/8e4641e7-2db7-4e32-bcc3-f0998ca0f38e-root\") pod \"node-exporter-725tl\" (UID: \"8e4641e7-2db7-4e32-bcc3-f0998ca0f38e\") " pod="openshift-monitoring/node-exporter-725tl"
Apr 23 17:56:32.273746 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.273574 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/8e4641e7-2db7-4e32-bcc3-f0998ca0f38e-node-exporter-accelerators-collector-config\") pod \"node-exporter-725tl\" (UID: \"8e4641e7-2db7-4e32-bcc3-f0998ca0f38e\") " pod="openshift-monitoring/node-exporter-725tl"
Apr 23 17:56:32.273746 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.273596 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/8e4641e7-2db7-4e32-bcc3-f0998ca0f38e-node-exporter-wtmp\") pod \"node-exporter-725tl\" (UID: \"8e4641e7-2db7-4e32-bcc3-f0998ca0f38e\") " pod="openshift-monitoring/node-exporter-725tl"
Apr 23 17:56:32.273746 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.273605 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8e4641e7-2db7-4e32-bcc3-f0998ca0f38e-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-725tl\" (UID: \"8e4641e7-2db7-4e32-bcc3-f0998ca0f38e\") " pod="openshift-monitoring/node-exporter-725tl"
Apr 23 17:56:32.273746 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.273648 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8e4641e7-2db7-4e32-bcc3-f0998ca0f38e-metrics-client-ca\") pod \"node-exporter-725tl\" (UID: \"8e4641e7-2db7-4e32-bcc3-f0998ca0f38e\") " pod="openshift-monitoring/node-exporter-725tl"
Apr 23 17:56:32.273746 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.273665 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7vkp7\" (UniqueName: \"kubernetes.io/projected/8e4641e7-2db7-4e32-bcc3-f0998ca0f38e-kube-api-access-7vkp7\") pod \"node-exporter-725tl\" (UID: \"8e4641e7-2db7-4e32-bcc3-f0998ca0f38e\") " pod="openshift-monitoring/node-exporter-725tl"
Apr 23 17:56:32.274017 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.273990 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/8e4641e7-2db7-4e32-bcc3-f0998ca0f38e-node-exporter-textfile\") pod \"node-exporter-725tl\" (UID: \"8e4641e7-2db7-4e32-bcc3-f0998ca0f38e\") " pod="openshift-monitoring/node-exporter-725tl"
Apr 23 17:56:32.274110 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.274091 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8e4641e7-2db7-4e32-bcc3-f0998ca0f38e-sys\") pod \"node-exporter-725tl\" (UID: \"8e4641e7-2db7-4e32-bcc3-f0998ca0f38e\") " pod="openshift-monitoring/node-exporter-725tl"
Apr 23 17:56:32.274296 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.274246 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/8e4641e7-2db7-4e32-bcc3-f0998ca0f38e-root\") pod \"node-exporter-725tl\" (UID: \"8e4641e7-2db7-4e32-bcc3-f0998ca0f38e\") " pod="openshift-monitoring/node-exporter-725tl"
Apr 23 17:56:32.274492 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.274473 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/8e4641e7-2db7-4e32-bcc3-f0998ca0f38e-node-exporter-accelerators-collector-config\") pod \"node-exporter-725tl\" (UID: \"8e4641e7-2db7-4e32-bcc3-f0998ca0f38e\") " pod="openshift-monitoring/node-exporter-725tl"
Apr 23 17:56:32.274635 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.274614 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8e4641e7-2db7-4e32-bcc3-f0998ca0f38e-metrics-client-ca\") pod \"node-exporter-725tl\" (UID: \"8e4641e7-2db7-4e32-bcc3-f0998ca0f38e\") " pod="openshift-monitoring/node-exporter-725tl"
Apr 23 17:56:32.276031 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.276003 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8e4641e7-2db7-4e32-bcc3-f0998ca0f38e-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-725tl\" (UID: \"8e4641e7-2db7-4e32-bcc3-f0998ca0f38e\") " pod="openshift-monitoring/node-exporter-725tl"
Apr 23 17:56:32.276211 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.276187 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/8e4641e7-2db7-4e32-bcc3-f0998ca0f38e-node-exporter-tls\") pod \"node-exporter-725tl\" (UID: \"8e4641e7-2db7-4e32-bcc3-f0998ca0f38e\") " pod="openshift-monitoring/node-exporter-725tl"
Apr 23 17:56:32.282646 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.282623 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vkp7\" (UniqueName: \"kubernetes.io/projected/8e4641e7-2db7-4e32-bcc3-f0998ca0f38e-kube-api-access-7vkp7\") pod \"node-exporter-725tl\" (UID: \"8e4641e7-2db7-4e32-bcc3-f0998ca0f38e\") " pod="openshift-monitoring/node-exporter-725tl"
Apr 23 17:56:32.376847 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:32.376799 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-725tl"
Apr 23 17:56:32.389109 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:56:32.386477 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e4641e7_2db7_4e32_bcc3_f0998ca0f38e.slice/crio-0ef36ad25f16249d8b4fe9bec7d70d0c7cac77672e177eecc18b54c19fe7634c WatchSource:0}: Error finding container 0ef36ad25f16249d8b4fe9bec7d70d0c7cac77672e177eecc18b54c19fe7634c: Status 404 returned error can't find the container with id 0ef36ad25f16249d8b4fe9bec7d70d0c7cac77672e177eecc18b54c19fe7634c
Apr 23 17:56:33.248267 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:33.248237 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-725tl" event={"ID":"8e4641e7-2db7-4e32-bcc3-f0998ca0f38e","Type":"ContainerStarted","Data":"0ef36ad25f16249d8b4fe9bec7d70d0c7cac77672e177eecc18b54c19fe7634c"}
Apr 23 17:56:34.252101 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:34.252066 2570 generic.go:358] "Generic (PLEG): container finished" podID="8e4641e7-2db7-4e32-bcc3-f0998ca0f38e" containerID="efe8fe2d63c294a0c19ef1b48965655f7169ef8d0b75f19dc7ac5339be1eb53d" exitCode=0
Apr 23 17:56:34.252101 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:34.252102 2570
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-725tl" event={"ID":"8e4641e7-2db7-4e32-bcc3-f0998ca0f38e","Type":"ContainerDied","Data":"efe8fe2d63c294a0c19ef1b48965655f7169ef8d0b75f19dc7ac5339be1eb53d"}
Apr 23 17:56:34.673801 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:34.673773 2570 scope.go:117] "RemoveContainer" containerID="d84ef3aa6e31df2ac9a04163496a43fd8cc0315e77b6582b8ff7e97917abd859"
Apr 23 17:56:35.257923 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:35.257893 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-dzhrv_3f006f82-b551-4a19-b684-091814d45d54/console-operator/2.log"
Apr 23 17:56:35.258382 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:35.257986 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-9d4b6777b-dzhrv" event={"ID":"3f006f82-b551-4a19-b684-091814d45d54","Type":"ContainerStarted","Data":"432efd40008a9cf919704e45f703ae6f291c867a0feae46039569670036c09f4"}
Apr 23 17:56:35.258382 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:35.258303 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-9d4b6777b-dzhrv"
Apr 23 17:56:35.260508 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:35.260485 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-725tl" event={"ID":"8e4641e7-2db7-4e32-bcc3-f0998ca0f38e","Type":"ContainerStarted","Data":"47971eeebbfeee9df272ac0791982741c61fb1e4f724b107692eea8a478ebb46"}
Apr 23 17:56:35.260508 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:35.260509 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-725tl" event={"ID":"8e4641e7-2db7-4e32-bcc3-f0998ca0f38e","Type":"ContainerStarted","Data":"5c2f27dab78be061d200b45268143cb2bf3c03ae29aef5ee06a8b4005dfaf638"}
Apr 23 17:56:35.263308 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:35.263287 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-9d4b6777b-dzhrv"
Apr 23 17:56:35.282970 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:35.282915 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-9d4b6777b-dzhrv" podStartSLOduration=258.169899106 podStartE2EDuration="4m24.282899216s" podCreationTimestamp="2026-04-23 17:52:11 +0000 UTC" firstStartedPulling="2026-04-23 17:55:49.652089823 +0000 UTC m=+205.592103956" lastFinishedPulling="2026-04-23 17:55:55.765089931 +0000 UTC m=+211.705104066" observedRunningTime="2026-04-23 17:56:35.282654343 +0000 UTC m=+251.222668484" watchObservedRunningTime="2026-04-23 17:56:35.282899216 +0000 UTC m=+251.222913359"
Apr 23 17:56:35.307256 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:35.307219 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-725tl" podStartSLOduration=2.525658601 podStartE2EDuration="3.307206951s" podCreationTimestamp="2026-04-23 17:56:32 +0000 UTC" firstStartedPulling="2026-04-23 17:56:32.390136069 +0000 UTC m=+248.330150199" lastFinishedPulling="2026-04-23 17:56:33.171684417 +0000 UTC m=+249.111698549" observedRunningTime="2026-04-23 17:56:35.305803846 +0000 UTC m=+251.245817999" watchObservedRunningTime="2026-04-23 17:56:35.307206951 +0000 UTC m=+251.247221161"
Apr 23 17:56:36.234340 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:36.234311 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-mfdbd"
Apr 23 17:56:44.228703 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:44.228575 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-69557c4bff-g4kkh"
Apr 23 17:56:45.974549 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:45.974514 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-original-pull-secret\") pod \"global-pull-secret-syncer-mlz2c\" (UID: \"90608d2c-b6cd-4fca-968e-9fc7cbf593f8\") " pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:56:45.980804 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:45.980781 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"original-pull-secret\""
Apr 23 17:56:45.987128 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:45.987108 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/90608d2c-b6cd-4fca-968e-9fc7cbf593f8-original-pull-secret\") pod \"global-pull-secret-syncer-mlz2c\" (UID: \"90608d2c-b6cd-4fca-968e-9fc7cbf593f8\") " pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:56:46.182481 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:46.182457 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mlz2c"
Apr 23 17:56:46.297940 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:46.297911 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-mlz2c"]
Apr 23 17:56:46.300756 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:56:46.300726 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90608d2c_b6cd_4fca_968e_9fc7cbf593f8.slice/crio-dfccb82c01e426501266f92b32bea8b999288575e8db4c3c52648cf8980dc067 WatchSource:0}: Error finding container dfccb82c01e426501266f92b32bea8b999288575e8db4c3c52648cf8980dc067: Status 404 returned error can't find the container with id dfccb82c01e426501266f92b32bea8b999288575e8db4c3c52648cf8980dc067
Apr 23 17:56:47.297295 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:47.297253 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-mlz2c" event={"ID":"90608d2c-b6cd-4fca-968e-9fc7cbf593f8","Type":"ContainerStarted","Data":"dfccb82c01e426501266f92b32bea8b999288575e8db4c3c52648cf8980dc067"}
Apr 23 17:56:49.249902 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:49.249835 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" podUID="37d5c9de-eb28-492f-8280-8ed7c5e432be" containerName="registry" containerID="cri-o://a460dd9f0d029a59c5a1c5a50b14992e0f0ca5cce1034fc6613ee4be0d7963e9" gracePeriod=30
Apr 23 17:56:49.863228 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:49.863209 2570 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-image-registry/image-registry-69557c4bff-g4kkh"
Apr 23 17:56:49.907173 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:49.907144 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-registry-tls\") pod \"37d5c9de-eb28-492f-8280-8ed7c5e432be\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") "
Apr 23 17:56:49.907308 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:49.907182 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tt5q\" (UniqueName: \"kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-kube-api-access-7tt5q\") pod \"37d5c9de-eb28-492f-8280-8ed7c5e432be\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") "
Apr 23 17:56:49.907308 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:49.907218 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/37d5c9de-eb28-492f-8280-8ed7c5e432be-image-registry-private-configuration\") pod \"37d5c9de-eb28-492f-8280-8ed7c5e432be\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") "
Apr 23 17:56:49.907308 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:49.907240 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/37d5c9de-eb28-492f-8280-8ed7c5e432be-installation-pull-secrets\") pod \"37d5c9de-eb28-492f-8280-8ed7c5e432be\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") "
Apr 23 17:56:49.907308 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:49.907270 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37d5c9de-eb28-492f-8280-8ed7c5e432be-trusted-ca\") pod \"37d5c9de-eb28-492f-8280-8ed7c5e432be\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") "
Apr 23 17:56:49.907308 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:49.907296 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/37d5c9de-eb28-492f-8280-8ed7c5e432be-ca-trust-extracted\") pod \"37d5c9de-eb28-492f-8280-8ed7c5e432be\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") "
Apr 23 17:56:49.908070 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:49.907767 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/37d5c9de-eb28-492f-8280-8ed7c5e432be-registry-certificates\") pod \"37d5c9de-eb28-492f-8280-8ed7c5e432be\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") "
Apr 23 17:56:49.908070 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:49.907818 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-bound-sa-token\") pod \"37d5c9de-eb28-492f-8280-8ed7c5e432be\" (UID: \"37d5c9de-eb28-492f-8280-8ed7c5e432be\") "
Apr 23 17:56:49.908070 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:49.907816 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37d5c9de-eb28-492f-8280-8ed7c5e432be-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "37d5c9de-eb28-492f-8280-8ed7c5e432be" (UID: "37d5c9de-eb28-492f-8280-8ed7c5e432be"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 17:56:49.908288 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:49.908108 2570 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37d5c9de-eb28-492f-8280-8ed7c5e432be-trusted-ca\") on node \"ip-10-0-141-209.ec2.internal\" DevicePath \"\""
Apr 23 17:56:49.908288 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:49.908112 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37d5c9de-eb28-492f-8280-8ed7c5e432be-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "37d5c9de-eb28-492f-8280-8ed7c5e432be" (UID: "37d5c9de-eb28-492f-8280-8ed7c5e432be"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 17:56:49.909903 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:49.909861 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "37d5c9de-eb28-492f-8280-8ed7c5e432be" (UID: "37d5c9de-eb28-492f-8280-8ed7c5e432be"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 23 17:56:49.910267 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:49.910242 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "37d5c9de-eb28-492f-8280-8ed7c5e432be" (UID: "37d5c9de-eb28-492f-8280-8ed7c5e432be"). InnerVolumeSpecName "bound-sa-token".
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 17:56:49.910553 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:49.910527 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37d5c9de-eb28-492f-8280-8ed7c5e432be-image-registry-private-configuration" (OuterVolumeSpecName: "image-registry-private-configuration") pod "37d5c9de-eb28-492f-8280-8ed7c5e432be" (UID: "37d5c9de-eb28-492f-8280-8ed7c5e432be"). InnerVolumeSpecName "image-registry-private-configuration". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 17:56:49.910627 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:49.910605 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37d5c9de-eb28-492f-8280-8ed7c5e432be-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "37d5c9de-eb28-492f-8280-8ed7c5e432be" (UID: "37d5c9de-eb28-492f-8280-8ed7c5e432be"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 17:56:49.911679 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:49.911659 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-kube-api-access-7tt5q" (OuterVolumeSpecName: "kube-api-access-7tt5q") pod "37d5c9de-eb28-492f-8280-8ed7c5e432be" (UID: "37d5c9de-eb28-492f-8280-8ed7c5e432be"). InnerVolumeSpecName "kube-api-access-7tt5q". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 17:56:49.916585 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:49.916561 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37d5c9de-eb28-492f-8280-8ed7c5e432be-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "37d5c9de-eb28-492f-8280-8ed7c5e432be" (UID: "37d5c9de-eb28-492f-8280-8ed7c5e432be"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 23 17:56:50.009260 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:50.009180 2570 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-registry-tls\") on node \"ip-10-0-141-209.ec2.internal\" DevicePath \"\"" Apr 23 17:56:50.009260 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:50.009218 2570 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7tt5q\" (UniqueName: \"kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-kube-api-access-7tt5q\") on node \"ip-10-0-141-209.ec2.internal\" DevicePath \"\"" Apr 23 17:56:50.009260 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:50.009229 2570 reconciler_common.go:299] "Volume detached for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/37d5c9de-eb28-492f-8280-8ed7c5e432be-image-registry-private-configuration\") on node \"ip-10-0-141-209.ec2.internal\" DevicePath \"\"" Apr 23 17:56:50.009260 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:50.009240 2570 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/37d5c9de-eb28-492f-8280-8ed7c5e432be-installation-pull-secrets\") on node \"ip-10-0-141-209.ec2.internal\" DevicePath \"\"" Apr 23 17:56:50.009260 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:50.009248 2570 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/37d5c9de-eb28-492f-8280-8ed7c5e432be-ca-trust-extracted\") on node \"ip-10-0-141-209.ec2.internal\" DevicePath \"\"" Apr 23 17:56:50.009260 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:50.009257 2570 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/37d5c9de-eb28-492f-8280-8ed7c5e432be-registry-certificates\") on node 
\"ip-10-0-141-209.ec2.internal\" DevicePath \"\"" Apr 23 17:56:50.009260 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:50.009265 2570 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/37d5c9de-eb28-492f-8280-8ed7c5e432be-bound-sa-token\") on node \"ip-10-0-141-209.ec2.internal\" DevicePath \"\"" Apr 23 17:56:50.306265 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:50.306193 2570 generic.go:358] "Generic (PLEG): container finished" podID="37d5c9de-eb28-492f-8280-8ed7c5e432be" containerID="a460dd9f0d029a59c5a1c5a50b14992e0f0ca5cce1034fc6613ee4be0d7963e9" exitCode=0 Apr 23 17:56:50.306265 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:50.306256 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" Apr 23 17:56:50.306742 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:50.306278 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" event={"ID":"37d5c9de-eb28-492f-8280-8ed7c5e432be","Type":"ContainerDied","Data":"a460dd9f0d029a59c5a1c5a50b14992e0f0ca5cce1034fc6613ee4be0d7963e9"} Apr 23 17:56:50.306742 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:50.306316 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-69557c4bff-g4kkh" event={"ID":"37d5c9de-eb28-492f-8280-8ed7c5e432be","Type":"ContainerDied","Data":"44f0a5feb01841e3ab9b11f8134d65724c55044948dc458a1b3bce283cd0c11b"} Apr 23 17:56:50.306742 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:50.306332 2570 scope.go:117] "RemoveContainer" containerID="a460dd9f0d029a59c5a1c5a50b14992e0f0ca5cce1034fc6613ee4be0d7963e9" Apr 23 17:56:50.307806 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:50.307783 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-mlz2c" 
event={"ID":"90608d2c-b6cd-4fca-968e-9fc7cbf593f8","Type":"ContainerStarted","Data":"c95cbcea00e9057b55595d374cc5d6a43c4781c04cbba0cd7bf1b886fd0ed611"} Apr 23 17:56:50.315029 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:50.315011 2570 scope.go:117] "RemoveContainer" containerID="a460dd9f0d029a59c5a1c5a50b14992e0f0ca5cce1034fc6613ee4be0d7963e9" Apr 23 17:56:50.315276 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:56:50.315257 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a460dd9f0d029a59c5a1c5a50b14992e0f0ca5cce1034fc6613ee4be0d7963e9\": container with ID starting with a460dd9f0d029a59c5a1c5a50b14992e0f0ca5cce1034fc6613ee4be0d7963e9 not found: ID does not exist" containerID="a460dd9f0d029a59c5a1c5a50b14992e0f0ca5cce1034fc6613ee4be0d7963e9" Apr 23 17:56:50.315331 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:50.315284 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a460dd9f0d029a59c5a1c5a50b14992e0f0ca5cce1034fc6613ee4be0d7963e9"} err="failed to get container status \"a460dd9f0d029a59c5a1c5a50b14992e0f0ca5cce1034fc6613ee4be0d7963e9\": rpc error: code = NotFound desc = could not find container \"a460dd9f0d029a59c5a1c5a50b14992e0f0ca5cce1034fc6613ee4be0d7963e9\": container with ID starting with a460dd9f0d029a59c5a1c5a50b14992e0f0ca5cce1034fc6613ee4be0d7963e9 not found: ID does not exist" Apr 23 17:56:50.325239 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:50.325197 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/global-pull-secret-syncer-mlz2c" podStartSLOduration=251.886877543 podStartE2EDuration="4m15.325185664s" podCreationTimestamp="2026-04-23 17:52:35 +0000 UTC" firstStartedPulling="2026-04-23 17:56:46.302836263 +0000 UTC m=+262.242850386" lastFinishedPulling="2026-04-23 17:56:49.741144384 +0000 UTC m=+265.681158507" observedRunningTime="2026-04-23 17:56:50.324228381 +0000 UTC 
m=+266.264242522" watchObservedRunningTime="2026-04-23 17:56:50.325185664 +0000 UTC m=+266.265199803" Apr 23 17:56:50.340426 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:50.340389 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-69557c4bff-g4kkh"] Apr 23 17:56:50.343039 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:50.343016 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-69557c4bff-g4kkh"] Apr 23 17:56:50.676535 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:56:50.676506 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37d5c9de-eb28-492f-8280-8ed7c5e432be" path="/var/lib/kubelet/pods/37d5c9de-eb28-492f-8280-8ed7c5e432be/volumes" Apr 23 17:57:07.359570 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:07.359531 2570 generic.go:358] "Generic (PLEG): container finished" podID="25d8ee46-11e2-460e-9417-93a2def9b519" containerID="17c11b91eb75e6ce71488922b5f6450795a7d37b4c552ebf73e32b932de85680" exitCode=0 Apr 23 17:57:07.359975 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:07.359606 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-vtfgg" event={"ID":"25d8ee46-11e2-460e-9417-93a2def9b519","Type":"ContainerDied","Data":"17c11b91eb75e6ce71488922b5f6450795a7d37b4c552ebf73e32b932de85680"} Apr 23 17:57:07.359975 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:07.359928 2570 scope.go:117] "RemoveContainer" containerID="17c11b91eb75e6ce71488922b5f6450795a7d37b4c552ebf73e32b932de85680" Apr 23 17:57:08.363620 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:08.363586 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-vtfgg" 
event={"ID":"25d8ee46-11e2-460e-9417-93a2def9b519","Type":"ContainerStarted","Data":"eae475abd3abbd8bdeed2aa7e4cb8936ffdc13fb1b05b7bbeb89fecbb59c8843"} Apr 23 17:57:11.866498 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:11.866454 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c339fa02-2445-42c2-b7ee-5388fb338129-metrics-certs\") pod \"network-metrics-daemon-757d7\" (UID: \"c339fa02-2445-42c2-b7ee-5388fb338129\") " pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:57:11.869012 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:11.868990 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c339fa02-2445-42c2-b7ee-5388fb338129-metrics-certs\") pod \"network-metrics-daemon-757d7\" (UID: \"c339fa02-2445-42c2-b7ee-5388fb338129\") " pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:57:11.993147 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:11.993118 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-s85dw\"" Apr 23 17:57:12.001836 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:12.001811 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-757d7" Apr 23 17:57:12.122589 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:12.122528 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-757d7"] Apr 23 17:57:12.125199 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:57:12.125170 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc339fa02_2445_42c2_b7ee_5388fb338129.slice/crio-b01a283dc222695e8b3904819f86ba4aee8cd849a8c320c88dd4abdcadcc79ce WatchSource:0}: Error finding container b01a283dc222695e8b3904819f86ba4aee8cd849a8c320c88dd4abdcadcc79ce: Status 404 returned error can't find the container with id b01a283dc222695e8b3904819f86ba4aee8cd849a8c320c88dd4abdcadcc79ce Apr 23 17:57:12.375813 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:12.375754 2570 generic.go:358] "Generic (PLEG): container finished" podID="d78b9e26-7f79-4f68-ad87-4e237752c338" containerID="fb860307adc429b12dd3cdfc0eedace3ad8ae76d9f79ed260c7ee7bf35706bed" exitCode=0 Apr 23 17:57:12.375926 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:12.375821 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-9t7hk" event={"ID":"d78b9e26-7f79-4f68-ad87-4e237752c338","Type":"ContainerDied","Data":"fb860307adc429b12dd3cdfc0eedace3ad8ae76d9f79ed260c7ee7bf35706bed"} Apr 23 17:57:12.376132 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:12.376114 2570 scope.go:117] "RemoveContainer" containerID="fb860307adc429b12dd3cdfc0eedace3ad8ae76d9f79ed260c7ee7bf35706bed" Apr 23 17:57:12.376962 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:12.376940 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-757d7" event={"ID":"c339fa02-2445-42c2-b7ee-5388fb338129","Type":"ContainerStarted","Data":"b01a283dc222695e8b3904819f86ba4aee8cd849a8c320c88dd4abdcadcc79ce"} 
Apr 23 17:57:13.381086 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:13.381016 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-d6fc45fc5-9t7hk" event={"ID":"d78b9e26-7f79-4f68-ad87-4e237752c338","Type":"ContainerStarted","Data":"6899ab203c79a1dde8bb07b000252894db4d608f2870b9df4f15c408135483da"} Apr 23 17:57:13.382667 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:13.382644 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-757d7" event={"ID":"c339fa02-2445-42c2-b7ee-5388fb338129","Type":"ContainerStarted","Data":"2a877b409e3965f4bb62a203a7ea8d819646d3b8972065856fa283698589438a"} Apr 23 17:57:13.382766 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:13.382671 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-757d7" event={"ID":"c339fa02-2445-42c2-b7ee-5388fb338129","Type":"ContainerStarted","Data":"8b1ed65a991983f8e2223a1941fcbb46f337b38bb902d0c0dec91b61b5711e71"} Apr 23 17:57:13.409903 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:13.409856 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-757d7" podStartSLOduration=129.576984838 podStartE2EDuration="2m10.409842155s" podCreationTimestamp="2026-04-23 17:55:03 +0000 UTC" firstStartedPulling="2026-04-23 17:57:12.127044347 +0000 UTC m=+288.067058466" lastFinishedPulling="2026-04-23 17:57:12.959901665 +0000 UTC m=+288.899915783" observedRunningTime="2026-04-23 17:57:13.407757207 +0000 UTC m=+289.347771348" watchObservedRunningTime="2026-04-23 17:57:13.409842155 +0000 UTC m=+289.349856299" Apr 23 17:57:17.398848 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:17.398811 2570 generic.go:358] "Generic (PLEG): container finished" podID="1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba" containerID="cf19b10d7cd3b5a2e34ae0e79360c258afea6e89cf7ebb2d8f348a794397dfb9" exitCode=0 Apr 23 17:57:17.399298 
ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:17.398886 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-585dfdc468-hd2z2" event={"ID":"1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba","Type":"ContainerDied","Data":"cf19b10d7cd3b5a2e34ae0e79360c258afea6e89cf7ebb2d8f348a794397dfb9"} Apr 23 17:57:17.399368 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:17.399306 2570 scope.go:117] "RemoveContainer" containerID="cf19b10d7cd3b5a2e34ae0e79360c258afea6e89cf7ebb2d8f348a794397dfb9" Apr 23 17:57:18.404548 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:18.404517 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-585dfdc468-hd2z2" event={"ID":"1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba","Type":"ContainerStarted","Data":"6e2e5aad2dc99c82c6067f8511206d4208585cf5ef4828f34232bdeb2140162d"} Apr 23 17:57:24.550826 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:24.550791 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-dzhrv_3f006f82-b551-4a19-b684-091814d45d54/console-operator/2.log" Apr 23 17:57:24.551290 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:24.550791 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-dzhrv_3f006f82-b551-4a19-b684-091814d45d54/console-operator/2.log" Apr 23 17:57:24.560523 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:24.560459 2570 kubelet.go:1628] "Image garbage collection succeeded" Apr 23 17:57:29.452226 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:29.452191 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-548477fb4c-7nvrg"] Apr 23 17:57:29.454601 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:29.452484 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="37d5c9de-eb28-492f-8280-8ed7c5e432be" containerName="registry" Apr 
23 17:57:29.454601 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:29.452496 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="37d5c9de-eb28-492f-8280-8ed7c5e432be" containerName="registry" Apr 23 17:57:29.454601 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:29.452545 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="37d5c9de-eb28-492f-8280-8ed7c5e432be" containerName="registry" Apr 23 17:57:29.455510 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:29.455493 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-548477fb4c-7nvrg" Apr 23 17:57:29.457475 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:29.457444 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"work-manager-hub-kubeconfig\"" Apr 23 17:57:29.457630 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:29.457444 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"open-cluster-management-image-pull-credentials\"" Apr 23 17:57:29.457876 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:29.457858 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"open-cluster-management-agent-addon\"/\"kube-root-ca.crt\"" Apr 23 17:57:29.457963 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:29.457892 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"open-cluster-management-agent-addon\"/\"openshift-service-ca.crt\"" Apr 23 17:57:29.461713 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:29.461692 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-548477fb4c-7nvrg"] Apr 23 17:57:29.595721 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:29.595692 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-zbc5r\" (UniqueName: \"kubernetes.io/projected/4413448b-9e74-46fb-a8f3-e93742dd1b08-kube-api-access-zbc5r\") pod \"klusterlet-addon-workmgr-548477fb4c-7nvrg\" (UID: \"4413448b-9e74-46fb-a8f3-e93742dd1b08\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-548477fb4c-7nvrg" Apr 23 17:57:29.595721 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:29.595723 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/4413448b-9e74-46fb-a8f3-e93742dd1b08-klusterlet-config\") pod \"klusterlet-addon-workmgr-548477fb4c-7nvrg\" (UID: \"4413448b-9e74-46fb-a8f3-e93742dd1b08\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-548477fb4c-7nvrg" Apr 23 17:57:29.595940 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:29.595800 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4413448b-9e74-46fb-a8f3-e93742dd1b08-tmp\") pod \"klusterlet-addon-workmgr-548477fb4c-7nvrg\" (UID: \"4413448b-9e74-46fb-a8f3-e93742dd1b08\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-548477fb4c-7nvrg" Apr 23 17:57:29.696649 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:29.696625 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4413448b-9e74-46fb-a8f3-e93742dd1b08-tmp\") pod \"klusterlet-addon-workmgr-548477fb4c-7nvrg\" (UID: \"4413448b-9e74-46fb-a8f3-e93742dd1b08\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-548477fb4c-7nvrg" Apr 23 17:57:29.696752 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:29.696662 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zbc5r\" (UniqueName: \"kubernetes.io/projected/4413448b-9e74-46fb-a8f3-e93742dd1b08-kube-api-access-zbc5r\") pod 
\"klusterlet-addon-workmgr-548477fb4c-7nvrg\" (UID: \"4413448b-9e74-46fb-a8f3-e93742dd1b08\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-548477fb4c-7nvrg" Apr 23 17:57:29.696752 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:29.696689 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/4413448b-9e74-46fb-a8f3-e93742dd1b08-klusterlet-config\") pod \"klusterlet-addon-workmgr-548477fb4c-7nvrg\" (UID: \"4413448b-9e74-46fb-a8f3-e93742dd1b08\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-548477fb4c-7nvrg" Apr 23 17:57:29.697089 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:29.697067 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4413448b-9e74-46fb-a8f3-e93742dd1b08-tmp\") pod \"klusterlet-addon-workmgr-548477fb4c-7nvrg\" (UID: \"4413448b-9e74-46fb-a8f3-e93742dd1b08\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-548477fb4c-7nvrg" Apr 23 17:57:29.699398 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:29.699354 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/4413448b-9e74-46fb-a8f3-e93742dd1b08-klusterlet-config\") pod \"klusterlet-addon-workmgr-548477fb4c-7nvrg\" (UID: \"4413448b-9e74-46fb-a8f3-e93742dd1b08\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-548477fb4c-7nvrg" Apr 23 17:57:29.707524 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:29.705958 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbc5r\" (UniqueName: \"kubernetes.io/projected/4413448b-9e74-46fb-a8f3-e93742dd1b08-kube-api-access-zbc5r\") pod \"klusterlet-addon-workmgr-548477fb4c-7nvrg\" (UID: \"4413448b-9e74-46fb-a8f3-e93742dd1b08\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-548477fb4c-7nvrg" Apr 
23 17:57:29.765870 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:29.765845 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-548477fb4c-7nvrg" Apr 23 17:57:29.889942 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:29.889838 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-548477fb4c-7nvrg"] Apr 23 17:57:29.892562 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:57:29.892534 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4413448b_9e74_46fb_a8f3_e93742dd1b08.slice/crio-4222f22b7ff417b2c3e6fe5eb275db70bce602bed644fb4294919d43eb7e6b9a WatchSource:0}: Error finding container 4222f22b7ff417b2c3e6fe5eb275db70bce602bed644fb4294919d43eb7e6b9a: Status 404 returned error can't find the container with id 4222f22b7ff417b2c3e6fe5eb275db70bce602bed644fb4294919d43eb7e6b9a Apr 23 17:57:29.894628 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:29.894610 2570 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 23 17:57:30.440689 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:30.440646 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-548477fb4c-7nvrg" event={"ID":"4413448b-9e74-46fb-a8f3-e93742dd1b08","Type":"ContainerStarted","Data":"4222f22b7ff417b2c3e6fe5eb275db70bce602bed644fb4294919d43eb7e6b9a"} Apr 23 17:57:33.449603 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:33.449516 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-548477fb4c-7nvrg" event={"ID":"4413448b-9e74-46fb-a8f3-e93742dd1b08","Type":"ContainerStarted","Data":"e021af537aaf314b07ce1e34d37740e10b36a60443e783ca7db3d84eef5b3dbf"} Apr 23 17:57:33.449968 ip-10-0-141-209 
kubenswrapper[2570]: I0423 17:57:33.449697 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-548477fb4c-7nvrg" Apr 23 17:57:33.451316 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:33.451296 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-548477fb4c-7nvrg" Apr 23 17:57:33.465147 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:57:33.465100 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-548477fb4c-7nvrg" podStartSLOduration=1.168881986 podStartE2EDuration="4.465087495s" podCreationTimestamp="2026-04-23 17:57:29 +0000 UTC" firstStartedPulling="2026-04-23 17:57:29.894817041 +0000 UTC m=+305.834831168" lastFinishedPulling="2026-04-23 17:57:33.191022554 +0000 UTC m=+309.131036677" observedRunningTime="2026-04-23 17:57:33.463588148 +0000 UTC m=+309.403602289" watchObservedRunningTime="2026-04-23 17:57:33.465087495 +0000 UTC m=+309.405101634" Apr 23 17:58:47.940774 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:58:47.940735 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cwhglq"] Apr 23 17:58:47.943344 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:58:47.943325 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cwhglq" Apr 23 17:58:47.945567 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:58:47.945537 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Apr 23 17:58:47.945567 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:58:47.945560 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-t68sr\"" Apr 23 17:58:47.945748 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:58:47.945691 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Apr 23 17:58:47.951752 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:58:47.951723 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cwhglq"] Apr 23 17:58:48.024048 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:58:48.024023 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/047e67fc-25e5-4419-87f2-36ff56e8a4d9-bundle\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cwhglq\" (UID: \"047e67fc-25e5-4419-87f2-36ff56e8a4d9\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cwhglq" Apr 23 17:58:48.024150 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:58:48.024052 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qls9f\" (UniqueName: \"kubernetes.io/projected/047e67fc-25e5-4419-87f2-36ff56e8a4d9-kube-api-access-qls9f\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cwhglq\" (UID: \"047e67fc-25e5-4419-87f2-36ff56e8a4d9\") " 
pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cwhglq" Apr 23 17:58:48.024150 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:58:48.024097 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/047e67fc-25e5-4419-87f2-36ff56e8a4d9-util\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cwhglq\" (UID: \"047e67fc-25e5-4419-87f2-36ff56e8a4d9\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cwhglq" Apr 23 17:58:48.125097 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:58:48.125070 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/047e67fc-25e5-4419-87f2-36ff56e8a4d9-util\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cwhglq\" (UID: \"047e67fc-25e5-4419-87f2-36ff56e8a4d9\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cwhglq" Apr 23 17:58:48.125196 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:58:48.125123 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/047e67fc-25e5-4419-87f2-36ff56e8a4d9-bundle\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cwhglq\" (UID: \"047e67fc-25e5-4419-87f2-36ff56e8a4d9\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cwhglq" Apr 23 17:58:48.125253 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:58:48.125233 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qls9f\" (UniqueName: \"kubernetes.io/projected/047e67fc-25e5-4419-87f2-36ff56e8a4d9-kube-api-access-qls9f\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cwhglq\" (UID: \"047e67fc-25e5-4419-87f2-36ff56e8a4d9\") " 
pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cwhglq" Apr 23 17:58:48.125506 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:58:48.125486 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/047e67fc-25e5-4419-87f2-36ff56e8a4d9-util\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cwhglq\" (UID: \"047e67fc-25e5-4419-87f2-36ff56e8a4d9\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cwhglq" Apr 23 17:58:48.125552 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:58:48.125501 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/047e67fc-25e5-4419-87f2-36ff56e8a4d9-bundle\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cwhglq\" (UID: \"047e67fc-25e5-4419-87f2-36ff56e8a4d9\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cwhglq" Apr 23 17:58:48.133981 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:58:48.133956 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qls9f\" (UniqueName: \"kubernetes.io/projected/047e67fc-25e5-4419-87f2-36ff56e8a4d9-kube-api-access-qls9f\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cwhglq\" (UID: \"047e67fc-25e5-4419-87f2-36ff56e8a4d9\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cwhglq" Apr 23 17:58:48.254400 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:58:48.254344 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cwhglq" Apr 23 17:58:48.384579 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:58:48.384549 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cwhglq"] Apr 23 17:58:48.390387 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:58:48.390360 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod047e67fc_25e5_4419_87f2_36ff56e8a4d9.slice/crio-af53e60638b98a2b7c7cc7655b8ddae3f6df8b8e2e2c3411cb498223a2191609 WatchSource:0}: Error finding container af53e60638b98a2b7c7cc7655b8ddae3f6df8b8e2e2c3411cb498223a2191609: Status 404 returned error can't find the container with id af53e60638b98a2b7c7cc7655b8ddae3f6df8b8e2e2c3411cb498223a2191609 Apr 23 17:58:48.667322 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:58:48.667215 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cwhglq" event={"ID":"047e67fc-25e5-4419-87f2-36ff56e8a4d9","Type":"ContainerStarted","Data":"af53e60638b98a2b7c7cc7655b8ddae3f6df8b8e2e2c3411cb498223a2191609"} Apr 23 17:58:53.687957 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:58:53.687929 2570 generic.go:358] "Generic (PLEG): container finished" podID="047e67fc-25e5-4419-87f2-36ff56e8a4d9" containerID="1503bb5c706435fac04f2ec59098422a7f05233a32585b898421ef0d7855127f" exitCode=0 Apr 23 17:58:53.688290 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:58:53.687966 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cwhglq" event={"ID":"047e67fc-25e5-4419-87f2-36ff56e8a4d9","Type":"ContainerDied","Data":"1503bb5c706435fac04f2ec59098422a7f05233a32585b898421ef0d7855127f"} Apr 23 17:58:56.697551 ip-10-0-141-209 kubenswrapper[2570]: 
I0423 17:58:56.697513 2570 generic.go:358] "Generic (PLEG): container finished" podID="047e67fc-25e5-4419-87f2-36ff56e8a4d9" containerID="0965a042141b98ba8adb5520e458f0e2931ac05075eb12177a92dc65ab30188f" exitCode=0 Apr 23 17:58:56.697909 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:58:56.697570 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cwhglq" event={"ID":"047e67fc-25e5-4419-87f2-36ff56e8a4d9","Type":"ContainerDied","Data":"0965a042141b98ba8adb5520e458f0e2931ac05075eb12177a92dc65ab30188f"} Apr 23 17:59:02.723550 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:02.723511 2570 generic.go:358] "Generic (PLEG): container finished" podID="047e67fc-25e5-4419-87f2-36ff56e8a4d9" containerID="568ef9e3ad2e2508608f2bce536904a86e1a6eb7bf921ac988fc3b4eb9efde50" exitCode=0 Apr 23 17:59:02.723887 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:02.723605 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cwhglq" event={"ID":"047e67fc-25e5-4419-87f2-36ff56e8a4d9","Type":"ContainerDied","Data":"568ef9e3ad2e2508608f2bce536904a86e1a6eb7bf921ac988fc3b4eb9efde50"} Apr 23 17:59:03.845108 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:03.845087 2570 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cwhglq" Apr 23 17:59:03.951605 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:03.951577 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qls9f\" (UniqueName: \"kubernetes.io/projected/047e67fc-25e5-4419-87f2-36ff56e8a4d9-kube-api-access-qls9f\") pod \"047e67fc-25e5-4419-87f2-36ff56e8a4d9\" (UID: \"047e67fc-25e5-4419-87f2-36ff56e8a4d9\") " Apr 23 17:59:03.951734 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:03.951615 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/047e67fc-25e5-4419-87f2-36ff56e8a4d9-bundle\") pod \"047e67fc-25e5-4419-87f2-36ff56e8a4d9\" (UID: \"047e67fc-25e5-4419-87f2-36ff56e8a4d9\") " Apr 23 17:59:03.951734 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:03.951682 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/047e67fc-25e5-4419-87f2-36ff56e8a4d9-util\") pod \"047e67fc-25e5-4419-87f2-36ff56e8a4d9\" (UID: \"047e67fc-25e5-4419-87f2-36ff56e8a4d9\") " Apr 23 17:59:03.952172 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:03.952141 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/047e67fc-25e5-4419-87f2-36ff56e8a4d9-bundle" (OuterVolumeSpecName: "bundle") pod "047e67fc-25e5-4419-87f2-36ff56e8a4d9" (UID: "047e67fc-25e5-4419-87f2-36ff56e8a4d9"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 23 17:59:03.953665 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:03.953644 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/047e67fc-25e5-4419-87f2-36ff56e8a4d9-kube-api-access-qls9f" (OuterVolumeSpecName: "kube-api-access-qls9f") pod "047e67fc-25e5-4419-87f2-36ff56e8a4d9" (UID: "047e67fc-25e5-4419-87f2-36ff56e8a4d9"). InnerVolumeSpecName "kube-api-access-qls9f". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 17:59:03.956871 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:03.956849 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/047e67fc-25e5-4419-87f2-36ff56e8a4d9-util" (OuterVolumeSpecName: "util") pod "047e67fc-25e5-4419-87f2-36ff56e8a4d9" (UID: "047e67fc-25e5-4419-87f2-36ff56e8a4d9"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 23 17:59:04.052872 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:04.052821 2570 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qls9f\" (UniqueName: \"kubernetes.io/projected/047e67fc-25e5-4419-87f2-36ff56e8a4d9-kube-api-access-qls9f\") on node \"ip-10-0-141-209.ec2.internal\" DevicePath \"\"" Apr 23 17:59:04.052872 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:04.052844 2570 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/047e67fc-25e5-4419-87f2-36ff56e8a4d9-bundle\") on node \"ip-10-0-141-209.ec2.internal\" DevicePath \"\"" Apr 23 17:59:04.052872 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:04.052854 2570 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/047e67fc-25e5-4419-87f2-36ff56e8a4d9-util\") on node \"ip-10-0-141-209.ec2.internal\" DevicePath \"\"" Apr 23 17:59:04.729960 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:04.729931 2570 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cwhglq" event={"ID":"047e67fc-25e5-4419-87f2-36ff56e8a4d9","Type":"ContainerDied","Data":"af53e60638b98a2b7c7cc7655b8ddae3f6df8b8e2e2c3411cb498223a2191609"} Apr 23 17:59:04.729960 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:04.729961 2570 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af53e60638b98a2b7c7cc7655b8ddae3f6df8b8e2e2c3411cb498223a2191609" Apr 23 17:59:04.730166 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:04.730012 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cwhglq" Apr 23 17:59:10.749214 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:10.749184 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-gh2bq"] Apr 23 17:59:10.749603 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:10.749460 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="047e67fc-25e5-4419-87f2-36ff56e8a4d9" containerName="extract" Apr 23 17:59:10.749603 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:10.749472 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="047e67fc-25e5-4419-87f2-36ff56e8a4d9" containerName="extract" Apr 23 17:59:10.749603 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:10.749487 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="047e67fc-25e5-4419-87f2-36ff56e8a4d9" containerName="util" Apr 23 17:59:10.749603 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:10.749493 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="047e67fc-25e5-4419-87f2-36ff56e8a4d9" containerName="util" Apr 23 17:59:10.749603 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:10.749504 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="047e67fc-25e5-4419-87f2-36ff56e8a4d9" containerName="pull" Apr 23 17:59:10.749603 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:10.749510 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="047e67fc-25e5-4419-87f2-36ff56e8a4d9" containerName="pull" Apr 23 17:59:10.749603 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:10.749565 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="047e67fc-25e5-4419-87f2-36ff56e8a4d9" containerName="extract" Apr 23 17:59:10.781867 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:10.781840 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-gh2bq"] Apr 23 17:59:10.781986 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:10.781881 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-gh2bq" Apr 23 17:59:10.786977 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:10.786940 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"kedaorg-certs\"" Apr 23 17:59:10.786977 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:10.786950 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"custom-metrics-autoscaler-operator-dockercfg-25vm9\"" Apr 23 17:59:10.786977 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:10.786975 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"kube-root-ca.crt\"" Apr 23 17:59:10.787398 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:10.787376 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"openshift-service-ca.crt\"" Apr 23 17:59:10.901604 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:10.901583 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdm8d\" (UniqueName: 
\"kubernetes.io/projected/ce9eea56-0292-4499-91cf-f1f46f429db7-kube-api-access-cdm8d\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-gh2bq\" (UID: \"ce9eea56-0292-4499-91cf-f1f46f429db7\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-gh2bq" Apr 23 17:59:10.901723 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:10.901618 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/secret/ce9eea56-0292-4499-91cf-f1f46f429db7-certificates\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-gh2bq\" (UID: \"ce9eea56-0292-4499-91cf-f1f46f429db7\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-gh2bq" Apr 23 17:59:11.002454 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:11.002394 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/secret/ce9eea56-0292-4499-91cf-f1f46f429db7-certificates\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-gh2bq\" (UID: \"ce9eea56-0292-4499-91cf-f1f46f429db7\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-gh2bq" Apr 23 17:59:11.002532 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:11.002489 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cdm8d\" (UniqueName: \"kubernetes.io/projected/ce9eea56-0292-4499-91cf-f1f46f429db7-kube-api-access-cdm8d\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-gh2bq\" (UID: \"ce9eea56-0292-4499-91cf-f1f46f429db7\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-gh2bq" Apr 23 17:59:11.004652 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:11.004634 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/secret/ce9eea56-0292-4499-91cf-f1f46f429db7-certificates\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-gh2bq\" (UID: 
\"ce9eea56-0292-4499-91cf-f1f46f429db7\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-gh2bq" Apr 23 17:59:11.013075 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:11.013052 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdm8d\" (UniqueName: \"kubernetes.io/projected/ce9eea56-0292-4499-91cf-f1f46f429db7-kube-api-access-cdm8d\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-gh2bq\" (UID: \"ce9eea56-0292-4499-91cf-f1f46f429db7\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-gh2bq" Apr 23 17:59:11.093579 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:11.093558 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-gh2bq" Apr 23 17:59:11.224045 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:11.223902 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-gh2bq"] Apr 23 17:59:11.227065 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:59:11.227030 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce9eea56_0292_4499_91cf_f1f46f429db7.slice/crio-0892098113cb3762052752a3911c718fc013466b4ef7e37069e3405693e65c3e WatchSource:0}: Error finding container 0892098113cb3762052752a3911c718fc013466b4ef7e37069e3405693e65c3e: Status 404 returned error can't find the container with id 0892098113cb3762052752a3911c718fc013466b4ef7e37069e3405693e65c3e Apr 23 17:59:11.752090 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:11.752052 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-gh2bq" event={"ID":"ce9eea56-0292-4499-91cf-f1f46f429db7","Type":"ContainerStarted","Data":"0892098113cb3762052752a3911c718fc013466b4ef7e37069e3405693e65c3e"} Apr 23 17:59:15.226335 ip-10-0-141-209 kubenswrapper[2570]: 
I0423 17:59:15.226294 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/keda-operator-ffbb595cb-plpzc"] Apr 23 17:59:15.257112 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:15.257084 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-operator-ffbb595cb-plpzc"] Apr 23 17:59:15.257241 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:15.257207 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-operator-ffbb595cb-plpzc" Apr 23 17:59:15.260388 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:15.260369 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"keda-ocp-cabundle\"" Apr 23 17:59:15.261229 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:15.261207 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-operator-certs\"" Apr 23 17:59:15.261981 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:15.261960 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-operator-dockercfg-vc4gb\"" Apr 23 17:59:15.445032 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:15.439666 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cabundle0\" (UniqueName: \"kubernetes.io/configmap/613604cb-9426-48da-9a39-2ddfdb8db5f8-cabundle0\") pod \"keda-operator-ffbb595cb-plpzc\" (UID: \"613604cb-9426-48da-9a39-2ddfdb8db5f8\") " pod="openshift-keda/keda-operator-ffbb595cb-plpzc" Apr 23 17:59:15.445032 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:15.439733 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s954f\" (UniqueName: \"kubernetes.io/projected/613604cb-9426-48da-9a39-2ddfdb8db5f8-kube-api-access-s954f\") pod \"keda-operator-ffbb595cb-plpzc\" (UID: \"613604cb-9426-48da-9a39-2ddfdb8db5f8\") " 
pod="openshift-keda/keda-operator-ffbb595cb-plpzc" Apr 23 17:59:15.445032 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:15.439797 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/613604cb-9426-48da-9a39-2ddfdb8db5f8-certificates\") pod \"keda-operator-ffbb595cb-plpzc\" (UID: \"613604cb-9426-48da-9a39-2ddfdb8db5f8\") " pod="openshift-keda/keda-operator-ffbb595cb-plpzc" Apr 23 17:59:15.540980 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:15.540900 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cabundle0\" (UniqueName: \"kubernetes.io/configmap/613604cb-9426-48da-9a39-2ddfdb8db5f8-cabundle0\") pod \"keda-operator-ffbb595cb-plpzc\" (UID: \"613604cb-9426-48da-9a39-2ddfdb8db5f8\") " pod="openshift-keda/keda-operator-ffbb595cb-plpzc" Apr 23 17:59:15.540980 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:15.540946 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s954f\" (UniqueName: \"kubernetes.io/projected/613604cb-9426-48da-9a39-2ddfdb8db5f8-kube-api-access-s954f\") pod \"keda-operator-ffbb595cb-plpzc\" (UID: \"613604cb-9426-48da-9a39-2ddfdb8db5f8\") " pod="openshift-keda/keda-operator-ffbb595cb-plpzc" Apr 23 17:59:15.541146 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:15.540999 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/613604cb-9426-48da-9a39-2ddfdb8db5f8-certificates\") pod \"keda-operator-ffbb595cb-plpzc\" (UID: \"613604cb-9426-48da-9a39-2ddfdb8db5f8\") " pod="openshift-keda/keda-operator-ffbb595cb-plpzc" Apr 23 17:59:15.541195 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:59:15.541151 2570 secret.go:281] references non-existent secret key: ca.crt Apr 23 17:59:15.541195 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:59:15.541168 2570 projected.go:277] Couldn't get secret 
payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt Apr 23 17:59:15.541195 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:59:15.541180 2570 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-plpzc: references non-existent secret key: ca.crt Apr 23 17:59:15.541310 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:59:15.541244 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/613604cb-9426-48da-9a39-2ddfdb8db5f8-certificates podName:613604cb-9426-48da-9a39-2ddfdb8db5f8 nodeName:}" failed. No retries permitted until 2026-04-23 17:59:16.041223955 +0000 UTC m=+411.981238077 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/613604cb-9426-48da-9a39-2ddfdb8db5f8-certificates") pod "keda-operator-ffbb595cb-plpzc" (UID: "613604cb-9426-48da-9a39-2ddfdb8db5f8") : references non-existent secret key: ca.crt Apr 23 17:59:15.541843 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:15.541822 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cabundle0\" (UniqueName: \"kubernetes.io/configmap/613604cb-9426-48da-9a39-2ddfdb8db5f8-cabundle0\") pod \"keda-operator-ffbb595cb-plpzc\" (UID: \"613604cb-9426-48da-9a39-2ddfdb8db5f8\") " pod="openshift-keda/keda-operator-ffbb595cb-plpzc" Apr 23 17:59:15.549306 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:15.549287 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s954f\" (UniqueName: \"kubernetes.io/projected/613604cb-9426-48da-9a39-2ddfdb8db5f8-kube-api-access-s954f\") pod \"keda-operator-ffbb595cb-plpzc\" (UID: \"613604cb-9426-48da-9a39-2ddfdb8db5f8\") " pod="openshift-keda/keda-operator-ffbb595cb-plpzc" Apr 23 17:59:15.574521 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:15.574491 2570 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-ctd75"] Apr 23 17:59:15.599240 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:15.599213 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-ctd75"] Apr 23 17:59:15.599323 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:15.599308 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ctd75" Apr 23 17:59:15.601457 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:15.601406 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-metrics-apiserver-certs\"" Apr 23 17:59:15.743341 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:15.743311 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"temp-vol\" (UniqueName: \"kubernetes.io/empty-dir/caf8dbed-b151-48ad-8caf-fd144c541423-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-ctd75\" (UID: \"caf8dbed-b151-48ad-8caf-fd144c541423\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ctd75" Apr 23 17:59:15.743497 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:15.743378 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/caf8dbed-b151-48ad-8caf-fd144c541423-certificates\") pod \"keda-metrics-apiserver-7c9f485588-ctd75\" (UID: \"caf8dbed-b151-48ad-8caf-fd144c541423\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ctd75" Apr 23 17:59:15.743497 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:15.743430 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bppb2\" (UniqueName: \"kubernetes.io/projected/caf8dbed-b151-48ad-8caf-fd144c541423-kube-api-access-bppb2\") pod \"keda-metrics-apiserver-7c9f485588-ctd75\" (UID: \"caf8dbed-b151-48ad-8caf-fd144c541423\") " 
pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ctd75" Apr 23 17:59:15.769650 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:15.769618 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-gh2bq" event={"ID":"ce9eea56-0292-4499-91cf-f1f46f429db7","Type":"ContainerStarted","Data":"890c147c75fcee8c01d50b251f0f9ca521a5658636fab5e9d2b9d18ebab213bb"} Apr 23 17:59:15.769855 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:15.769745 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-gh2bq" Apr 23 17:59:15.803531 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:15.803435 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-gh2bq" podStartSLOduration=2.373010999 podStartE2EDuration="5.803405232s" podCreationTimestamp="2026-04-23 17:59:10 +0000 UTC" firstStartedPulling="2026-04-23 17:59:11.229489685 +0000 UTC m=+407.169503808" lastFinishedPulling="2026-04-23 17:59:14.659883922 +0000 UTC m=+410.599898041" observedRunningTime="2026-04-23 17:59:15.800383351 +0000 UTC m=+411.740397491" watchObservedRunningTime="2026-04-23 17:59:15.803405232 +0000 UTC m=+411.743419379" Apr 23 17:59:15.843750 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:15.843728 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"temp-vol\" (UniqueName: \"kubernetes.io/empty-dir/caf8dbed-b151-48ad-8caf-fd144c541423-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-ctd75\" (UID: \"caf8dbed-b151-48ad-8caf-fd144c541423\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ctd75" Apr 23 17:59:15.843898 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:15.843877 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: 
\"kubernetes.io/projected/caf8dbed-b151-48ad-8caf-fd144c541423-certificates\") pod \"keda-metrics-apiserver-7c9f485588-ctd75\" (UID: \"caf8dbed-b151-48ad-8caf-fd144c541423\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ctd75" Apr 23 17:59:15.843969 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:15.843935 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bppb2\" (UniqueName: \"kubernetes.io/projected/caf8dbed-b151-48ad-8caf-fd144c541423-kube-api-access-bppb2\") pod \"keda-metrics-apiserver-7c9f485588-ctd75\" (UID: \"caf8dbed-b151-48ad-8caf-fd144c541423\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ctd75" Apr 23 17:59:15.844030 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:59:15.844002 2570 secret.go:281] references non-existent secret key: tls.crt Apr 23 17:59:15.844030 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:59:15.844026 2570 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt Apr 23 17:59:15.844130 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:59:15.844046 2570 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-ctd75: references non-existent secret key: tls.crt Apr 23 17:59:15.844130 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:59:15.844101 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/caf8dbed-b151-48ad-8caf-fd144c541423-certificates podName:caf8dbed-b151-48ad-8caf-fd144c541423 nodeName:}" failed. No retries permitted until 2026-04-23 17:59:16.344085174 +0000 UTC m=+412.284099305 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/caf8dbed-b151-48ad-8caf-fd144c541423-certificates") pod "keda-metrics-apiserver-7c9f485588-ctd75" (UID: "caf8dbed-b151-48ad-8caf-fd144c541423") : references non-existent secret key: tls.crt Apr 23 17:59:15.844215 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:15.844143 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"temp-vol\" (UniqueName: \"kubernetes.io/empty-dir/caf8dbed-b151-48ad-8caf-fd144c541423-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-ctd75\" (UID: \"caf8dbed-b151-48ad-8caf-fd144c541423\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ctd75" Apr 23 17:59:15.857335 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:15.857315 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bppb2\" (UniqueName: \"kubernetes.io/projected/caf8dbed-b151-48ad-8caf-fd144c541423-kube-api-access-bppb2\") pod \"keda-metrics-apiserver-7c9f485588-ctd75\" (UID: \"caf8dbed-b151-48ad-8caf-fd144c541423\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ctd75" Apr 23 17:59:15.943085 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:15.943064 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/keda-admission-cf49989db-mfscx"] Apr 23 17:59:15.965895 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:15.965874 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-admission-cf49989db-mfscx"] Apr 23 17:59:15.965982 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:15.965970 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-keda/keda-admission-cf49989db-mfscx" Apr 23 17:59:15.968161 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:15.968136 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-admission-webhooks-certs\"" Apr 23 17:59:16.045870 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:16.045842 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/613604cb-9426-48da-9a39-2ddfdb8db5f8-certificates\") pod \"keda-operator-ffbb595cb-plpzc\" (UID: \"613604cb-9426-48da-9a39-2ddfdb8db5f8\") " pod="openshift-keda/keda-operator-ffbb595cb-plpzc" Apr 23 17:59:16.045988 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:59:16.045973 2570 secret.go:281] references non-existent secret key: ca.crt Apr 23 17:59:16.046036 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:59:16.045991 2570 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt Apr 23 17:59:16.046036 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:59:16.046002 2570 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-plpzc: references non-existent secret key: ca.crt Apr 23 17:59:16.046117 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:59:16.046049 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/613604cb-9426-48da-9a39-2ddfdb8db5f8-certificates podName:613604cb-9426-48da-9a39-2ddfdb8db5f8 nodeName:}" failed. No retries permitted until 2026-04-23 17:59:17.046035322 +0000 UTC m=+412.986049440 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/613604cb-9426-48da-9a39-2ddfdb8db5f8-certificates") pod "keda-operator-ffbb595cb-plpzc" (UID: "613604cb-9426-48da-9a39-2ddfdb8db5f8") : references non-existent secret key: ca.crt Apr 23 17:59:16.146626 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:16.146579 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/e124e2bd-3abb-4238-9b23-6949524a0257-certificates\") pod \"keda-admission-cf49989db-mfscx\" (UID: \"e124e2bd-3abb-4238-9b23-6949524a0257\") " pod="openshift-keda/keda-admission-cf49989db-mfscx" Apr 23 17:59:16.146626 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:16.146609 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52plf\" (UniqueName: \"kubernetes.io/projected/e124e2bd-3abb-4238-9b23-6949524a0257-kube-api-access-52plf\") pod \"keda-admission-cf49989db-mfscx\" (UID: \"e124e2bd-3abb-4238-9b23-6949524a0257\") " pod="openshift-keda/keda-admission-cf49989db-mfscx" Apr 23 17:59:16.247138 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:16.247118 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/e124e2bd-3abb-4238-9b23-6949524a0257-certificates\") pod \"keda-admission-cf49989db-mfscx\" (UID: \"e124e2bd-3abb-4238-9b23-6949524a0257\") " pod="openshift-keda/keda-admission-cf49989db-mfscx" Apr 23 17:59:16.247439 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:16.247146 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-52plf\" (UniqueName: \"kubernetes.io/projected/e124e2bd-3abb-4238-9b23-6949524a0257-kube-api-access-52plf\") pod \"keda-admission-cf49989db-mfscx\" (UID: \"e124e2bd-3abb-4238-9b23-6949524a0257\") " 
pod="openshift-keda/keda-admission-cf49989db-mfscx" Apr 23 17:59:16.249300 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:16.249282 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/e124e2bd-3abb-4238-9b23-6949524a0257-certificates\") pod \"keda-admission-cf49989db-mfscx\" (UID: \"e124e2bd-3abb-4238-9b23-6949524a0257\") " pod="openshift-keda/keda-admission-cf49989db-mfscx" Apr 23 17:59:16.254285 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:16.254266 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-52plf\" (UniqueName: \"kubernetes.io/projected/e124e2bd-3abb-4238-9b23-6949524a0257-kube-api-access-52plf\") pod \"keda-admission-cf49989db-mfscx\" (UID: \"e124e2bd-3abb-4238-9b23-6949524a0257\") " pod="openshift-keda/keda-admission-cf49989db-mfscx" Apr 23 17:59:16.277236 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:16.277216 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-keda/keda-admission-cf49989db-mfscx" Apr 23 17:59:16.347608 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:16.347580 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/caf8dbed-b151-48ad-8caf-fd144c541423-certificates\") pod \"keda-metrics-apiserver-7c9f485588-ctd75\" (UID: \"caf8dbed-b151-48ad-8caf-fd144c541423\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ctd75" Apr 23 17:59:16.347790 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:59:16.347770 2570 secret.go:281] references non-existent secret key: tls.crt Apr 23 17:59:16.347845 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:59:16.347797 2570 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt Apr 23 17:59:16.347845 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:59:16.347818 2570 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-ctd75: references non-existent secret key: tls.crt Apr 23 17:59:16.347911 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:59:16.347886 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/caf8dbed-b151-48ad-8caf-fd144c541423-certificates podName:caf8dbed-b151-48ad-8caf-fd144c541423 nodeName:}" failed. No retries permitted until 2026-04-23 17:59:17.347865836 +0000 UTC m=+413.287879956 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/caf8dbed-b151-48ad-8caf-fd144c541423-certificates") pod "keda-metrics-apiserver-7c9f485588-ctd75" (UID: "caf8dbed-b151-48ad-8caf-fd144c541423") : references non-existent secret key: tls.crt Apr 23 17:59:16.406137 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:16.406114 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-admission-cf49989db-mfscx"] Apr 23 17:59:16.408692 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:59:16.408666 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode124e2bd_3abb_4238_9b23_6949524a0257.slice/crio-68280d2550a8c702423e8f66f9352196a3d46089755d5200826c040c877525d4 WatchSource:0}: Error finding container 68280d2550a8c702423e8f66f9352196a3d46089755d5200826c040c877525d4: Status 404 returned error can't find the container with id 68280d2550a8c702423e8f66f9352196a3d46089755d5200826c040c877525d4 Apr 23 17:59:16.773121 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:16.773091 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-admission-cf49989db-mfscx" event={"ID":"e124e2bd-3abb-4238-9b23-6949524a0257","Type":"ContainerStarted","Data":"68280d2550a8c702423e8f66f9352196a3d46089755d5200826c040c877525d4"} Apr 23 17:59:17.053997 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:17.053924 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/613604cb-9426-48da-9a39-2ddfdb8db5f8-certificates\") pod \"keda-operator-ffbb595cb-plpzc\" (UID: \"613604cb-9426-48da-9a39-2ddfdb8db5f8\") " pod="openshift-keda/keda-operator-ffbb595cb-plpzc" Apr 23 17:59:17.054150 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:59:17.054121 2570 secret.go:281] references non-existent secret key: ca.crt Apr 23 17:59:17.054150 ip-10-0-141-209 kubenswrapper[2570]: E0423 
17:59:17.054137 2570 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt Apr 23 17:59:17.054150 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:59:17.054148 2570 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-plpzc: references non-existent secret key: ca.crt Apr 23 17:59:17.054299 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:59:17.054201 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/613604cb-9426-48da-9a39-2ddfdb8db5f8-certificates podName:613604cb-9426-48da-9a39-2ddfdb8db5f8 nodeName:}" failed. No retries permitted until 2026-04-23 17:59:19.054182196 +0000 UTC m=+414.994196334 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/613604cb-9426-48da-9a39-2ddfdb8db5f8-certificates") pod "keda-operator-ffbb595cb-plpzc" (UID: "613604cb-9426-48da-9a39-2ddfdb8db5f8") : references non-existent secret key: ca.crt Apr 23 17:59:17.358111 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:17.358032 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/caf8dbed-b151-48ad-8caf-fd144c541423-certificates\") pod \"keda-metrics-apiserver-7c9f485588-ctd75\" (UID: \"caf8dbed-b151-48ad-8caf-fd144c541423\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ctd75" Apr 23 17:59:17.358531 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:59:17.358145 2570 secret.go:281] references non-existent secret key: tls.crt Apr 23 17:59:17.358531 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:59:17.358164 2570 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt Apr 23 17:59:17.358531 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:59:17.358192 2570 projected.go:194] Error preparing data for projected volume 
certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-ctd75: references non-existent secret key: tls.crt Apr 23 17:59:17.358531 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:59:17.358257 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/caf8dbed-b151-48ad-8caf-fd144c541423-certificates podName:caf8dbed-b151-48ad-8caf-fd144c541423 nodeName:}" failed. No retries permitted until 2026-04-23 17:59:19.358239415 +0000 UTC m=+415.298253534 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/caf8dbed-b151-48ad-8caf-fd144c541423-certificates") pod "keda-metrics-apiserver-7c9f485588-ctd75" (UID: "caf8dbed-b151-48ad-8caf-fd144c541423") : references non-existent secret key: tls.crt Apr 23 17:59:17.777675 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:17.777643 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-admission-cf49989db-mfscx" event={"ID":"e124e2bd-3abb-4238-9b23-6949524a0257","Type":"ContainerStarted","Data":"a049e3c009fbc6ca1d5b568195ec63c66ae694c42c1410cd62fa60d72b901d75"} Apr 23 17:59:17.777799 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:17.777754 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-admission-cf49989db-mfscx" Apr 23 17:59:17.794514 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:17.794473 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/keda-admission-cf49989db-mfscx" podStartSLOduration=1.590055022 podStartE2EDuration="2.794462133s" podCreationTimestamp="2026-04-23 17:59:15 +0000 UTC" firstStartedPulling="2026-04-23 17:59:16.409940101 +0000 UTC m=+412.349954219" lastFinishedPulling="2026-04-23 17:59:17.614347204 +0000 UTC m=+413.554361330" observedRunningTime="2026-04-23 17:59:17.793240257 +0000 UTC m=+413.733254396" watchObservedRunningTime="2026-04-23 17:59:17.794462133 +0000 UTC 
m=+413.734476274" Apr 23 17:59:19.070009 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:19.069978 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/613604cb-9426-48da-9a39-2ddfdb8db5f8-certificates\") pod \"keda-operator-ffbb595cb-plpzc\" (UID: \"613604cb-9426-48da-9a39-2ddfdb8db5f8\") " pod="openshift-keda/keda-operator-ffbb595cb-plpzc" Apr 23 17:59:19.070460 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:59:19.070098 2570 secret.go:281] references non-existent secret key: ca.crt Apr 23 17:59:19.070460 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:59:19.070115 2570 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt Apr 23 17:59:19.070460 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:59:19.070123 2570 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-plpzc: references non-existent secret key: ca.crt Apr 23 17:59:19.070460 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:59:19.070171 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/613604cb-9426-48da-9a39-2ddfdb8db5f8-certificates podName:613604cb-9426-48da-9a39-2ddfdb8db5f8 nodeName:}" failed. No retries permitted until 2026-04-23 17:59:23.070157001 +0000 UTC m=+419.010171118 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/613604cb-9426-48da-9a39-2ddfdb8db5f8-certificates") pod "keda-operator-ffbb595cb-plpzc" (UID: "613604cb-9426-48da-9a39-2ddfdb8db5f8") : references non-existent secret key: ca.crt Apr 23 17:59:19.371456 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:19.371398 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/caf8dbed-b151-48ad-8caf-fd144c541423-certificates\") pod \"keda-metrics-apiserver-7c9f485588-ctd75\" (UID: \"caf8dbed-b151-48ad-8caf-fd144c541423\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ctd75" Apr 23 17:59:19.371561 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:59:19.371510 2570 secret.go:281] references non-existent secret key: tls.crt Apr 23 17:59:19.371561 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:59:19.371526 2570 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt Apr 23 17:59:19.371561 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:59:19.371540 2570 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-ctd75: references non-existent secret key: tls.crt Apr 23 17:59:19.371662 ip-10-0-141-209 kubenswrapper[2570]: E0423 17:59:19.371581 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/caf8dbed-b151-48ad-8caf-fd144c541423-certificates podName:caf8dbed-b151-48ad-8caf-fd144c541423 nodeName:}" failed. No retries permitted until 2026-04-23 17:59:23.371569133 +0000 UTC m=+419.311583250 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/caf8dbed-b151-48ad-8caf-fd144c541423-certificates") pod "keda-metrics-apiserver-7c9f485588-ctd75" (UID: "caf8dbed-b151-48ad-8caf-fd144c541423") : references non-existent secret key: tls.crt Apr 23 17:59:23.094878 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:23.094841 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/613604cb-9426-48da-9a39-2ddfdb8db5f8-certificates\") pod \"keda-operator-ffbb595cb-plpzc\" (UID: \"613604cb-9426-48da-9a39-2ddfdb8db5f8\") " pod="openshift-keda/keda-operator-ffbb595cb-plpzc" Apr 23 17:59:23.097318 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:23.097296 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/613604cb-9426-48da-9a39-2ddfdb8db5f8-certificates\") pod \"keda-operator-ffbb595cb-plpzc\" (UID: \"613604cb-9426-48da-9a39-2ddfdb8db5f8\") " pod="openshift-keda/keda-operator-ffbb595cb-plpzc" Apr 23 17:59:23.367631 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:23.367559 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-keda/keda-operator-ffbb595cb-plpzc" Apr 23 17:59:23.397167 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:23.397135 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/caf8dbed-b151-48ad-8caf-fd144c541423-certificates\") pod \"keda-metrics-apiserver-7c9f485588-ctd75\" (UID: \"caf8dbed-b151-48ad-8caf-fd144c541423\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ctd75" Apr 23 17:59:23.399589 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:23.399559 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/caf8dbed-b151-48ad-8caf-fd144c541423-certificates\") pod \"keda-metrics-apiserver-7c9f485588-ctd75\" (UID: \"caf8dbed-b151-48ad-8caf-fd144c541423\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ctd75" Apr 23 17:59:23.409471 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:23.409350 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ctd75" Apr 23 17:59:23.537465 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:23.537435 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-operator-ffbb595cb-plpzc"] Apr 23 17:59:23.541157 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:59:23.541128 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod613604cb_9426_48da_9a39_2ddfdb8db5f8.slice/crio-091a2a60165219cc186fcacde88628a10fa3c628f9c99bb86189901ce20ee627 WatchSource:0}: Error finding container 091a2a60165219cc186fcacde88628a10fa3c628f9c99bb86189901ce20ee627: Status 404 returned error can't find the container with id 091a2a60165219cc186fcacde88628a10fa3c628f9c99bb86189901ce20ee627 Apr 23 17:59:23.570629 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:23.570608 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-ctd75"] Apr 23 17:59:23.573192 ip-10-0-141-209 kubenswrapper[2570]: W0423 17:59:23.573165 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcaf8dbed_b151_48ad_8caf_fd144c541423.slice/crio-e51ee58c0d55f48fa5e6e0539c1538922781479fe6ae00f79ae13fc9d72b480e WatchSource:0}: Error finding container e51ee58c0d55f48fa5e6e0539c1538922781479fe6ae00f79ae13fc9d72b480e: Status 404 returned error can't find the container with id e51ee58c0d55f48fa5e6e0539c1538922781479fe6ae00f79ae13fc9d72b480e Apr 23 17:59:23.795275 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:23.795244 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-operator-ffbb595cb-plpzc" event={"ID":"613604cb-9426-48da-9a39-2ddfdb8db5f8","Type":"ContainerStarted","Data":"091a2a60165219cc186fcacde88628a10fa3c628f9c99bb86189901ce20ee627"} Apr 23 17:59:23.796144 ip-10-0-141-209 kubenswrapper[2570]: I0423 
17:59:23.796122 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ctd75" event={"ID":"caf8dbed-b151-48ad-8caf-fd144c541423","Type":"ContainerStarted","Data":"e51ee58c0d55f48fa5e6e0539c1538922781479fe6ae00f79ae13fc9d72b480e"} Apr 23 17:59:27.813706 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:27.813663 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-operator-ffbb595cb-plpzc" event={"ID":"613604cb-9426-48da-9a39-2ddfdb8db5f8","Type":"ContainerStarted","Data":"bdda5788c0bac7b7a3d009b9552fc7e67790fc2a84eeba67845dfe3356d3fb97"} Apr 23 17:59:27.813706 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:27.813711 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-operator-ffbb595cb-plpzc" Apr 23 17:59:27.815077 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:27.815055 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ctd75" event={"ID":"caf8dbed-b151-48ad-8caf-fd144c541423","Type":"ContainerStarted","Data":"9391bd85cc57c9c75c3db187d110b95d41e7cb22308af66be23a6055c586892c"} Apr 23 17:59:27.815205 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:27.815167 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ctd75" Apr 23 17:59:27.830278 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:27.830235 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/keda-operator-ffbb595cb-plpzc" podStartSLOduration=9.193079782 podStartE2EDuration="12.830219257s" podCreationTimestamp="2026-04-23 17:59:15 +0000 UTC" firstStartedPulling="2026-04-23 17:59:23.542645668 +0000 UTC m=+419.482659785" lastFinishedPulling="2026-04-23 17:59:27.179785139 +0000 UTC m=+423.119799260" observedRunningTime="2026-04-23 17:59:27.82883926 +0000 UTC m=+423.768853400" 
watchObservedRunningTime="2026-04-23 17:59:27.830219257 +0000 UTC m=+423.770233396" Apr 23 17:59:27.845249 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:27.845206 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ctd75" podStartSLOduration=9.24351198 podStartE2EDuration="12.845194468s" podCreationTimestamp="2026-04-23 17:59:15 +0000 UTC" firstStartedPulling="2026-04-23 17:59:23.574452326 +0000 UTC m=+419.514466457" lastFinishedPulling="2026-04-23 17:59:27.176134822 +0000 UTC m=+423.116148945" observedRunningTime="2026-04-23 17:59:27.844717106 +0000 UTC m=+423.784731246" watchObservedRunningTime="2026-04-23 17:59:27.845194468 +0000 UTC m=+423.785208608" Apr 23 17:59:36.776503 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:36.776467 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-gh2bq" Apr 23 17:59:38.783241 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:38.783204 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-admission-cf49989db-mfscx" Apr 23 17:59:38.823165 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:38.823126 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ctd75" Apr 23 17:59:48.821559 ip-10-0-141-209 kubenswrapper[2570]: I0423 17:59:48.821518 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-operator-ffbb595cb-plpzc" Apr 23 18:00:21.512080 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:21.512046 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/llmisvc-controller-manager-68cc5db7c4-b4x7l"] Apr 23 18:00:21.515787 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:21.515767 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/llmisvc-controller-manager-68cc5db7c4-b4x7l" Apr 23 18:00:21.517613 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:21.517592 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"llmisvc-webhook-server-cert\"" Apr 23 18:00:21.518250 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:21.518230 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"kube-root-ca.crt\"" Apr 23 18:00:21.518745 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:21.518717 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"llmisvc-controller-manager-dockercfg-grwxx\"" Apr 23 18:00:21.518821 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:21.518717 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"openshift-service-ca.crt\"" Apr 23 18:00:21.521956 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:21.521935 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/llmisvc-controller-manager-68cc5db7c4-b4x7l"] Apr 23 18:00:21.535304 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:21.535279 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/seaweedfs-86cc847c5c-ws688"] Apr 23 18:00:21.538706 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:21.538684 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/seaweedfs-86cc847c5c-ws688" Apr 23 18:00:21.540940 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:21.540924 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"mlpipeline-s3-artifact\"" Apr 23 18:00:21.541028 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:21.540946 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"default-dockercfg-td2rr\"" Apr 23 18:00:21.544891 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:21.544873 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/seaweedfs-86cc847c5c-ws688"] Apr 23 18:00:21.606079 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:21.606052 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/c3320332-0da7-4f06-825a-e7c0d6f944eb-data\") pod \"seaweedfs-86cc847c5c-ws688\" (UID: \"c3320332-0da7-4f06-825a-e7c0d6f944eb\") " pod="kserve/seaweedfs-86cc847c5c-ws688" Apr 23 18:00:21.606195 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:21.606105 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c57rj\" (UniqueName: \"kubernetes.io/projected/85461369-9b65-43ee-a980-6a0a2902cc7d-kube-api-access-c57rj\") pod \"llmisvc-controller-manager-68cc5db7c4-b4x7l\" (UID: \"85461369-9b65-43ee-a980-6a0a2902cc7d\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-b4x7l" Apr 23 18:00:21.606195 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:21.606130 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbswv\" (UniqueName: \"kubernetes.io/projected/c3320332-0da7-4f06-825a-e7c0d6f944eb-kube-api-access-dbswv\") pod \"seaweedfs-86cc847c5c-ws688\" (UID: \"c3320332-0da7-4f06-825a-e7c0d6f944eb\") " pod="kserve/seaweedfs-86cc847c5c-ws688" Apr 23 18:00:21.606195 ip-10-0-141-209 
kubenswrapper[2570]: I0423 18:00:21.606169 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/85461369-9b65-43ee-a980-6a0a2902cc7d-cert\") pod \"llmisvc-controller-manager-68cc5db7c4-b4x7l\" (UID: \"85461369-9b65-43ee-a980-6a0a2902cc7d\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-b4x7l" Apr 23 18:00:21.707502 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:21.707478 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/c3320332-0da7-4f06-825a-e7c0d6f944eb-data\") pod \"seaweedfs-86cc847c5c-ws688\" (UID: \"c3320332-0da7-4f06-825a-e7c0d6f944eb\") " pod="kserve/seaweedfs-86cc847c5c-ws688" Apr 23 18:00:21.707608 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:21.707527 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c57rj\" (UniqueName: \"kubernetes.io/projected/85461369-9b65-43ee-a980-6a0a2902cc7d-kube-api-access-c57rj\") pod \"llmisvc-controller-manager-68cc5db7c4-b4x7l\" (UID: \"85461369-9b65-43ee-a980-6a0a2902cc7d\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-b4x7l" Apr 23 18:00:21.707608 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:21.707556 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dbswv\" (UniqueName: \"kubernetes.io/projected/c3320332-0da7-4f06-825a-e7c0d6f944eb-kube-api-access-dbswv\") pod \"seaweedfs-86cc847c5c-ws688\" (UID: \"c3320332-0da7-4f06-825a-e7c0d6f944eb\") " pod="kserve/seaweedfs-86cc847c5c-ws688" Apr 23 18:00:21.707608 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:21.707584 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/85461369-9b65-43ee-a980-6a0a2902cc7d-cert\") pod \"llmisvc-controller-manager-68cc5db7c4-b4x7l\" (UID: \"85461369-9b65-43ee-a980-6a0a2902cc7d\") " 
pod="kserve/llmisvc-controller-manager-68cc5db7c4-b4x7l" Apr 23 18:00:21.707885 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:21.707863 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/c3320332-0da7-4f06-825a-e7c0d6f944eb-data\") pod \"seaweedfs-86cc847c5c-ws688\" (UID: \"c3320332-0da7-4f06-825a-e7c0d6f944eb\") " pod="kserve/seaweedfs-86cc847c5c-ws688" Apr 23 18:00:21.709841 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:21.709818 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/85461369-9b65-43ee-a980-6a0a2902cc7d-cert\") pod \"llmisvc-controller-manager-68cc5db7c4-b4x7l\" (UID: \"85461369-9b65-43ee-a980-6a0a2902cc7d\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-b4x7l" Apr 23 18:00:21.715694 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:21.715673 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbswv\" (UniqueName: \"kubernetes.io/projected/c3320332-0da7-4f06-825a-e7c0d6f944eb-kube-api-access-dbswv\") pod \"seaweedfs-86cc847c5c-ws688\" (UID: \"c3320332-0da7-4f06-825a-e7c0d6f944eb\") " pod="kserve/seaweedfs-86cc847c5c-ws688" Apr 23 18:00:21.715788 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:21.715745 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c57rj\" (UniqueName: \"kubernetes.io/projected/85461369-9b65-43ee-a980-6a0a2902cc7d-kube-api-access-c57rj\") pod \"llmisvc-controller-manager-68cc5db7c4-b4x7l\" (UID: \"85461369-9b65-43ee-a980-6a0a2902cc7d\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-b4x7l" Apr 23 18:00:21.827249 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:21.827187 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/llmisvc-controller-manager-68cc5db7c4-b4x7l" Apr 23 18:00:21.849917 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:21.849893 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/seaweedfs-86cc847c5c-ws688" Apr 23 18:00:21.966474 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:21.964187 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/llmisvc-controller-manager-68cc5db7c4-b4x7l"] Apr 23 18:00:21.967305 ip-10-0-141-209 kubenswrapper[2570]: W0423 18:00:21.967280 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod85461369_9b65_43ee_a980_6a0a2902cc7d.slice/crio-979c824cf2d082fdab3ee77be1e4beab890001121a39651de78c6acbe1e5619e WatchSource:0}: Error finding container 979c824cf2d082fdab3ee77be1e4beab890001121a39651de78c6acbe1e5619e: Status 404 returned error can't find the container with id 979c824cf2d082fdab3ee77be1e4beab890001121a39651de78c6acbe1e5619e Apr 23 18:00:21.980622 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:21.980599 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/seaweedfs-86cc847c5c-ws688"] Apr 23 18:00:21.984220 ip-10-0-141-209 kubenswrapper[2570]: W0423 18:00:21.984195 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3320332_0da7_4f06_825a_e7c0d6f944eb.slice/crio-cb9d4702bdc632a77611486e4aa74f12005552d9c6aee5d31cb50ac4e8b93a4a WatchSource:0}: Error finding container cb9d4702bdc632a77611486e4aa74f12005552d9c6aee5d31cb50ac4e8b93a4a: Status 404 returned error can't find the container with id cb9d4702bdc632a77611486e4aa74f12005552d9c6aee5d31cb50ac4e8b93a4a Apr 23 18:00:22.017426 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:22.017383 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/seaweedfs-86cc847c5c-ws688" 
event={"ID":"c3320332-0da7-4f06-825a-e7c0d6f944eb","Type":"ContainerStarted","Data":"cb9d4702bdc632a77611486e4aa74f12005552d9c6aee5d31cb50ac4e8b93a4a"} Apr 23 18:00:22.018469 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:22.018442 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/llmisvc-controller-manager-68cc5db7c4-b4x7l" event={"ID":"85461369-9b65-43ee-a980-6a0a2902cc7d","Type":"ContainerStarted","Data":"979c824cf2d082fdab3ee77be1e4beab890001121a39651de78c6acbe1e5619e"} Apr 23 18:00:25.031812 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:25.031780 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/seaweedfs-86cc847c5c-ws688" event={"ID":"c3320332-0da7-4f06-825a-e7c0d6f944eb","Type":"ContainerStarted","Data":"523962d0c765bac26742f931b87cf8024842096dc6072f321f73217176cbe306"} Apr 23 18:00:25.032222 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:25.031899 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/seaweedfs-86cc847c5c-ws688" Apr 23 18:00:25.047329 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:25.047280 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/seaweedfs-86cc847c5c-ws688" podStartSLOduration=1.577831607 podStartE2EDuration="4.047266189s" podCreationTimestamp="2026-04-23 18:00:21 +0000 UTC" firstStartedPulling="2026-04-23 18:00:21.985458385 +0000 UTC m=+477.925472502" lastFinishedPulling="2026-04-23 18:00:24.454892966 +0000 UTC m=+480.394907084" observedRunningTime="2026-04-23 18:00:25.046741733 +0000 UTC m=+480.986755870" watchObservedRunningTime="2026-04-23 18:00:25.047266189 +0000 UTC m=+480.987280312" Apr 23 18:00:31.036996 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:31.036963 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/seaweedfs-86cc847c5c-ws688" Apr 23 18:00:42.097642 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:42.097605 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="kserve/llmisvc-controller-manager-68cc5db7c4-b4x7l" event={"ID":"85461369-9b65-43ee-a980-6a0a2902cc7d","Type":"ContainerStarted","Data":"c1622ca4bfd127c80f6cee0742f4ff91aac72b6dcfc150e24277fe7ac4fe6bce"} Apr 23 18:00:42.098022 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:42.097710 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/llmisvc-controller-manager-68cc5db7c4-b4x7l" Apr 23 18:00:42.114442 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:00:42.114381 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/llmisvc-controller-manager-68cc5db7c4-b4x7l" podStartSLOduration=2.066616052 podStartE2EDuration="21.114369026s" podCreationTimestamp="2026-04-23 18:00:21 +0000 UTC" firstStartedPulling="2026-04-23 18:00:21.968910849 +0000 UTC m=+477.908924967" lastFinishedPulling="2026-04-23 18:00:41.01666382 +0000 UTC m=+496.956677941" observedRunningTime="2026-04-23 18:00:42.113409604 +0000 UTC m=+498.053423744" watchObservedRunningTime="2026-04-23 18:00:42.114369026 +0000 UTC m=+498.054383169" Apr 23 18:01:13.101870 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:01:13.101830 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/llmisvc-controller-manager-68cc5db7c4-b4x7l" Apr 23 18:01:48.154292 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:01:48.154253 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/odh-model-controller-696fc77849-rncdc"] Apr 23 18:01:48.157337 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:01:48.157315 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/odh-model-controller-696fc77849-rncdc" Apr 23 18:01:48.159741 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:01:48.159721 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"odh-model-controller-dockercfg-gclj2\"" Apr 23 18:01:48.159741 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:01:48.159733 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"odh-model-controller-webhook-cert\"" Apr 23 18:01:48.165855 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:01:48.165833 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/odh-model-controller-696fc77849-rncdc"] Apr 23 18:01:48.220226 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:01:48.220201 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b5dt\" (UniqueName: \"kubernetes.io/projected/44b89321-8c0d-460f-8069-57c2b230be36-kube-api-access-6b5dt\") pod \"odh-model-controller-696fc77849-rncdc\" (UID: \"44b89321-8c0d-460f-8069-57c2b230be36\") " pod="kserve/odh-model-controller-696fc77849-rncdc" Apr 23 18:01:48.220312 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:01:48.220246 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/44b89321-8c0d-460f-8069-57c2b230be36-cert\") pod \"odh-model-controller-696fc77849-rncdc\" (UID: \"44b89321-8c0d-460f-8069-57c2b230be36\") " pod="kserve/odh-model-controller-696fc77849-rncdc" Apr 23 18:01:48.321403 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:01:48.321383 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6b5dt\" (UniqueName: \"kubernetes.io/projected/44b89321-8c0d-460f-8069-57c2b230be36-kube-api-access-6b5dt\") pod \"odh-model-controller-696fc77849-rncdc\" (UID: \"44b89321-8c0d-460f-8069-57c2b230be36\") " pod="kserve/odh-model-controller-696fc77849-rncdc" 
Apr 23 18:01:48.321513 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:01:48.321435 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/44b89321-8c0d-460f-8069-57c2b230be36-cert\") pod \"odh-model-controller-696fc77849-rncdc\" (UID: \"44b89321-8c0d-460f-8069-57c2b230be36\") " pod="kserve/odh-model-controller-696fc77849-rncdc" Apr 23 18:01:48.321581 ip-10-0-141-209 kubenswrapper[2570]: E0423 18:01:48.321523 2570 secret.go:189] Couldn't get secret kserve/odh-model-controller-webhook-cert: secret "odh-model-controller-webhook-cert" not found Apr 23 18:01:48.321636 ip-10-0-141-209 kubenswrapper[2570]: E0423 18:01:48.321583 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44b89321-8c0d-460f-8069-57c2b230be36-cert podName:44b89321-8c0d-460f-8069-57c2b230be36 nodeName:}" failed. No retries permitted until 2026-04-23 18:01:48.821565417 +0000 UTC m=+564.761579537 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/44b89321-8c0d-460f-8069-57c2b230be36-cert") pod "odh-model-controller-696fc77849-rncdc" (UID: "44b89321-8c0d-460f-8069-57c2b230be36") : secret "odh-model-controller-webhook-cert" not found Apr 23 18:01:48.329693 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:01:48.329666 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b5dt\" (UniqueName: \"kubernetes.io/projected/44b89321-8c0d-460f-8069-57c2b230be36-kube-api-access-6b5dt\") pod \"odh-model-controller-696fc77849-rncdc\" (UID: \"44b89321-8c0d-460f-8069-57c2b230be36\") " pod="kserve/odh-model-controller-696fc77849-rncdc" Apr 23 18:01:48.825514 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:01:48.825487 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/44b89321-8c0d-460f-8069-57c2b230be36-cert\") pod \"odh-model-controller-696fc77849-rncdc\" (UID: \"44b89321-8c0d-460f-8069-57c2b230be36\") " pod="kserve/odh-model-controller-696fc77849-rncdc" Apr 23 18:01:48.827809 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:01:48.827785 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/44b89321-8c0d-460f-8069-57c2b230be36-cert\") pod \"odh-model-controller-696fc77849-rncdc\" (UID: \"44b89321-8c0d-460f-8069-57c2b230be36\") " pod="kserve/odh-model-controller-696fc77849-rncdc" Apr 23 18:01:49.069196 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:01:49.069169 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/odh-model-controller-696fc77849-rncdc" Apr 23 18:01:49.196693 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:01:49.196673 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/odh-model-controller-696fc77849-rncdc"] Apr 23 18:01:49.198899 ip-10-0-141-209 kubenswrapper[2570]: W0423 18:01:49.198863 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44b89321_8c0d_460f_8069_57c2b230be36.slice/crio-a6a294e3f4eadb914fbd99e873704b0ab34af955e90f9747ad8cb5bf0dfd8d08 WatchSource:0}: Error finding container a6a294e3f4eadb914fbd99e873704b0ab34af955e90f9747ad8cb5bf0dfd8d08: Status 404 returned error can't find the container with id a6a294e3f4eadb914fbd99e873704b0ab34af955e90f9747ad8cb5bf0dfd8d08 Apr 23 18:01:49.336461 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:01:49.336388 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/odh-model-controller-696fc77849-rncdc" event={"ID":"44b89321-8c0d-460f-8069-57c2b230be36","Type":"ContainerStarted","Data":"a6a294e3f4eadb914fbd99e873704b0ab34af955e90f9747ad8cb5bf0dfd8d08"} Apr 23 18:01:52.349109 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:01:52.349070 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/odh-model-controller-696fc77849-rncdc" event={"ID":"44b89321-8c0d-460f-8069-57c2b230be36","Type":"ContainerStarted","Data":"9f1beab19345bb4122904238991c62562bf24453b5ec777d70e5e4ec3beda461"} Apr 23 18:01:52.349550 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:01:52.349150 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/odh-model-controller-696fc77849-rncdc" Apr 23 18:01:52.369556 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:01:52.369504 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/odh-model-controller-696fc77849-rncdc" podStartSLOduration=1.8568985329999999 podStartE2EDuration="4.369491915s" 
podCreationTimestamp="2026-04-23 18:01:48 +0000 UTC" firstStartedPulling="2026-04-23 18:01:49.200171 +0000 UTC m=+565.140185118" lastFinishedPulling="2026-04-23 18:01:51.71276438 +0000 UTC m=+567.652778500" observedRunningTime="2026-04-23 18:01:52.367682097 +0000 UTC m=+568.307696236" watchObservedRunningTime="2026-04-23 18:01:52.369491915 +0000 UTC m=+568.309506055" Apr 23 18:02:03.355047 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:02:03.355009 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/odh-model-controller-696fc77849-rncdc" Apr 23 18:02:24.577433 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:02:24.577390 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-dzhrv_3f006f82-b551-4a19-b684-091814d45d54/console-operator/2.log" Apr 23 18:02:24.577975 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:02:24.577858 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-dzhrv_3f006f82-b551-4a19-b684-091814d45d54/console-operator/2.log" Apr 23 18:07:24.598170 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:07:24.598142 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-dzhrv_3f006f82-b551-4a19-b684-091814d45d54/console-operator/2.log" Apr 23 18:07:24.600791 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:07:24.600769 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-dzhrv_3f006f82-b551-4a19-b684-091814d45d54/console-operator/2.log" Apr 23 18:12:24.619023 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:12:24.618995 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-dzhrv_3f006f82-b551-4a19-b684-091814d45d54/console-operator/2.log" Apr 23 18:12:24.622986 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:12:24.622964 2570 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-dzhrv_3f006f82-b551-4a19-b684-091814d45d54/console-operator/2.log" Apr 23 18:16:19.709095 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:19.708994 2570 ???:1] "http: TLS handshake error from 10.0.141.209:60918: EOF" Apr 23 18:16:19.710810 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:19.710774 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_global-pull-secret-syncer-mlz2c_90608d2c-b6cd-4fca-968e-9fc7cbf593f8/global-pull-secret-syncer/0.log" Apr 23 18:16:19.926395 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:19.926365 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_konnectivity-agent-t7tbj_ed7906bd-d603-4e6e-8e63-369f394f24b0/konnectivity-agent/0.log" Apr 23 18:16:19.996157 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:19.996089 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_kube-apiserver-proxy-ip-10-0-141-209.ec2.internal_7d4522ab95c28bfad158f8d5f296881d/haproxy/0.log" Apr 23 18:16:23.380458 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:23.380368 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_cluster-monitoring-operator-75587bd455-7hdwh_dcab8057-5f29-4508-9628-e8ee8882286b/cluster-monitoring-operator/0.log" Apr 23 18:16:23.700777 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:23.700754 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-725tl_8e4641e7-2db7-4e32-bcc3-f0998ca0f38e/node-exporter/0.log" Apr 23 18:16:23.723301 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:23.723282 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-725tl_8e4641e7-2db7-4e32-bcc3-f0998ca0f38e/kube-rbac-proxy/0.log" Apr 23 18:16:23.746272 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:23.746251 2570 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-monitoring_node-exporter-725tl_8e4641e7-2db7-4e32-bcc3-f0998ca0f38e/init-textfile/0.log" Apr 23 18:16:25.455699 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:25.455669 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-console_networking-console-plugin-cb95c66f6-2xz9v_72bc8f52-06f5-4c26-b9dc-1db461cbb3cd/networking-console-plugin/0.log" Apr 23 18:16:25.902084 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:25.902005 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-dzhrv_3f006f82-b551-4a19-b684-091814d45d54/console-operator/2.log" Apr 23 18:16:25.906362 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:25.906338 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-dzhrv_3f006f82-b551-4a19-b684-091814d45d54/console-operator/3.log" Apr 23 18:16:26.608819 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:26.608781 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-46lhs/perf-node-gather-daemonset-wkzht"] Apr 23 18:16:26.611268 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:26.611250 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-46lhs/perf-node-gather-daemonset-wkzht" Apr 23 18:16:26.613328 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:26.613307 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-46lhs\"/\"openshift-service-ca.crt\"" Apr 23 18:16:26.613840 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:26.613816 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-46lhs\"/\"default-dockercfg-g7l5f\"" Apr 23 18:16:26.613973 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:26.613815 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-46lhs\"/\"kube-root-ca.crt\"" Apr 23 18:16:26.616833 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:26.616806 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-46lhs/perf-node-gather-daemonset-wkzht"] Apr 23 18:16:26.689035 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:26.689005 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3dfdc786-62d0-4372-846f-f9ac5736c2ba-lib-modules\") pod \"perf-node-gather-daemonset-wkzht\" (UID: \"3dfdc786-62d0-4372-846f-f9ac5736c2ba\") " pod="openshift-must-gather-46lhs/perf-node-gather-daemonset-wkzht" Apr 23 18:16:26.689152 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:26.689039 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x88xf\" (UniqueName: \"kubernetes.io/projected/3dfdc786-62d0-4372-846f-f9ac5736c2ba-kube-api-access-x88xf\") pod \"perf-node-gather-daemonset-wkzht\" (UID: \"3dfdc786-62d0-4372-846f-f9ac5736c2ba\") " pod="openshift-must-gather-46lhs/perf-node-gather-daemonset-wkzht" Apr 23 18:16:26.689152 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:26.689092 2570 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/3dfdc786-62d0-4372-846f-f9ac5736c2ba-proc\") pod \"perf-node-gather-daemonset-wkzht\" (UID: \"3dfdc786-62d0-4372-846f-f9ac5736c2ba\") " pod="openshift-must-gather-46lhs/perf-node-gather-daemonset-wkzht" Apr 23 18:16:26.689266 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:26.689158 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3dfdc786-62d0-4372-846f-f9ac5736c2ba-sys\") pod \"perf-node-gather-daemonset-wkzht\" (UID: \"3dfdc786-62d0-4372-846f-f9ac5736c2ba\") " pod="openshift-must-gather-46lhs/perf-node-gather-daemonset-wkzht" Apr 23 18:16:26.689266 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:26.689184 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/3dfdc786-62d0-4372-846f-f9ac5736c2ba-podres\") pod \"perf-node-gather-daemonset-wkzht\" (UID: \"3dfdc786-62d0-4372-846f-f9ac5736c2ba\") " pod="openshift-must-gather-46lhs/perf-node-gather-daemonset-wkzht" Apr 23 18:16:26.728168 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:26.728143 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_volume-data-source-validator-7c6cbb6c87-2t8xp_128cf630-366d-4caf-a72e-dec2e74c92ba/volume-data-source-validator/0.log" Apr 23 18:16:26.789927 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:26.789907 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/3dfdc786-62d0-4372-846f-f9ac5736c2ba-proc\") pod \"perf-node-gather-daemonset-wkzht\" (UID: \"3dfdc786-62d0-4372-846f-f9ac5736c2ba\") " pod="openshift-must-gather-46lhs/perf-node-gather-daemonset-wkzht" Apr 23 18:16:26.790036 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:26.789942 2570 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3dfdc786-62d0-4372-846f-f9ac5736c2ba-sys\") pod \"perf-node-gather-daemonset-wkzht\" (UID: \"3dfdc786-62d0-4372-846f-f9ac5736c2ba\") " pod="openshift-must-gather-46lhs/perf-node-gather-daemonset-wkzht" Apr 23 18:16:26.790036 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:26.789961 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/3dfdc786-62d0-4372-846f-f9ac5736c2ba-podres\") pod \"perf-node-gather-daemonset-wkzht\" (UID: \"3dfdc786-62d0-4372-846f-f9ac5736c2ba\") " pod="openshift-must-gather-46lhs/perf-node-gather-daemonset-wkzht" Apr 23 18:16:26.790036 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:26.790002 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3dfdc786-62d0-4372-846f-f9ac5736c2ba-lib-modules\") pod \"perf-node-gather-daemonset-wkzht\" (UID: \"3dfdc786-62d0-4372-846f-f9ac5736c2ba\") " pod="openshift-must-gather-46lhs/perf-node-gather-daemonset-wkzht" Apr 23 18:16:26.790036 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:26.790026 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x88xf\" (UniqueName: \"kubernetes.io/projected/3dfdc786-62d0-4372-846f-f9ac5736c2ba-kube-api-access-x88xf\") pod \"perf-node-gather-daemonset-wkzht\" (UID: \"3dfdc786-62d0-4372-846f-f9ac5736c2ba\") " pod="openshift-must-gather-46lhs/perf-node-gather-daemonset-wkzht" Apr 23 18:16:26.790036 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:26.790029 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/3dfdc786-62d0-4372-846f-f9ac5736c2ba-proc\") pod \"perf-node-gather-daemonset-wkzht\" (UID: \"3dfdc786-62d0-4372-846f-f9ac5736c2ba\") " 
pod="openshift-must-gather-46lhs/perf-node-gather-daemonset-wkzht" Apr 23 18:16:26.790237 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:26.790041 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3dfdc786-62d0-4372-846f-f9ac5736c2ba-sys\") pod \"perf-node-gather-daemonset-wkzht\" (UID: \"3dfdc786-62d0-4372-846f-f9ac5736c2ba\") " pod="openshift-must-gather-46lhs/perf-node-gather-daemonset-wkzht" Apr 23 18:16:26.790237 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:26.790101 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/3dfdc786-62d0-4372-846f-f9ac5736c2ba-podres\") pod \"perf-node-gather-daemonset-wkzht\" (UID: \"3dfdc786-62d0-4372-846f-f9ac5736c2ba\") " pod="openshift-must-gather-46lhs/perf-node-gather-daemonset-wkzht" Apr 23 18:16:26.790315 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:26.790249 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3dfdc786-62d0-4372-846f-f9ac5736c2ba-lib-modules\") pod \"perf-node-gather-daemonset-wkzht\" (UID: \"3dfdc786-62d0-4372-846f-f9ac5736c2ba\") " pod="openshift-must-gather-46lhs/perf-node-gather-daemonset-wkzht" Apr 23 18:16:26.797752 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:26.797730 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x88xf\" (UniqueName: \"kubernetes.io/projected/3dfdc786-62d0-4372-846f-f9ac5736c2ba-kube-api-access-x88xf\") pod \"perf-node-gather-daemonset-wkzht\" (UID: \"3dfdc786-62d0-4372-846f-f9ac5736c2ba\") " pod="openshift-must-gather-46lhs/perf-node-gather-daemonset-wkzht" Apr 23 18:16:26.923522 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:26.923498 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-46lhs/perf-node-gather-daemonset-wkzht" Apr 23 18:16:27.039665 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:27.039630 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-46lhs/perf-node-gather-daemonset-wkzht"] Apr 23 18:16:27.042941 ip-10-0-141-209 kubenswrapper[2570]: W0423 18:16:27.042915 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3dfdc786_62d0_4372_846f_f9ac5736c2ba.slice/crio-537d1b9e39eca5d17f1c96c71463e3d7484a77938567ac8ced862b360ee7e001 WatchSource:0}: Error finding container 537d1b9e39eca5d17f1c96c71463e3d7484a77938567ac8ced862b360ee7e001: Status 404 returned error can't find the container with id 537d1b9e39eca5d17f1c96c71463e3d7484a77938567ac8ced862b360ee7e001 Apr 23 18:16:27.044557 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:27.044540 2570 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 23 18:16:27.458276 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:27.458243 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-46lhs/perf-node-gather-daemonset-wkzht" event={"ID":"3dfdc786-62d0-4372-846f-f9ac5736c2ba","Type":"ContainerStarted","Data":"ceeb3787502ac02f3a5d9a336c39b8a2ed483d333f5672fc97bac21c3822047f"} Apr 23 18:16:27.458448 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:27.458279 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-46lhs/perf-node-gather-daemonset-wkzht" event={"ID":"3dfdc786-62d0-4372-846f-f9ac5736c2ba","Type":"ContainerStarted","Data":"537d1b9e39eca5d17f1c96c71463e3d7484a77938567ac8ced862b360ee7e001"} Apr 23 18:16:27.458448 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:27.458359 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-must-gather-46lhs/perf-node-gather-daemonset-wkzht" Apr 23 18:16:27.476607 ip-10-0-141-209 
kubenswrapper[2570]: I0423 18:16:27.476565 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-46lhs/perf-node-gather-daemonset-wkzht" podStartSLOduration=1.4765533450000001 podStartE2EDuration="1.476553345s" podCreationTimestamp="2026-04-23 18:16:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:16:27.475046763 +0000 UTC m=+1443.415060927" watchObservedRunningTime="2026-04-23 18:16:27.476553345 +0000 UTC m=+1443.416567485" Apr 23 18:16:27.499884 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:27.499865 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-mfdbd_912b083f-5a59-4119-91fd-47ba37c5ed53/dns/0.log" Apr 23 18:16:27.523499 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:27.523479 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-mfdbd_912b083f-5a59-4119-91fd-47ba37c5ed53/kube-rbac-proxy/0.log" Apr 23 18:16:27.619099 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:27.619069 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-ktkpp_4a1d3fd4-24e2-48b9-866d-797b76b07e9e/dns-node-resolver/0.log" Apr 23 18:16:28.067639 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:28.067616 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-7nkkq_65fafb1b-d34c-41dc-86f9-c06f2cf0487e/node-ca/0.log" Apr 23 18:16:28.887380 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:28.887351 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-64b76f7768-4tktg_f9384d4c-b454-4102-b989-7bd167cee9f4/router/0.log" Apr 23 18:16:29.233513 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:29.233482 2570 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ingress-canary_ingress-canary-c94n2_0da25b59-22b1-4319-b795-3e7d2bc7db04/serve-healthcheck-canary/0.log" Apr 23 18:16:29.633677 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:29.633609 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-585dfdc468-hd2z2_1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba/insights-operator/0.log" Apr 23 18:16:29.634009 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:29.633991 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-585dfdc468-hd2z2_1c3ce08c-9d65-4f82-9c9d-40c3d82c4bba/insights-operator/1.log" Apr 23 18:16:29.657903 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:29.657886 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-6np7g_39d1dfb2-3f50-4093-890b-69aef045ebb2/kube-rbac-proxy/0.log" Apr 23 18:16:29.681708 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:29.681692 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-6np7g_39d1dfb2-3f50-4093-890b-69aef045ebb2/exporter/0.log" Apr 23 18:16:29.703704 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:29.703683 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-6np7g_39d1dfb2-3f50-4093-890b-69aef045ebb2/extractor/0.log" Apr 23 18:16:31.773846 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:31.773808 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_llmisvc-controller-manager-68cc5db7c4-b4x7l_85461369-9b65-43ee-a980-6a0a2902cc7d/manager/0.log" Apr 23 18:16:31.884887 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:31.884864 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_odh-model-controller-696fc77849-rncdc_44b89321-8c0d-460f-8069-57c2b230be36/manager/0.log" Apr 23 18:16:31.937188 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:31.937167 2570 
log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_seaweedfs-86cc847c5c-ws688_c3320332-0da7-4f06-825a-e7c0d6f944eb/seaweedfs/0.log" Apr 23 18:16:33.472405 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:33.472377 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-46lhs/perf-node-gather-daemonset-wkzht" Apr 23 18:16:36.145796 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:36.145735 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-6769c5d45-vtfgg_25d8ee46-11e2-460e-9417-93a2def9b519/kube-storage-version-migrator-operator/1.log" Apr 23 18:16:36.146813 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:36.146795 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-6769c5d45-vtfgg_25d8ee46-11e2-460e-9417-93a2def9b519/kube-storage-version-migrator-operator/0.log" Apr 23 18:16:37.249631 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:37.249598 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8pvs8_60e98a1f-ca0f-4e88-883e-76057a6fcbe8/kube-multus/0.log" Apr 23 18:16:37.622111 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:37.622045 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-w6774_199c71c7-b484-42bc-b6a9-7f390ddb6768/kube-multus-additional-cni-plugins/0.log" Apr 23 18:16:37.648116 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:37.648089 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-w6774_199c71c7-b484-42bc-b6a9-7f390ddb6768/egress-router-binary-copy/0.log" Apr 23 18:16:37.671741 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:37.671719 2570 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-w6774_199c71c7-b484-42bc-b6a9-7f390ddb6768/cni-plugins/0.log" Apr 23 18:16:37.693799 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:37.693779 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-w6774_199c71c7-b484-42bc-b6a9-7f390ddb6768/bond-cni-plugin/0.log" Apr 23 18:16:37.717752 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:37.717730 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-w6774_199c71c7-b484-42bc-b6a9-7f390ddb6768/routeoverride-cni/0.log" Apr 23 18:16:37.739351 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:37.739331 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-w6774_199c71c7-b484-42bc-b6a9-7f390ddb6768/whereabouts-cni-bincopy/0.log" Apr 23 18:16:37.765142 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:37.765124 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-w6774_199c71c7-b484-42bc-b6a9-7f390ddb6768/whereabouts-cni/0.log" Apr 23 18:16:37.792873 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:37.792853 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-757d7_c339fa02-2445-42c2-b7ee-5388fb338129/network-metrics-daemon/0.log" Apr 23 18:16:37.812772 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:37.812751 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-757d7_c339fa02-2445-42c2-b7ee-5388fb338129/kube-rbac-proxy/0.log" Apr 23 18:16:38.638626 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:38.638594 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8tv94_03c738ad-ed27-446e-9deb-dd610cedd26f/ovn-controller/0.log" Apr 23 18:16:38.664825 ip-10-0-141-209 kubenswrapper[2570]: 
I0423 18:16:38.664802 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8tv94_03c738ad-ed27-446e-9deb-dd610cedd26f/ovn-acl-logging/0.log" Apr 23 18:16:38.684149 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:38.684113 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8tv94_03c738ad-ed27-446e-9deb-dd610cedd26f/kube-rbac-proxy-node/0.log" Apr 23 18:16:38.706676 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:38.706657 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8tv94_03c738ad-ed27-446e-9deb-dd610cedd26f/kube-rbac-proxy-ovn-metrics/0.log" Apr 23 18:16:38.729363 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:38.729338 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8tv94_03c738ad-ed27-446e-9deb-dd610cedd26f/northd/0.log" Apr 23 18:16:38.755146 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:38.755127 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8tv94_03c738ad-ed27-446e-9deb-dd610cedd26f/nbdb/0.log" Apr 23 18:16:38.776022 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:38.776007 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8tv94_03c738ad-ed27-446e-9deb-dd610cedd26f/sbdb/0.log" Apr 23 18:16:38.853213 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:38.853192 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8tv94_03c738ad-ed27-446e-9deb-dd610cedd26f/ovnkube-controller/0.log" Apr 23 18:16:40.605795 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:40.605768 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-8894fc9bd-b5c9q_b7a4b38b-7cea-453f-b021-f118b3260fb4/check-endpoints/0.log" Apr 23 18:16:40.645099 ip-10-0-141-209 
kubenswrapper[2570]: I0423 18:16:40.645072 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-target-jhssk_2618335a-7a85-4193-a71d-15eaab4cb7f1/network-check-target-container/0.log" Apr 23 18:16:41.828262 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:41.828233 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_iptables-alerter-np628_28dae391-d055-4f84-b269-f0b74f3ff97c/iptables-alerter/0.log" Apr 23 18:16:42.577484 ip-10-0-141-209 kubenswrapper[2570]: I0423 18:16:42.577458 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-2w9gd_2d4343c2-343d-401d-99d8-75998e07c483/tuned/0.log"