Apr 17 10:15:20.568904 ip-10-0-136-48 systemd[1]: kubelet.service: Failed to load environment files: No such file or directory
Apr 17 10:15:20.568916 ip-10-0-136-48 systemd[1]: kubelet.service: Failed to run 'start-pre' task: No such file or directory
Apr 17 10:15:20.568925 ip-10-0-136-48 systemd[1]: kubelet.service: Failed with result 'resources'.
Apr 17 10:15:20.569165 ip-10-0-136-48 systemd[1]: Failed to start Kubernetes Kubelet.
Apr 17 10:15:30.686231 ip-10-0-136-48 systemd[1]: kubelet.service: Failed to schedule restart job: Unit crio.service not found.
Apr 17 10:15:30.686253 ip-10-0-136-48 systemd[1]: kubelet.service: Failed with result 'resources'.
-- Boot aae7578e2be447bcb84c853cbac893a1 --
Apr 17 10:18:03.210626 ip-10-0-136-48 systemd[1]: Starting Kubernetes Kubelet...
Apr 17 10:18:03.701489 ip-10-0-136-48 kubenswrapper[2569]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 17 10:18:03.701489 ip-10-0-136-48 kubenswrapper[2569]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Apr 17 10:18:03.701489 ip-10-0-136-48 kubenswrapper[2569]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 17 10:18:03.701489 ip-10-0-136-48 kubenswrapper[2569]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 17 10:18:03.701489 ip-10-0-136-48 kubenswrapper[2569]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 17 10:18:03.704605 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.704540 2569 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 10:18:03.707649 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707635 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 17 10:18:03.707649 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707649 2569 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 17 10:18:03.707711 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707653 2569 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 17 10:18:03.707711 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707656 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 17 10:18:03.707711 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707660 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 17 10:18:03.707711 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707663 2569 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 17 10:18:03.707711 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707667 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 17 10:18:03.707711 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707669 2569 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 17 10:18:03.707711 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707672 2569 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 17 10:18:03.707711 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707675 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 17 10:18:03.707711 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707678 2569 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 17 10:18:03.707711 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707680 2569 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 17 10:18:03.707711 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707683 2569 feature_gate.go:328] unrecognized feature gate: Example Apr 17 10:18:03.707711 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707689 2569 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 17 10:18:03.707711 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707692 2569 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 17 10:18:03.707711 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707695 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 17 10:18:03.707711 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707698 2569 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 17 10:18:03.707711 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707700 2569 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 17 10:18:03.707711 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707703 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 17 10:18:03.707711 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707706 2569 feature_gate.go:328] unrecognized feature gate: Example2 Apr 17 10:18:03.707711 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707709 2569 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 17 10:18:03.707711 ip-10-0-136-48 
kubenswrapper[2569]: W0417 10:18:03.707711 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 17 10:18:03.708183 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707715 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 17 10:18:03.708183 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707718 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 17 10:18:03.708183 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707720 2569 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 17 10:18:03.708183 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707723 2569 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 17 10:18:03.708183 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707726 2569 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 17 10:18:03.708183 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707729 2569 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 17 10:18:03.708183 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707731 2569 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 17 10:18:03.708183 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707734 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 17 10:18:03.708183 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707736 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 17 10:18:03.708183 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707740 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 17 10:18:03.708183 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707743 2569 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 17 10:18:03.708183 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707745 2569 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 17 10:18:03.708183 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707748 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 17 10:18:03.708183 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707750 2569 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 17 10:18:03.708183 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707753 2569 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 17 10:18:03.708183 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707756 2569 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 17 10:18:03.708183 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707758 2569 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 17 10:18:03.708183 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707761 2569 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 17 10:18:03.708183 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707764 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 17 10:18:03.708183 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707766 2569 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 17 10:18:03.708731 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707769 2569 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 17 10:18:03.708731 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707772 2569 feature_gate.go:328] unrecognized feature 
gate: NewOLMPreflightPermissionChecks Apr 17 10:18:03.708731 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707774 2569 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 17 10:18:03.708731 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707776 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 17 10:18:03.708731 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707779 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 17 10:18:03.708731 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707781 2569 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 17 10:18:03.708731 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707784 2569 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 17 10:18:03.708731 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707786 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 17 10:18:03.708731 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707789 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 17 10:18:03.708731 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707791 2569 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 17 10:18:03.708731 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707794 2569 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 17 10:18:03.708731 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707796 2569 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 17 10:18:03.708731 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707799 2569 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 17 10:18:03.708731 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707802 2569 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 17 10:18:03.708731 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707805 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 17 10:18:03.708731 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707808 2569 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 17 10:18:03.708731 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707810 2569 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 17 10:18:03.708731 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707813 2569 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 17 10:18:03.708731 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707817 2569 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Apr 17 10:18:03.709205 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707820 2569 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 17 10:18:03.709205 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707823 2569 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 17 10:18:03.709205 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707826 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 17 10:18:03.709205 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707829 2569 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 17 10:18:03.709205 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707833 2569 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. 
It will be removed in a future release. Apr 17 10:18:03.709205 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707836 2569 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 17 10:18:03.709205 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707839 2569 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 17 10:18:03.709205 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707842 2569 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 17 10:18:03.709205 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707845 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 17 10:18:03.709205 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707848 2569 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 17 10:18:03.709205 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707851 2569 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 17 10:18:03.709205 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707853 2569 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 17 10:18:03.709205 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707857 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 17 10:18:03.709205 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707860 2569 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 17 10:18:03.709205 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707862 2569 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 17 10:18:03.709205 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707865 2569 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 17 10:18:03.709205 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707867 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 17 10:18:03.709205 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707870 2569 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 17 10:18:03.709205 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707872 2569 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 17 10:18:03.709669 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707875 2569 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 17 10:18:03.709669 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707878 2569 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 17 10:18:03.709669 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707881 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 17 10:18:03.709669 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707883 2569 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 17 10:18:03.709669 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707886 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 17 10:18:03.709669 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.707889 2569 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 17 10:18:03.709669 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708259 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 17 10:18:03.709669 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708265 2569 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 17 10:18:03.709669 ip-10-0-136-48 
kubenswrapper[2569]: W0417 10:18:03.708268 2569 feature_gate.go:328] unrecognized feature gate: Example2 Apr 17 10:18:03.709669 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708271 2569 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 17 10:18:03.709669 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708274 2569 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 17 10:18:03.709669 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708277 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 17 10:18:03.709669 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708279 2569 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 17 10:18:03.709669 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708282 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 17 10:18:03.709669 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708285 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 17 10:18:03.709669 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708287 2569 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 17 10:18:03.709669 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708290 2569 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 17 10:18:03.709669 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708294 2569 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Apr 17 10:18:03.709669 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708298 2569 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 17 10:18:03.709669 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708301 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 17 10:18:03.710151 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708304 2569 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 17 10:18:03.710151 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708306 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 17 10:18:03.710151 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708309 2569 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 17 10:18:03.710151 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708312 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 17 10:18:03.710151 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708314 2569 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 17 10:18:03.710151 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708317 2569 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 17 10:18:03.710151 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708320 2569 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Apr 17 10:18:03.710151 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708323 2569 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 17 10:18:03.710151 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708326 2569 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 17 10:18:03.710151 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708328 2569 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 17 10:18:03.710151 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708331 2569 feature_gate.go:328] unrecognized feature gate: Example Apr 17 10:18:03.710151 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708333 2569 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 17 10:18:03.710151 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708336 2569 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 17 10:18:03.710151 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708338 2569 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 17 10:18:03.710151 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708341 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 17 10:18:03.710151 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708343 2569 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 17 10:18:03.710151 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708346 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 17 10:18:03.710151 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708348 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 17 10:18:03.710151 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708351 2569 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 17 10:18:03.710638 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708368 2569 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 17 10:18:03.710638 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708371 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 17 10:18:03.710638 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708373 2569 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 17 10:18:03.710638 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708376 2569 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 17 10:18:03.710638 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708379 2569 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 17 10:18:03.710638 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708381 2569 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 17 10:18:03.710638 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708384 2569 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 17 10:18:03.710638 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708387 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 17 10:18:03.710638 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708390 2569 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 17 10:18:03.710638 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708393 2569 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 17 10:18:03.710638 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708396 2569 feature_gate.go:328] unrecognized 
feature gate: MultiArchInstallAzure Apr 17 10:18:03.710638 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708398 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 17 10:18:03.710638 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708401 2569 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 17 10:18:03.710638 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708404 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 17 10:18:03.710638 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708407 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 17 10:18:03.710638 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708409 2569 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 17 10:18:03.710638 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708412 2569 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 17 10:18:03.710638 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708415 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 17 10:18:03.710638 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708417 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 17 10:18:03.710638 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708419 2569 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 17 10:18:03.711153 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708422 2569 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 17 10:18:03.711153 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708424 2569 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 17 10:18:03.711153 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708427 2569 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 17 10:18:03.711153 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708429 2569 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 17 10:18:03.711153 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708432 2569 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 17 10:18:03.711153 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708434 2569 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 17 10:18:03.711153 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708437 2569 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 17 10:18:03.711153 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708439 2569 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 17 10:18:03.711153 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708442 2569 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 17 10:18:03.711153 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708445 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 17 10:18:03.711153 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708447 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 17 10:18:03.711153 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708449 2569 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 17 10:18:03.711153 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708453 2569 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 17 10:18:03.711153 ip-10-0-136-48 
kubenswrapper[2569]: W0417 10:18:03.708456 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 17 10:18:03.711153 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708459 2569 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 17 10:18:03.711153 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708461 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 17 10:18:03.711153 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708465 2569 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 17 10:18:03.711153 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708467 2569 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 17 10:18:03.711153 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708470 2569 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 17 10:18:03.711627 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708473 2569 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 17 10:18:03.711627 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708475 2569 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 17 10:18:03.711627 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708477 2569 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 17 10:18:03.711627 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708480 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 17 10:18:03.711627 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708482 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 17 10:18:03.711627 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708485 2569 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 17 10:18:03.711627 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708489 2569 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 17 10:18:03.711627 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708491 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 17 10:18:03.711627 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708494 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 17 10:18:03.711627 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708496 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 17 10:18:03.711627 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708499 2569 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 17 10:18:03.711627 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708501 2569 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 17 10:18:03.711627 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708504 2569 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 17 10:18:03.711627 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.708506 2569 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 17 10:18:03.711627 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710205 2569 flags.go:64] FLAG: --address="0.0.0.0" Apr 17 10:18:03.711627 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710214 2569 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Apr 17 10:18:03.711627 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710221 2569 flags.go:64] FLAG: --anonymous-auth="true" Apr 17 10:18:03.711627 ip-10-0-136-48 
kubenswrapper[2569]: I0417 10:18:03.710225 2569 flags.go:64] FLAG: --application-metrics-count-limit="100" Apr 17 10:18:03.711627 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710229 2569 flags.go:64] FLAG: --authentication-token-webhook="false" Apr 17 10:18:03.711627 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710233 2569 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Apr 17 10:18:03.711627 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710237 2569 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Apr 17 10:18:03.712135 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710241 2569 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Apr 17 10:18:03.712135 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710244 2569 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Apr 17 10:18:03.712135 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710247 2569 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Apr 17 10:18:03.712135 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710251 2569 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Apr 17 10:18:03.712135 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710255 2569 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Apr 17 10:18:03.712135 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710258 2569 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Apr 17 10:18:03.712135 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710261 2569 flags.go:64] FLAG: --cgroup-root="" Apr 17 10:18:03.712135 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710264 2569 flags.go:64] FLAG: --cgroups-per-qos="true" Apr 17 10:18:03.712135 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710267 2569 flags.go:64] FLAG: --client-ca-file="" Apr 17 10:18:03.712135 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710270 2569 flags.go:64] FLAG: --cloud-config="" Apr 17 10:18:03.712135 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710273 2569 flags.go:64] FLAG: --cloud-provider="external" Apr 17 10:18:03.712135 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710276 2569 flags.go:64] FLAG: --cluster-dns="[]" Apr 17 10:18:03.712135 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710280 2569 flags.go:64] FLAG: --cluster-domain="" Apr 17 10:18:03.712135 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710283 2569 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Apr 17 10:18:03.712135 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710286 2569 flags.go:64] FLAG: --config-dir="" Apr 17 10:18:03.712135 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710289 2569 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Apr 17 10:18:03.712135 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710292 2569 flags.go:64] FLAG: --container-log-max-files="5" Apr 17 10:18:03.712135 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710296 2569 flags.go:64] FLAG: --container-log-max-size="10Mi" Apr 17 10:18:03.712135 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710299 2569 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Apr 17 10:18:03.712135 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710302 2569 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Apr 17 10:18:03.712135 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710306 2569 flags.go:64] FLAG: --containerd-namespace="k8s.io" Apr 17 10:18:03.712135 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710309 
2569 flags.go:64] FLAG: --contention-profiling="false" Apr 17 10:18:03.712135 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710312 2569 flags.go:64] FLAG: --cpu-cfs-quota="true" Apr 17 10:18:03.712135 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710315 2569 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Apr 17 10:18:03.712713 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710318 2569 flags.go:64] FLAG: --cpu-manager-policy="none" Apr 17 10:18:03.712713 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710321 2569 flags.go:64] FLAG: --cpu-manager-policy-options="" Apr 17 10:18:03.712713 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710325 2569 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Apr 17 10:18:03.712713 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710328 2569 flags.go:64] FLAG: --enable-controller-attach-detach="true" Apr 17 10:18:03.712713 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710330 2569 flags.go:64] FLAG: --enable-debugging-handlers="true" Apr 17 10:18:03.712713 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710333 2569 flags.go:64] FLAG: --enable-load-reader="false" Apr 17 10:18:03.712713 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710336 2569 flags.go:64] FLAG: --enable-server="true" Apr 17 10:18:03.712713 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710340 2569 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Apr 17 10:18:03.712713 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710344 2569 flags.go:64] FLAG: --event-burst="100" Apr 17 10:18:03.712713 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710347 2569 flags.go:64] FLAG: --event-qps="50" Apr 17 10:18:03.712713 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710350 2569 flags.go:64] FLAG: --event-storage-age-limit="default=0" Apr 17 10:18:03.712713 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710362 2569 flags.go:64] FLAG: --event-storage-event-limit="default=0" Apr 17 10:18:03.712713 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710365 2569 flags.go:64] FLAG: --eviction-hard="" Apr 17 10:18:03.712713 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710369 2569 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Apr 17 10:18:03.712713 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710372 2569 flags.go:64] FLAG: --eviction-minimum-reclaim="" Apr 17 10:18:03.712713 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710375 2569 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Apr 17 10:18:03.712713 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710378 2569 flags.go:64] FLAG: --eviction-soft="" Apr 17 10:18:03.712713 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710381 2569 flags.go:64] FLAG: --eviction-soft-grace-period="" Apr 17 10:18:03.712713 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710384 2569 flags.go:64] FLAG: --exit-on-lock-contention="false" Apr 17 10:18:03.712713 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710387 2569 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Apr 17 10:18:03.712713 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710390 2569 flags.go:64] FLAG: --experimental-mounter-path="" Apr 17 10:18:03.712713 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710393 2569 flags.go:64] FLAG: --fail-cgroupv1="false" Apr 17 10:18:03.712713 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710396 2569 flags.go:64] FLAG: --fail-swap-on="true" Apr 17 10:18:03.712713 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710399 2569 flags.go:64] 
FLAG: --feature-gates="" Apr 17 10:18:03.712713 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710403 2569 flags.go:64] FLAG: --file-check-frequency="20s" Apr 17 10:18:03.713312 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710405 2569 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Apr 17 10:18:03.713312 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710408 2569 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Apr 17 10:18:03.713312 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710412 2569 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Apr 17 10:18:03.713312 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710415 2569 flags.go:64] FLAG: --healthz-port="10248" Apr 17 10:18:03.713312 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710418 2569 flags.go:64] FLAG: --help="false" Apr 17 10:18:03.713312 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710421 2569 flags.go:64] FLAG: --hostname-override="ip-10-0-136-48.ec2.internal" Apr 17 10:18:03.713312 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710425 2569 flags.go:64] FLAG: --housekeeping-interval="10s" Apr 17 10:18:03.713312 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710428 2569 flags.go:64] FLAG: --http-check-frequency="20s" Apr 17 10:18:03.713312 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710431 2569 flags.go:64] FLAG: --image-credential-provider-bin-dir="/usr/libexec/kubelet-image-credential-provider-plugins" Apr 17 10:18:03.713312 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710434 2569 flags.go:64] FLAG: --image-credential-provider-config="/etc/kubernetes/credential-providers/ecr-credential-provider.yaml" Apr 17 10:18:03.713312 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710437 2569 flags.go:64] FLAG: --image-gc-high-threshold="85" Apr 17 10:18:03.713312 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710440 2569 flags.go:64] FLAG: --image-gc-low-threshold="80" Apr 17 10:18:03.713312 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710444 2569 flags.go:64] FLAG: --image-service-endpoint="" Apr 17 10:18:03.713312 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710446 2569 flags.go:64] FLAG: --kernel-memcg-notification="false" Apr 17 10:18:03.713312 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710450 2569 flags.go:64] FLAG: --kube-api-burst="100" Apr 17 10:18:03.713312 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710453 2569 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Apr 17 10:18:03.713312 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710456 2569 flags.go:64] FLAG: --kube-api-qps="50" Apr 17 10:18:03.713312 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710459 2569 flags.go:64] FLAG: --kube-reserved="" Apr 17 10:18:03.713312 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710461 2569 flags.go:64] FLAG: --kube-reserved-cgroup="" Apr 17 10:18:03.713312 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710464 2569 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Apr 17 10:18:03.713312 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710467 2569 flags.go:64] FLAG: --kubelet-cgroups="" Apr 17 10:18:03.713312 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710470 2569 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Apr 17 10:18:03.713312 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710473 2569 flags.go:64] FLAG: --lock-file="" Apr 17 10:18:03.713312 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710476 2569 flags.go:64] FLAG: 
--log-cadvisor-usage="false" Apr 17 10:18:03.713971 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710479 2569 flags.go:64] FLAG: --log-flush-frequency="5s" Apr 17 10:18:03.713971 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710482 2569 flags.go:64] FLAG: --log-json-info-buffer-size="0" Apr 17 10:18:03.713971 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710486 2569 flags.go:64] FLAG: --log-json-split-stream="false" Apr 17 10:18:03.713971 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710489 2569 flags.go:64] FLAG: --log-text-info-buffer-size="0" Apr 17 10:18:03.713971 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710492 2569 flags.go:64] FLAG: --log-text-split-stream="false" Apr 17 10:18:03.713971 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710495 2569 flags.go:64] FLAG: --logging-format="text" Apr 17 10:18:03.713971 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710498 2569 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Apr 17 10:18:03.713971 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710501 2569 flags.go:64] FLAG: --make-iptables-util-chains="true" Apr 17 10:18:03.713971 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710504 2569 flags.go:64] FLAG: --manifest-url="" Apr 17 10:18:03.713971 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710507 2569 flags.go:64] FLAG: --manifest-url-header="" Apr 17 10:18:03.713971 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710511 2569 flags.go:64] FLAG: --max-housekeeping-interval="15s" Apr 17 10:18:03.713971 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710515 2569 flags.go:64] FLAG: --max-open-files="1000000" Apr 17 10:18:03.713971 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710519 2569 flags.go:64] FLAG: --max-pods="110" Apr 17 10:18:03.713971 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710522 2569 flags.go:64] FLAG: --maximum-dead-containers="-1" Apr 17 10:18:03.713971 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710525 2569 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Apr 17 10:18:03.713971 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710527 2569 flags.go:64] FLAG: --memory-manager-policy="None" Apr 17 10:18:03.713971 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710530 2569 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Apr 17 10:18:03.713971 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710533 2569 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Apr 17 10:18:03.713971 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710536 2569 flags.go:64] FLAG: --node-ip="0.0.0.0" Apr 17 10:18:03.713971 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710539 2569 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhel" Apr 17 10:18:03.713971 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710547 2569 flags.go:64] FLAG: --node-status-max-images="50" Apr 17 10:18:03.713971 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710550 2569 flags.go:64] FLAG: --node-status-update-frequency="10s" Apr 17 10:18:03.713971 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710553 2569 flags.go:64] FLAG: --oom-score-adj="-999" Apr 17 10:18:03.713971 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710556 2569 flags.go:64] FLAG: --pod-cidr="" Apr 17 10:18:03.714569 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710559 2569 flags.go:64] FLAG: 
--pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8cfe89231412ff3ee8cb6207fa0be33cad0f08e88c9c0f1e9f7e8c6f14d6715" Apr 17 10:18:03.714569 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710565 2569 flags.go:64] FLAG: --pod-manifest-path="" Apr 17 10:18:03.714569 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710568 2569 flags.go:64] FLAG: --pod-max-pids="-1" Apr 17 10:18:03.714569 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710571 2569 flags.go:64] FLAG: --pods-per-core="0" Apr 17 10:18:03.714569 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710574 2569 flags.go:64] FLAG: --port="10250" Apr 17 10:18:03.714569 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710577 2569 flags.go:64] FLAG: --protect-kernel-defaults="false" Apr 17 10:18:03.714569 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710580 2569 flags.go:64] FLAG: --provider-id="aws:///us-east-1a/i-03e29fb8ad2784c9b" Apr 17 10:18:03.714569 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710583 2569 flags.go:64] FLAG: --qos-reserved="" Apr 17 10:18:03.714569 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710586 2569 flags.go:64] FLAG: --read-only-port="10255" Apr 17 10:18:03.714569 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710589 2569 flags.go:64] FLAG: --register-node="true" Apr 17 10:18:03.714569 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710592 2569 flags.go:64] FLAG: --register-schedulable="true" Apr 17 10:18:03.714569 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710595 2569 flags.go:64] FLAG: --register-with-taints="" Apr 17 10:18:03.714569 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710598 2569 flags.go:64] FLAG: --registry-burst="10" Apr 17 10:18:03.714569 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710601 2569 flags.go:64] FLAG: --registry-qps="5" Apr 17 10:18:03.714569 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710604 2569 flags.go:64] FLAG: --reserved-cpus="" Apr 17 10:18:03.714569 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710607 2569 flags.go:64] FLAG: --reserved-memory="" Apr 17 10:18:03.714569 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710610 2569 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Apr 17 10:18:03.714569 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710613 2569 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Apr 17 10:18:03.714569 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710616 2569 flags.go:64] FLAG: --rotate-certificates="false" Apr 17 10:18:03.714569 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710619 2569 flags.go:64] FLAG: --rotate-server-certificates="false" Apr 17 10:18:03.714569 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710622 2569 flags.go:64] FLAG: --runonce="false" Apr 17 10:18:03.714569 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710625 2569 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Apr 17 10:18:03.714569 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710628 2569 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Apr 17 10:18:03.714569 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710631 2569 flags.go:64] FLAG: --seccomp-default="false" Apr 17 10:18:03.714569 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710634 2569 flags.go:64] FLAG: --serialize-image-pulls="true" Apr 17 10:18:03.715158 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710637 2569 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Apr 17 10:18:03.715158 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710640 2569 
flags.go:64] FLAG: --storage-driver-db="cadvisor" Apr 17 10:18:03.715158 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710643 2569 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Apr 17 10:18:03.715158 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710646 2569 flags.go:64] FLAG: --storage-driver-password="root" Apr 17 10:18:03.715158 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710650 2569 flags.go:64] FLAG: --storage-driver-secure="false" Apr 17 10:18:03.715158 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710653 2569 flags.go:64] FLAG: --storage-driver-table="stats" Apr 17 10:18:03.715158 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710655 2569 flags.go:64] FLAG: --storage-driver-user="root" Apr 17 10:18:03.715158 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710658 2569 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Apr 17 10:18:03.715158 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710661 2569 flags.go:64] FLAG: --sync-frequency="1m0s" Apr 17 10:18:03.715158 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710665 2569 flags.go:64] FLAG: --system-cgroups="" Apr 17 10:18:03.715158 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710667 2569 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Apr 17 10:18:03.715158 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710672 2569 flags.go:64] FLAG: --system-reserved-cgroup="" Apr 17 10:18:03.715158 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710675 2569 flags.go:64] FLAG: --tls-cert-file="" Apr 17 10:18:03.715158 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710678 2569 flags.go:64] FLAG: --tls-cipher-suites="[]" Apr 17 10:18:03.715158 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710681 2569 flags.go:64] FLAG: --tls-min-version="" Apr 17 10:18:03.715158 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710684 2569 flags.go:64] FLAG: --tls-private-key-file="" Apr 17 10:18:03.715158 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710687 2569 flags.go:64] FLAG: --topology-manager-policy="none" Apr 17 10:18:03.715158 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710690 2569 flags.go:64] FLAG: --topology-manager-policy-options="" Apr 17 10:18:03.715158 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710693 2569 flags.go:64] FLAG: --topology-manager-scope="container" Apr 17 10:18:03.715158 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710695 2569 flags.go:64] FLAG: --v="2" Apr 17 10:18:03.715158 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710700 2569 flags.go:64] FLAG: --version="false" Apr 17 10:18:03.715158 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710704 2569 flags.go:64] FLAG: --vmodule="" Apr 17 10:18:03.715158 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710708 2569 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Apr 17 10:18:03.715158 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.710711 2569 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Apr 17 10:18:03.715158 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710805 2569 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 17 10:18:03.715796 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710808 2569 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 17 10:18:03.715796 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710811 2569 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 17 10:18:03.715796 ip-10-0-136-48 
kubenswrapper[2569]: W0417 10:18:03.710814 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 17 10:18:03.715796 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710818 2569 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 17 10:18:03.715796 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710821 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 17 10:18:03.715796 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710823 2569 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 17 10:18:03.715796 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710826 2569 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 17 10:18:03.715796 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710828 2569 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 17 10:18:03.715796 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710831 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 17 10:18:03.715796 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710833 2569 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 17 10:18:03.715796 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710836 2569 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 17 10:18:03.715796 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710839 2569 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 17 10:18:03.715796 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710841 2569 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 17 10:18:03.715796 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710844 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 17 10:18:03.715796 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710847 2569 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 17 10:18:03.715796 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710850 2569 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Apr 17 10:18:03.715796 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710854 2569 feature_gate.go:328] unrecognized feature gate: Example2 Apr 17 10:18:03.715796 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710856 2569 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 17 10:18:03.715796 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710859 2569 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 17 10:18:03.716252 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710861 2569 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 17 10:18:03.716252 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710864 2569 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 17 10:18:03.716252 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710867 2569 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 17 10:18:03.716252 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710869 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 17 10:18:03.716252 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710872 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 17 10:18:03.716252 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710874 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 17 10:18:03.716252 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710877 2569 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 17 10:18:03.716252 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710879 2569 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 17 10:18:03.716252 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710882 2569 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 17 10:18:03.716252 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710884 2569 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 17 10:18:03.716252 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710887 2569 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 17 10:18:03.716252 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710889 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 17 10:18:03.716252 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710892 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 17 10:18:03.716252 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710894 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 17 10:18:03.716252 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710897 2569 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 17 10:18:03.716252 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710899 2569 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 17 10:18:03.716252 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710902 2569 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 17 10:18:03.716252 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710905 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 17 10:18:03.716252 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710907 2569 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 17 10:18:03.716783 ip-10-0-136-48 kubenswrapper[2569]: W0417 
10:18:03.710909 2569 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 17 10:18:03.716783 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710912 2569 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 17 10:18:03.716783 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710914 2569 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 17 10:18:03.716783 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710917 2569 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 17 10:18:03.716783 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710920 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 17 10:18:03.716783 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710926 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 17 10:18:03.716783 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710928 2569 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 17 10:18:03.716783 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710931 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 17 10:18:03.716783 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710934 2569 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 17 10:18:03.716783 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710936 2569 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 17 10:18:03.716783 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710939 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 17 10:18:03.716783 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710942 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 17 10:18:03.716783 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710944 2569 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 17 10:18:03.716783 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710947 2569 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 17 10:18:03.716783 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710950 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 17 10:18:03.716783 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710952 2569 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 17 10:18:03.716783 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710955 2569 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 17 10:18:03.716783 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710958 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 17 10:18:03.716783 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710960 2569 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 17 10:18:03.716783 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710963 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 17 10:18:03.717283 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710965 2569 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 17 10:18:03.717283 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710968 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 17 10:18:03.717283 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710972 2569 feature_gate.go:351] Setting GA feature gate 
ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Apr 17 10:18:03.717283 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710975 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 17 10:18:03.717283 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710979 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 17 10:18:03.717283 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710981 2569 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 17 10:18:03.717283 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710984 2569 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 17 10:18:03.717283 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710987 2569 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 17 10:18:03.717283 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710989 2569 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 17 10:18:03.717283 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710992 2569 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 17 10:18:03.717283 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710995 2569 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 17 10:18:03.717283 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.710998 2569 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 17 10:18:03.717283 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.711000 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 17 10:18:03.717283 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.711003 2569 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 17 10:18:03.717283 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.711005 2569 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 17 10:18:03.717283 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.711008 2569 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 17 10:18:03.717283 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.711010 2569 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 17 10:18:03.717283 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.711015 2569 feature_gate.go:328] unrecognized feature gate: Example Apr 17 10:18:03.717283 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.711018 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 17 10:18:03.717784 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.711021 2569 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 17 10:18:03.717784 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.711023 2569 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 17 10:18:03.717784 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.711025 2569 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 17 10:18:03.717784 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.711028 2569 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 17 10:18:03.717784 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.711030 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 17 10:18:03.717784 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.711033 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 17 10:18:03.717784 ip-10-0-136-48 
kubenswrapper[2569]: W0417 10:18:03.711035 2569 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 17 10:18:03.717784 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.711038 2569 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 17 10:18:03.717784 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.711693 2569 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 17 10:18:03.718015 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.717933 2569 server.go:530] "Kubelet version" kubeletVersion="v1.33.9" Apr 17 10:18:03.718015 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.717947 2569 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 10:18:03.718015 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718007 2569 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 17 10:18:03.718015 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718013 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 17 10:18:03.718015 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718017 2569 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 17 10:18:03.718146 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718020 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 17 10:18:03.718146 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718023 2569 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 17 10:18:03.718146 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718026 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 17 10:18:03.718146 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718029 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 17 10:18:03.718146 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718032 2569 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 17 10:18:03.718146 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718034 2569 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 17 10:18:03.718146 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718037 2569 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 17 10:18:03.718146 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718040 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 17 10:18:03.718146 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718043 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 17 10:18:03.718146 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718046 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 17 10:18:03.718146 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718048 2569 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 17 10:18:03.718146 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718051 2569 feature_gate.go:328] unrecognized feature gate: 
InsightsOnDemandDataGather Apr 17 10:18:03.718146 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718054 2569 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 17 10:18:03.718146 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718056 2569 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 17 10:18:03.718146 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718059 2569 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 17 10:18:03.718146 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718061 2569 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 17 10:18:03.718146 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718064 2569 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 17 10:18:03.718146 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718067 2569 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 17 10:18:03.718146 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718069 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 17 10:18:03.718146 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718072 2569 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 17 10:18:03.718663 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718074 2569 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 17 10:18:03.718663 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718077 2569 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 17 10:18:03.718663 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718080 2569 feature_gate.go:328] unrecognized feature gate: Example2 Apr 17 10:18:03.718663 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718083 2569 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 17 10:18:03.718663 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718086 2569 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Apr 17 10:18:03.718663 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718090 2569 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 17 10:18:03.718663 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718093 2569 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 17 10:18:03.718663 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718096 2569 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 17 10:18:03.718663 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718099 2569 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 17 10:18:03.718663 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718101 2569 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 17 10:18:03.718663 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718104 2569 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 17 10:18:03.718663 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718106 2569 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 17 10:18:03.718663 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718109 2569 feature_gate.go:328] unrecognized feature gate: Example Apr 17 10:18:03.718663 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718111 2569 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 17 10:18:03.718663 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718114 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 17 10:18:03.718663 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718116 2569 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 17 10:18:03.718663 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718119 2569 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 17 10:18:03.718663 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718122 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 17 10:18:03.718663 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718124 2569 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 17 10:18:03.718663 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718127 2569 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 17 10:18:03.719225 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718129 2569 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 17 10:18:03.719225 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718132 2569 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 17 10:18:03.719225 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718134 2569 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 17 10:18:03.719225 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718136 2569 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 17 10:18:03.719225 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718139 2569 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 17 10:18:03.719225 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718141 2569 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 17 10:18:03.719225 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718145 2569 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Apr 17 10:18:03.719225 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718150 2569 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 17 10:18:03.719225 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718153 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 17 10:18:03.719225 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718156 2569 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 17 10:18:03.719225 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718159 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 17 10:18:03.719225 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718162 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 17 10:18:03.719225 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718164 2569 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 17 10:18:03.719225 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718168 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 17 10:18:03.719225 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718171 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 17 10:18:03.719225 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718174 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 17 10:18:03.719225 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718176 2569 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 17 10:18:03.719225 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718179 2569 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 17 10:18:03.719225 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718182 2569 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 17 10:18:03.719714 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718184 2569 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 17 10:18:03.719714 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718187 2569 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 17 10:18:03.719714 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718190 2569 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 17 10:18:03.719714 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718192 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 17 10:18:03.719714 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718195 2569 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 17 10:18:03.719714 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718197 2569 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 17 10:18:03.719714 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718201 2569 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 17 10:18:03.719714 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718203 2569 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 17 10:18:03.719714 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718206 2569 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 17 10:18:03.719714 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718208 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 17 10:18:03.719714 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718211 2569 feature_gate.go:328] 
unrecognized feature gate: ManagedBootImagesAWS Apr 17 10:18:03.719714 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718214 2569 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 17 10:18:03.719714 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718216 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 17 10:18:03.719714 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718219 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 17 10:18:03.719714 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718221 2569 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 17 10:18:03.719714 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718224 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 17 10:18:03.719714 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718227 2569 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 17 10:18:03.719714 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718229 2569 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 17 10:18:03.719714 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718232 2569 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 17 10:18:03.720199 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718234 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 17 10:18:03.720199 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718237 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 17 10:18:03.720199 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718239 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 17 10:18:03.720199 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718242 2569 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 17 10:18:03.720199 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718245 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 17 10:18:03.720199 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.718250 2569 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 17 10:18:03.720199 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718350 2569 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 17 10:18:03.720199 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718370 2569 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 17 10:18:03.720199 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718376 2569 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Apr 17 10:18:03.720199 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718380 2569 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 17 10:18:03.720199 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718383 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 17 10:18:03.720199 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718386 2569 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 17 10:18:03.720199 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718389 2569 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 17 10:18:03.720199 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718392 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 17 10:18:03.720199 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718395 2569 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 17 10:18:03.720581 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718397 2569 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 17 10:18:03.720581 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718400 2569 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 17 10:18:03.720581 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718403 2569 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 17 10:18:03.720581 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718406 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 17 10:18:03.720581 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718408 2569 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 17 10:18:03.720581 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718411 2569 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 17 10:18:03.720581 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718414 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 17 10:18:03.720581 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718416 2569 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 17 10:18:03.720581 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718419 2569 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 17 10:18:03.720581 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718421 2569 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 17 10:18:03.720581 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718424 2569 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 17 10:18:03.720581 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718426 2569 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 17 10:18:03.720581 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718429 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 17 10:18:03.720581 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718432 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 17 10:18:03.720581 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718435 2569 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Apr 17 10:18:03.720581 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718439 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 17 10:18:03.720581 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718441 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 17 10:18:03.720581 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718444 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 17 10:18:03.720581 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718446 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 17 10:18:03.721041 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718449 2569 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 17 10:18:03.721041 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718452 2569 feature_gate.go:328] unrecognized feature gate: Example2 Apr 17 10:18:03.721041 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718454 2569 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 17 10:18:03.721041 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718457 2569 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 17 10:18:03.721041 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718459 2569 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 17 10:18:03.721041 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718462 2569 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 17 10:18:03.721041 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718464 2569 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 17 10:18:03.721041 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718467 2569 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 17 10:18:03.721041 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718470 2569 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 17 10:18:03.721041 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718472 2569 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 17 10:18:03.721041 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718475 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 17 10:18:03.721041 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718478 2569 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 17 10:18:03.721041 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718480 2569 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 17 10:18:03.721041 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718483 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 17 10:18:03.721041 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718485 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 17 10:18:03.721041 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718488 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 17 10:18:03.721041 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718490 2569 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 17 10:18:03.721041 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718493 2569 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 17 10:18:03.721041 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718496 2569 feature_gate.go:328] unrecognized feature 
gate: AlibabaPlatform Apr 17 10:18:03.721041 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718499 2569 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 17 10:18:03.721543 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718501 2569 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 17 10:18:03.721543 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718503 2569 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 17 10:18:03.721543 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718506 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 17 10:18:03.721543 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718508 2569 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 17 10:18:03.721543 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718510 2569 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 17 10:18:03.721543 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718513 2569 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 17 10:18:03.721543 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718515 2569 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 17 10:18:03.721543 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718518 2569 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 17 10:18:03.721543 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718521 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 17 10:18:03.721543 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718523 2569 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 17 10:18:03.721543 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718525 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 17 10:18:03.721543 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718528 2569 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 17 10:18:03.721543 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718530 2569 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 17 10:18:03.721543 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718532 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 17 10:18:03.721543 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718535 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 17 10:18:03.721543 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718537 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 17 10:18:03.721543 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718540 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 17 10:18:03.721543 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718542 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 17 10:18:03.721543 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718545 2569 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 17 10:18:03.721992 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718547 2569 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 17 10:18:03.721992 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718550 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 17 10:18:03.721992 ip-10-0-136-48 
kubenswrapper[2569]: W0417 10:18:03.718552 2569 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 17 10:18:03.721992 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718554 2569 feature_gate.go:328] unrecognized feature gate: Example Apr 17 10:18:03.721992 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718557 2569 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 17 10:18:03.721992 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718560 2569 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 17 10:18:03.721992 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718562 2569 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 17 10:18:03.721992 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718565 2569 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 17 10:18:03.721992 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718567 2569 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 17 10:18:03.721992 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718570 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 17 10:18:03.721992 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718573 2569 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 17 10:18:03.721992 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718575 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 17 10:18:03.721992 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718578 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 17 10:18:03.721992 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718580 2569 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 17 10:18:03.721992 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718582 2569 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 17 10:18:03.721992 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718585 2569 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 17 10:18:03.721992 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718587 2569 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 17 10:18:03.721992 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718590 2569 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 17 10:18:03.721992 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:03.718592 2569 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 17 10:18:03.722459 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.718597 2569 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 17 10:18:03.722459 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.719406 2569 server.go:962] "Client rotation is on, will bootstrap in background" Apr 17 10:18:03.724036 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.724022 2569 bootstrap.go:101] "Use the bootstrap credentials 
to request a cert, and set kubeconfig to point to the certificate dir" Apr 17 10:18:03.725108 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.725096 2569 server.go:1019] "Starting client certificate rotation" Apr 17 10:18:03.725202 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.725190 2569 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Apr 17 10:18:03.725237 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.725228 2569 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Apr 17 10:18:03.760174 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.760153 2569 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Apr 17 10:18:03.762841 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.762822 2569 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Apr 17 10:18:03.776836 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.776816 2569 log.go:25] "Validated CRI v1 runtime API" Apr 17 10:18:03.783389 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.783373 2569 log.go:25] "Validated CRI v1 image API" Apr 17 10:18:03.784488 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.784467 2569 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 17 10:18:03.788921 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.788902 2569 fs.go:135] Filesystem UUIDs: map[08443942-b5a8-4877-91c9-32ed288ff59d:/dev/nvme0n1p3 69ae0f2e-7578-4e78-9447-6dfc4227caa8:/dev/nvme0n1p4 7B77-95E7:/dev/nvme0n1p2] Apr 17 10:18:03.788998 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.788919 2569 fs.go:136] Filesystem partitions: map[/dev/nvme0n1p3:{mountpoint:/boot major:259 minor:3 fsType:ext4 blockSize:0} /dev/nvme0n1p4:{mountpoint:/var major:259 minor:4 fsType:xfs blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Apr 17 10:18:03.790077 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.790061 2569 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Apr 17 10:18:03.794157 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.794051 2569 manager.go:217] Machine: {Timestamp:2026-04-17 10:18:03.792685268 +0000 UTC m=+0.451439323 CPUVendorID:GenuineIntel NumCores:8 NumPhysicalCores:4 NumSockets:1 CpuFrequency:3100224 MemoryCapacity:33164492800 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ec21856db4bf0acbacb4d651af43d651 SystemUUID:ec21856d-b4bf-0acb-acb4-d651af43d651 BootID:aae7578e-2be4-47bc-b84c-853cbac893a1 Filesystems:[{Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6103040 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16582246400 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/nvme0n1p3 DeviceMajor:259 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16582246400 Type:vfs 
Inodes:4048400 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6632898560 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/nvme0n1p4 DeviceMajor:259 DeviceMinor:4 Capacity:128243970048 Type:vfs Inodes:62651840 HasInodes:true}] DiskMap:map[259:0:{Name:nvme0n1 Major:259 Minor:0 Size:128849018880 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:02:04:b5:a4:98:c5 Speed:0 Mtu:9001} {Name:ens5 MacAddress:02:04:b5:a4:98:c5 Speed:0 Mtu:9001} {Name:ovs-system MacAddress:a6:55:eb:a3:1e:f5 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33164492800 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 4] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:1 Threads:[1 5] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:2 Threads:[2 6] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:3 Threads:[3 7] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:}] Caches:[{Id:0 Size:37486592 Type:Unified Level:3}] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Apr 17 10:18:03.794157 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.794149 2569 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Apr 17 10:18:03.794285 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.794213 2569 manager.go:233] Version: {KernelVersion:5.14.0-570.107.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20260414-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Apr 17 10:18:03.795658 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.795635 2569 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 10:18:03.795802 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.795661 2569 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-10-0-136-48.ec2.internal","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 10:18:03.795847 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.795812 2569 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 10:18:03.795847 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.795821 2569 container_manager_linux.go:306] "Creating device plugin manager" Apr 17 10:18:03.795847 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.795833 2569 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Apr 17 10:18:03.795928 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.795848 2569 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Apr 17 10:18:03.797219 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.797208 2569 state_mem.go:36] "Initialized new in-memory state store" Apr 17 10:18:03.797321 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.797312 2569 server.go:1267] "Using root directory" path="/var/lib/kubelet" Apr 17 10:18:03.800431 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.800421 2569 kubelet.go:491] "Attempting to sync node with API server" Apr 17 10:18:03.800462 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.800434 2569 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 10:18:03.801230 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.801222 2569 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Apr 17 10:18:03.801264 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.801235 2569 kubelet.go:397] "Adding apiserver pod source" Apr 17 10:18:03.801264 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.801244 2569 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 10:18:03.802265 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.802253 2569 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Apr 17 10:18:03.802319 
ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.802271 2569 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Apr 17 10:18:03.808018 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.808000 2569 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.10-2.rhaos4.20.gita4d0894.el9" apiVersion="v1" Apr 17 10:18:03.809755 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.809733 2569 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 10:18:03.811057 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.811042 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Apr 17 10:18:03.811129 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.811067 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Apr 17 10:18:03.811129 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.811076 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Apr 17 10:18:03.811129 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.811081 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Apr 17 10:18:03.811129 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.811087 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Apr 17 10:18:03.811129 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.811092 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Apr 17 10:18:03.811129 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.811098 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Apr 17 10:18:03.811129 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.811103 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Apr 17 10:18:03.811129 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.811110 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc" Apr 17 10:18:03.811129 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.811117 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Apr 17 10:18:03.811129 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.811129 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Apr 17 10:18:03.811443 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.811138 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Apr 17 10:18:03.812957 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.812946 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Apr 17 10:18:03.812957 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.812957 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Apr 17 10:18:03.814505 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.814485 2569 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-d6hd4" Apr 17 10:18:03.816575 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.816561 2569 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 17 10:18:03.816660 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.816594 2569 server.go:1295] "Started kubelet" Apr 17 10:18:03.816722 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.816677 2569 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 10:18:03.816917 
ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:03.816895 2569 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-136-48.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 10:18:03.816983 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:03.816951 2569 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 10:18:03.817077 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.817060 2569 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-48.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 17 10:18:03.817211 ip-10-0-136-48 systemd[1]: Started Kubernetes Kubelet. Apr 17 10:18:03.817312 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.817211 2569 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 10:18:03.817312 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.817260 2569 server_v1.go:47] "podresources" method="list" useActivePods=true Apr 17 10:18:03.818214 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.818195 2569 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 10:18:03.819246 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.819232 2569 server.go:317] "Adding debug handlers to kubelet server" Apr 17 10:18:03.824263 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.824245 2569 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Apr 17 10:18:03.824511 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:03.823101 2569 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-48.ec2.internal.18a71d905866b451 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-48.ec2.internal,UID:ip-10-0-136-48.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:03.816571985 +0000 UTC m=+0.475326038,LastTimestamp:2026-04-17 10:18:03.816571985 +0000 UTC m=+0.475326038,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:03.824846 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.824820 2569 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 10:18:03.825729 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:03.825533 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-48.ec2.internal\" not found" Apr 17 10:18:03.825729 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.825674 2569 volume_manager.go:295] "The desired_state_of_world populator starts" Apr 17 
10:18:03.826633 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.826458 2569 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 17 10:18:03.826633 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.826545 2569 reconstruct.go:97] "Volume reconstruction finished" Apr 17 10:18:03.826633 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.826555 2569 reconciler.go:26] "Reconciler: start to sync state" Apr 17 10:18:03.826811 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.826643 2569 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 17 10:18:03.828546 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.828519 2569 factory.go:55] Registering systemd factory Apr 17 10:18:03.828633 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.828585 2569 factory.go:223] Registration of the systemd container factory successfully Apr 17 10:18:03.828990 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.828973 2569 factory.go:153] Registering CRI-O factory Apr 17 10:18:03.828990 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.828991 2569 factory.go:223] Registration of the crio container factory successfully Apr 17 10:18:03.829150 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.829030 2569 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Apr 17 10:18:03.829150 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.829048 2569 factory.go:103] Registering Raw factory Apr 17 10:18:03.829150 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:03.828976 2569 kubelet.go:1618] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Apr 17 10:18:03.829150 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.829061 2569 manager.go:1196] Started watching for new ooms in manager Apr 17 10:18:03.829535 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.829518 2569 manager.go:319] Starting recovery of all containers Apr 17 10:18:03.835258 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:03.835113 2569 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 10:18:03.835425 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:03.835153 2569 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-136-48.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Apr 17 10:18:03.840528 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.840511 2569 manager.go:324] Recovery completed Apr 17 10:18:03.844246 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.844233 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 17 10:18:03.846458 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.846434 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientMemory" Apr 17 10:18:03.846534 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.846468 2569 
kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasNoDiskPressure" Apr 17 10:18:03.846534 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.846481 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientPID" Apr 17 10:18:03.847085 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.847068 2569 cpu_manager.go:222] "Starting CPU manager" policy="none" Apr 17 10:18:03.847085 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.847082 2569 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Apr 17 10:18:03.847181 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.847096 2569 state_mem.go:36] "Initialized new in-memory state store" Apr 17 10:18:03.848897 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:03.848837 2569 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-48.ec2.internal.18a71d905a2eaf08 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-48.ec2.internal,UID:ip-10-0-136-48.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-136-48.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:03.846455048 +0000 UTC m=+0.505209102,LastTimestamp:2026-04-17 10:18:03.846455048 +0000 UTC m=+0.505209102,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:03.850025 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.850009 2569 policy_none.go:49] "None policy: Start" Apr 17 10:18:03.850025 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.850025 2569 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 17 10:18:03.850121 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.850035 2569 state_mem.go:35] "Initializing new in-memory state store" Apr 17 10:18:03.856587 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:03.856532 2569 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-48.ec2.internal.18a71d905a2efe0c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-48.ec2.internal,UID:ip-10-0-136-48.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-136-48.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:03.846475276 +0000 UTC m=+0.505229329,LastTimestamp:2026-04-17 10:18:03.846475276 +0000 UTC m=+0.505229329,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:03.867761 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:03.867671 2569 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-48.ec2.internal.18a71d905a2f2388 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-48.ec2.internal,UID:ip-10-0-136-48.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-136-48.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:03.846484872 +0000 UTC m=+0.505238928,LastTimestamp:2026-04-17 10:18:03.846484872 +0000 UTC m=+0.505238928,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:03.908536 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.891858 2569 manager.go:341] "Starting Device Plugin manager" Apr 17 10:18:03.908536 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:03.891888 2569 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 10:18:03.908536 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.891896 2569 server.go:85] "Starting device plugin registration server" Apr 17 10:18:03.908536 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.892102 2569 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 10:18:03.908536 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.892114 2569 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 10:18:03.908536 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.892184 2569 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Apr 17 10:18:03.908536 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.892254 2569 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Apr 17 10:18:03.908536 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.892262 2569 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 10:18:03.908536 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:03.892721 2569 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="non-existent label \"crio-containers\"" Apr 17 10:18:03.908536 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:03.892750 2569 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-136-48.ec2.internal\" not found" Apr 17 10:18:03.908536 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:03.904593 2569 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-48.ec2.internal.18a71d905d0760cd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-48.ec2.internal,UID:ip-10-0-136-48.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:03.894210765 +0000 UTC m=+0.552964819,LastTimestamp:2026-04-17 10:18:03.894210765 +0000 UTC m=+0.552964819,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:03.963255 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.963201 2569 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 17 10:18:03.964341 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.964323 2569 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 17 10:18:03.964341 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.964345 2569 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 17 10:18:03.964476 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.964374 2569 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 17 10:18:03.964476 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.964382 2569 kubelet.go:2451] "Starting kubelet main sync loop" Apr 17 10:18:03.964476 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:03.964410 2569 kubelet.go:2475] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Apr 17 10:18:03.973171 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:03.973147 2569 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 10:18:03.992393 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.992379 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 17 10:18:03.993047 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.993030 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientMemory" Apr 17 10:18:03.993131 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.993060 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasNoDiskPressure" Apr 17 10:18:03.993131 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.993071 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientPID" Apr 17 10:18:03.993131 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:03.993096 2569 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-136-48.ec2.internal" Apr 17 10:18:04.000710 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:04.000640 2569 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-48.ec2.internal.18a71d905a2eaf08\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-48.ec2.internal.18a71d905a2eaf08 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-48.ec2.internal,UID:ip-10-0-136-48.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-136-48.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:03.846455048 +0000 UTC m=+0.505209102,LastTimestamp:2026-04-17 10:18:03.993047141 +0000 UTC m=+0.651801199,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:04.004158 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:04.004139 2569 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-136-48.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-136-48.ec2.internal" Apr 17 10:18:04.004158 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:04.004109 2569 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-48.ec2.internal.18a71d905a2efe0c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{ip-10-0-136-48.ec2.internal.18a71d905a2efe0c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-48.ec2.internal,UID:ip-10-0-136-48.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-136-48.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:03.846475276 +0000 UTC m=+0.505229329,LastTimestamp:2026-04-17 10:18:03.993065712 +0000 UTC m=+0.651819765,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:04.007992 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:04.007935 2569 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-48.ec2.internal.18a71d905a2f2388\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-48.ec2.internal.18a71d905a2f2388 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-48.ec2.internal,UID:ip-10-0-136-48.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-136-48.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:03.846484872 +0000 UTC m=+0.505238928,LastTimestamp:2026-04-17 10:18:03.993075084 +0000 UTC m=+0.651829136,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:04.038075 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:04.038052 2569 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-136-48.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Apr 17 10:18:04.065150 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.065132 2569 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal","kube-system/kube-apiserver-proxy-ip-10-0-136-48.ec2.internal"] Apr 17 10:18:04.065213 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.065199 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 17 10:18:04.066708 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.066691 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientMemory" Apr 17 10:18:04.066795 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.066718 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasNoDiskPressure" Apr 17 10:18:04.066795 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.066731 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientPID" Apr 17 10:18:04.068975 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.068961 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 17 
10:18:04.069140 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.069122 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal" Apr 17 10:18:04.069223 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.069164 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 17 10:18:04.069679 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.069662 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientMemory" Apr 17 10:18:04.069766 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.069689 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasNoDiskPressure" Apr 17 10:18:04.069766 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.069700 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientPID" Apr 17 10:18:04.069766 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.069729 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientMemory" Apr 17 10:18:04.069766 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.069746 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasNoDiskPressure" Apr 17 10:18:04.069766 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.069760 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientPID" Apr 17 10:18:04.071941 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.071926 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-136-48.ec2.internal" Apr 17 10:18:04.072010 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.071950 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 17 10:18:04.072581 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.072559 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientMemory" Apr 17 10:18:04.072652 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.072585 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasNoDiskPressure" Apr 17 10:18:04.072652 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.072596 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientPID" Apr 17 10:18:04.075820 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:04.075758 2569 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-48.ec2.internal.18a71d905a2eaf08\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-48.ec2.internal.18a71d905a2eaf08 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-48.ec2.internal,UID:ip-10-0-136-48.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-136-48.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:03.846455048 +0000 UTC m=+0.505209102,LastTimestamp:2026-04-17 10:18:04.066706763 +0000 UTC m=+0.725460815,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:04.084780 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:04.084716 2569 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-48.ec2.internal.18a71d905a2efe0c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-48.ec2.internal.18a71d905a2efe0c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-48.ec2.internal,UID:ip-10-0-136-48.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-136-48.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:03.846475276 +0000 UTC m=+0.505229329,LastTimestamp:2026-04-17 10:18:04.066724262 +0000 UTC m=+0.725478315,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:04.093000 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:04.092983 2569 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-136-48.ec2.internal\" not found" node="ip-10-0-136-48.ec2.internal" Apr 17 10:18:04.096266 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:04.096190 2569 event.go:359] "Server rejected event (will not retry!)" err="events 
\"ip-10-0-136-48.ec2.internal.18a71d905a2f2388\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-48.ec2.internal.18a71d905a2f2388 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-48.ec2.internal,UID:ip-10-0-136-48.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-136-48.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:03.846484872 +0000 UTC m=+0.505238928,LastTimestamp:2026-04-17 10:18:04.06673739 +0000 UTC m=+0.725491451,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:04.097588 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:04.097567 2569 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-136-48.ec2.internal\" not found" node="ip-10-0-136-48.ec2.internal" Apr 17 10:18:04.105464 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:04.105405 2569 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-48.ec2.internal.18a71d905a2eaf08\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-48.ec2.internal.18a71d905a2eaf08 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-48.ec2.internal,UID:ip-10-0-136-48.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-136-48.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:03.846455048 +0000 UTC m=+0.505209102,LastTimestamp:2026-04-17 10:18:04.069677783 +0000 UTC m=+0.728431840,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:04.116645 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:04.116591 2569 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-48.ec2.internal.18a71d905a2efe0c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-48.ec2.internal.18a71d905a2efe0c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-48.ec2.internal,UID:ip-10-0-136-48.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-136-48.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:03.846475276 +0000 UTC m=+0.505229329,LastTimestamp:2026-04-17 10:18:04.069694885 +0000 UTC m=+0.728448938,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:04.123960 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:04.123906 2569 event.go:359] "Server rejected event (will 
not retry!)" err="events \"ip-10-0-136-48.ec2.internal.18a71d905a2f2388\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-48.ec2.internal.18a71d905a2f2388 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-48.ec2.internal,UID:ip-10-0-136-48.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-136-48.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:03.846484872 +0000 UTC m=+0.505238928,LastTimestamp:2026-04-17 10:18:04.069703877 +0000 UTC m=+0.728457929,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:04.133228 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:04.133166 2569 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-48.ec2.internal.18a71d905a2eaf08\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-48.ec2.internal.18a71d905a2eaf08 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-48.ec2.internal,UID:ip-10-0-136-48.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-136-48.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:03.846455048 +0000 UTC m=+0.505209102,LastTimestamp:2026-04-17 10:18:04.069739015 +0000 UTC m=+0.728493068,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:04.140611 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:04.140553 2569 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-48.ec2.internal.18a71d905a2efe0c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-48.ec2.internal.18a71d905a2efe0c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-48.ec2.internal,UID:ip-10-0-136-48.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-136-48.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:03.846475276 +0000 UTC m=+0.505229329,LastTimestamp:2026-04-17 10:18:04.069752546 +0000 UTC m=+0.728506604,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:04.148482 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:04.148424 2569 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-48.ec2.internal.18a71d905a2f2388\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{ip-10-0-136-48.ec2.internal.18a71d905a2f2388 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-48.ec2.internal,UID:ip-10-0-136-48.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-136-48.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:03.846484872 +0000 UTC m=+0.505238928,LastTimestamp:2026-04-17 10:18:04.069765623 +0000 UTC m=+0.728519677,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:04.163054 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:04.162997 2569 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-48.ec2.internal.18a71d905a2eaf08\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-48.ec2.internal.18a71d905a2eaf08 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-48.ec2.internal,UID:ip-10-0-136-48.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-136-48.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:03.846455048 +0000 UTC m=+0.505209102,LastTimestamp:2026-04-17 10:18:04.072573513 +0000 UTC m=+0.731327566,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:04.176609 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:04.176546 2569 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-48.ec2.internal.18a71d905a2efe0c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-48.ec2.internal.18a71d905a2efe0c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-48.ec2.internal,UID:ip-10-0-136-48.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-136-48.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:03.846475276 +0000 UTC m=+0.505229329,LastTimestamp:2026-04-17 10:18:04.072589691 +0000 UTC m=+0.731343745,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:04.194805 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:04.194749 2569 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-48.ec2.internal.18a71d905a2f2388\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-48.ec2.internal.18a71d905a2f2388 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-48.ec2.internal,UID:ip-10-0-136-48.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-136-48.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:03.846484872 +0000 UTC m=+0.505238928,LastTimestamp:2026-04-17 10:18:04.072602253 +0000 UTC m=+0.731356307,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:04.204895 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.204879 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 17 10:18:04.205587 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.205567 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientMemory" Apr 17 10:18:04.205674 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.205597 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasNoDiskPressure" Apr 17 10:18:04.205674 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.205608 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientPID" Apr 17 10:18:04.205674 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.205628 2569 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-136-48.ec2.internal" Apr 17 10:18:04.216161 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:04.216071 2569 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-48.ec2.internal.18a71d905a2eaf08\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-48.ec2.internal.18a71d905a2eaf08 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-48.ec2.internal,UID:ip-10-0-136-48.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-136-48.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:03.846455048 +0000 UTC m=+0.505209102,LastTimestamp:2026-04-17 10:18:04.205584753 +0000 UTC m=+0.864338806,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:04.223970 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:04.223954 2569 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-136-48.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-136-48.ec2.internal" Apr 17 10:18:04.224024 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:04.223946 2569 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-48.ec2.internal.18a71d905a2efe0c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-48.ec2.internal.18a71d905a2efe0c default 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-48.ec2.internal,UID:ip-10-0-136-48.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-136-48.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:03.846475276 +0000 UTC m=+0.505229329,LastTimestamp:2026-04-17 10:18:04.205602903 +0000 UTC m=+0.864356955,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:04.226010 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:04.225959 2569 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-48.ec2.internal.18a71d905a2f2388\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-48.ec2.internal.18a71d905a2f2388 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-48.ec2.internal,UID:ip-10-0-136-48.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-136-48.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:03.846484872 +0000 UTC m=+0.505238928,LastTimestamp:2026-04-17 10:18:04.205612416 +0000 UTC m=+0.864366470,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:04.227991 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.227974 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/89fe943de065f151bde50a0b04d91a20-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal\" (UID: \"89fe943de065f151bde50a0b04d91a20\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal" Apr 17 10:18:04.228033 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.228000 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/89fe943de065f151bde50a0b04d91a20-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal\" (UID: \"89fe943de065f151bde50a0b04d91a20\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal" Apr 17 10:18:04.228033 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.228015 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/b5ac43e20b2029d4a2be3d7bfa5c6771-config\") pod \"kube-apiserver-proxy-ip-10-0-136-48.ec2.internal\" (UID: \"b5ac43e20b2029d4a2be3d7bfa5c6771\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-136-48.ec2.internal" Apr 17 10:18:04.328658 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.328639 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/89fe943de065f151bde50a0b04d91a20-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal\" (UID: \"89fe943de065f151bde50a0b04d91a20\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal" Apr 17 10:18:04.328704 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.328664 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/89fe943de065f151bde50a0b04d91a20-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal\" (UID: \"89fe943de065f151bde50a0b04d91a20\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal" Apr 17 10:18:04.328704 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.328672 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/89fe943de065f151bde50a0b04d91a20-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal\" (UID: \"89fe943de065f151bde50a0b04d91a20\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal" Apr 17 10:18:04.328704 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.328691 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/b5ac43e20b2029d4a2be3d7bfa5c6771-config\") pod \"kube-apiserver-proxy-ip-10-0-136-48.ec2.internal\" (UID: \"b5ac43e20b2029d4a2be3d7bfa5c6771\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-136-48.ec2.internal" Apr 17 10:18:04.328795 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.328716 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/89fe943de065f151bde50a0b04d91a20-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal\" (UID: \"89fe943de065f151bde50a0b04d91a20\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal" Apr 17 10:18:04.328795 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.328727 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/b5ac43e20b2029d4a2be3d7bfa5c6771-config\") pod \"kube-apiserver-proxy-ip-10-0-136-48.ec2.internal\" (UID: \"b5ac43e20b2029d4a2be3d7bfa5c6771\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-136-48.ec2.internal" Apr 17 10:18:04.396781 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.396755 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal" Apr 17 10:18:04.399420 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.399404 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-136-48.ec2.internal" Apr 17 10:18:04.440584 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:04.440560 2569 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-136-48.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Apr 17 10:18:04.624086 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.624057 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 17 10:18:04.625154 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.625135 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientMemory" Apr 17 10:18:04.625224 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.625169 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasNoDiskPressure" Apr 17 10:18:04.625224 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.625180 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientPID" Apr 17 10:18:04.625224 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.625204 2569 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-136-48.ec2.internal" Apr 17 10:18:04.634552 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:04.634474 2569 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-48.ec2.internal.18a71d905a2eaf08\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-48.ec2.internal.18a71d905a2eaf08 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-48.ec2.internal,UID:ip-10-0-136-48.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-136-48.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:03.846455048 +0000 UTC m=+0.505209102,LastTimestamp:2026-04-17 10:18:04.625153767 +0000 UTC m=+1.283907821,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:04.642277 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:04.642256 2569 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-136-48.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-136-48.ec2.internal" Apr 17 10:18:04.642416 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:04.642326 2569 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-136-48.ec2.internal.18a71d905a2efe0c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-136-48.ec2.internal.18a71d905a2efe0c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-136-48.ec2.internal,UID:ip-10-0-136-48.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-136-48.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:03.846475276 +0000 UTC m=+0.505229329,LastTimestamp:2026-04-17 10:18:04.625174289 +0000 UTC m=+1.283928342,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:04.818566 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:04.818528 2569 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-136-48.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 10:18:04.827517 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.827493 2569 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-48.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 17 10:18:04.857099 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:04.857061 2569 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 10:18:04.894912 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:04.894887 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5ac43e20b2029d4a2be3d7bfa5c6771.slice/crio-077d2d609d6571214977b84eb294ffb639b999d5f02572805532376ff9a0f661 WatchSource:0}: Error finding container 077d2d609d6571214977b84eb294ffb639b999d5f02572805532376ff9a0f661: Status 404 returned error can't find the container with id 077d2d609d6571214977b84eb294ffb639b999d5f02572805532376ff9a0f661 Apr 17 10:18:04.900910 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.900823 2569 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 17 10:18:04.905772 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:04.905752 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89fe943de065f151bde50a0b04d91a20.slice/crio-5a2e8f4a6b9a0c07f7e77d0190bc0db407a174ca483cabe2b214693211946910 WatchSource:0}: Error finding container 5a2e8f4a6b9a0c07f7e77d0190bc0db407a174ca483cabe2b214693211946910: Status 404 returned error can't find the container with id 5a2e8f4a6b9a0c07f7e77d0190bc0db407a174ca483cabe2b214693211946910 Apr 17 10:18:04.912671 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:04.912599 2569 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{kube-apiserver-proxy-ip-10-0-136-48.ec2.internal.18a71d90990af0f2 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-proxy-ip-10-0-136-48.ec2.internal,UID:b5ac43e20b2029d4a2be3d7bfa5c6771,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{haproxy},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:294af5c64228434d1ed6ee8ea3ac802e3c999aa847223e3b2efa18425a9fe421\",Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:04.901077234 +0000 UTC m=+1.559831273,LastTimestamp:2026-04-17 10:18:04.901077234 +0000 UTC m=+1.559831273,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:04.921819 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:04.921755 2569 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal.18a71d90996a1cf1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal,UID:89fe943de065f151bde50a0b04d91a20,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\",Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:04.907314417 +0000 UTC m=+1.566068457,LastTimestamp:2026-04-17 10:18:04.907314417 +0000 UTC m=+1.566068457,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:04.968560 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.968505 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal" event={"ID":"89fe943de065f151bde50a0b04d91a20","Type":"ContainerStarted","Data":"5a2e8f4a6b9a0c07f7e77d0190bc0db407a174ca483cabe2b214693211946910"} Apr 17 10:18:04.969430 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:04.969409 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-136-48.ec2.internal" event={"ID":"b5ac43e20b2029d4a2be3d7bfa5c6771","Type":"ContainerStarted","Data":"077d2d609d6571214977b84eb294ffb639b999d5f02572805532376ff9a0f661"} Apr 17 10:18:05.097787 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:05.097760 2569 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 10:18:05.248634 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:05.248566 2569 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-136-48.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="1.6s" Apr 17 10:18:05.443154 ip-10-0-136-48 
kubenswrapper[2569]: I0417 10:18:05.443121 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 17 10:18:05.447078 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:05.446343 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientMemory" Apr 17 10:18:05.447078 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:05.446398 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasNoDiskPressure" Apr 17 10:18:05.447078 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:05.446413 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientPID" Apr 17 10:18:05.447078 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:05.446446 2569 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-136-48.ec2.internal" Apr 17 10:18:05.464537 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:05.464502 2569 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-136-48.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-136-48.ec2.internal" Apr 17 10:18:05.499238 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:05.499174 2569 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 10:18:05.826195 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:05.826167 2569 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-48.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 17 10:18:06.554559 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:06.554489 2569 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{kube-apiserver-proxy-ip-10-0-136-48.ec2.internal.18a71d90faf6f768 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-proxy-ip-10-0-136-48.ec2.internal,UID:b5ac43e20b2029d4a2be3d7bfa5c6771,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{haproxy},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:294af5c64228434d1ed6ee8ea3ac802e3c999aa847223e3b2efa18425a9fe421\" in 1.642s (1.642s including waiting). 
Image size: 488332864 bytes.,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:06.543935336 +0000 UTC m=+3.202689381,LastTimestamp:2026-04-17 10:18:06.543935336 +0000 UTC m=+3.202689381,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:06.563464 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:06.563369 2569 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal.18a71d90fb224d34 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal,UID:89fe943de065f151bde50a0b04d91a20,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\" in 1.639s (1.639s including waiting). Image size: 468435751 bytes.,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:06.546775348 +0000 UTC m=+3.205529392,LastTimestamp:2026-04-17 10:18:06.546775348 +0000 UTC m=+3.205529392,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:06.630173 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:06.630090 2569 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{kube-apiserver-proxy-ip-10-0-136-48.ec2.internal.18a71d90ff7dde1a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-proxy-ip-10-0-136-48.ec2.internal,UID:b5ac43e20b2029d4a2be3d7bfa5c6771,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{haproxy},},Reason:Created,Message:Created container: haproxy,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:06.619885082 +0000 UTC m=+3.278639138,LastTimestamp:2026-04-17 10:18:06.619885082 +0000 UTC m=+3.278639138,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:06.637205 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:06.637128 2569 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{kube-apiserver-proxy-ip-10-0-136-48.ec2.internal.18a71d90ffe3a757 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-proxy-ip-10-0-136-48.ec2.internal,UID:b5ac43e20b2029d4a2be3d7bfa5c6771,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{haproxy},},Reason:Started,Message:Started container 
haproxy,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:06.626555735 +0000 UTC m=+3.285309791,LastTimestamp:2026-04-17 10:18:06.626555735 +0000 UTC m=+3.285309791,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:06.808330 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:06.808246 2569 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 10:18:06.827277 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:06.827254 2569 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-48.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 17 10:18:06.857386 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:06.857343 2569 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-136-48.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="3.2s" Apr 17 10:18:06.896768 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:06.896741 2569 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-136-48.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 10:18:06.974375 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:06.974333 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-136-48.ec2.internal" event={"ID":"b5ac43e20b2029d4a2be3d7bfa5c6771","Type":"ContainerStarted","Data":"4c5c790d7cf3da887f4e2d460d49174e084bed4a66ddbd0151ea019cc2796103"} Apr 17 10:18:06.974514 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:06.974428 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 17 10:18:06.975328 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:06.975306 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientMemory" Apr 17 10:18:06.975458 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:06.975343 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasNoDiskPressure" Apr 17 10:18:06.975458 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:06.975376 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientPID" Apr 17 10:18:06.975565 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:06.975543 2569 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-136-48.ec2.internal\" not found" node="ip-10-0-136-48.ec2.internal" Apr 17 10:18:07.065432 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:07.065202 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 17 10:18:07.066252 ip-10-0-136-48 
kubenswrapper[2569]: I0417 10:18:07.066236 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientMemory" Apr 17 10:18:07.066306 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:07.066268 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasNoDiskPressure" Apr 17 10:18:07.066306 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:07.066283 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientPID" Apr 17 10:18:07.066388 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:07.066309 2569 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-136-48.ec2.internal" Apr 17 10:18:07.070692 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:07.070232 2569 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal.18a71d9119de1c81 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal,UID:89fe943de065f151bde50a0b04d91a20,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:07.062400129 +0000 UTC m=+3.721154185,LastTimestamp:2026-04-17 10:18:07.062400129 +0000 UTC m=+3.721154185,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:07.077154 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:07.077132 2569 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-136-48.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-136-48.ec2.internal" Apr 17 10:18:07.077269 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:07.077204 2569 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal.18a71d911a71da0e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal,UID:89fe943de065f151bde50a0b04d91a20,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:07.072082446 +0000 UTC m=+3.730836704,LastTimestamp:2026-04-17 10:18:07.072082446 +0000 UTC m=+3.730836704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:07.724152 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:07.724122 2569 
reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 10:18:07.826255 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:07.826234 2569 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-48.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 17 10:18:07.977497 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:07.977416 2569 generic.go:358] "Generic (PLEG): container finished" podID="89fe943de065f151bde50a0b04d91a20" containerID="510797fa99f8e5626a3ee31b77b08575f6a47d18154ef297b69f63c2db8f5adb" exitCode=0 Apr 17 10:18:07.977808 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:07.977508 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 17 10:18:07.977808 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:07.977507 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal" event={"ID":"89fe943de065f151bde50a0b04d91a20","Type":"ContainerDied","Data":"510797fa99f8e5626a3ee31b77b08575f6a47d18154ef297b69f63c2db8f5adb"} Apr 17 10:18:07.977808 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:07.977525 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 17 10:18:07.978457 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:07.978440 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientMemory" Apr 17 10:18:07.978550 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:07.978466 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasNoDiskPressure" Apr 17 10:18:07.978550 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:07.978476 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientPID" Apr 17 10:18:07.978550 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:07.978443 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientMemory" Apr 17 10:18:07.978550 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:07.978542 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasNoDiskPressure" Apr 17 10:18:07.978737 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:07.978557 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientPID" Apr 17 10:18:07.978737 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:07.978654 2569 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-136-48.ec2.internal\" not found" node="ip-10-0-136-48.ec2.internal" Apr 17 10:18:07.978836 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:07.978738 2569 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-136-48.ec2.internal\" not found" node="ip-10-0-136-48.ec2.internal" Apr 17 10:18:07.989906 ip-10-0-136-48 
kubenswrapper[2569]: E0417 10:18:07.989806 2569 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal.18a71d915099d764 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal,UID:89fe943de065f151bde50a0b04d91a20,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\" already present on machine,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:07.980672868 +0000 UTC m=+4.639426927,LastTimestamp:2026-04-17 10:18:07.980672868 +0000 UTC m=+4.639426927,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:08.092077 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:08.092004 2569 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal.18a71d9156b06dbf openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal,UID:89fe943de065f151bde50a0b04d91a20,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:08.082816447 +0000 UTC m=+4.741570491,LastTimestamp:2026-04-17 10:18:08.082816447 +0000 UTC m=+4.741570491,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:08.100967 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:08.100900 2569 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal.18a71d91572e1ab9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal,UID:89fe943de065f151bde50a0b04d91a20,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:08.091052729 +0000 UTC m=+4.749806791,LastTimestamp:2026-04-17 10:18:08.091052729 +0000 UTC m=+4.749806791,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:08.118084 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:08.118054 2569 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 10:18:08.825587 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:08.825562 2569 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-48.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 17 10:18:08.979838 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:08.979811 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal_89fe943de065f151bde50a0b04d91a20/kube-rbac-proxy-crio/0.log" Apr 17 10:18:08.980195 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:08.980055 2569 generic.go:358] "Generic (PLEG): container finished" podID="89fe943de065f151bde50a0b04d91a20" containerID="570dc6aa87e1813f66321b1cd7925c078d2dd21dfa0f32bb207b89bc0558acc9" exitCode=1 Apr 17 10:18:08.980195 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:08.980083 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal" event={"ID":"89fe943de065f151bde50a0b04d91a20","Type":"ContainerDied","Data":"570dc6aa87e1813f66321b1cd7925c078d2dd21dfa0f32bb207b89bc0558acc9"} Apr 17 10:18:08.980195 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:08.980148 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 17 10:18:08.980975 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:08.980960 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientMemory" Apr 17 10:18:08.981035 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:08.980988 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasNoDiskPressure" Apr 17 10:18:08.981035 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:08.981002 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientPID" Apr 17 10:18:08.981290 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:08.981274 2569 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-136-48.ec2.internal\" not found" node="ip-10-0-136-48.ec2.internal" Apr 17 10:18:08.981332 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:08.981326 2569 scope.go:117] "RemoveContainer" containerID="570dc6aa87e1813f66321b1cd7925c078d2dd21dfa0f32bb207b89bc0558acc9" Apr 17 10:18:08.990519 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:08.990451 2569 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal.18a71d915099d764\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" 
event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal.18a71d915099d764 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal,UID:89fe943de065f151bde50a0b04d91a20,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\" already present on machine,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:07.980672868 +0000 UTC m=+4.639426927,LastTimestamp:2026-04-17 10:18:08.983120889 +0000 UTC m=+5.641874953,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:09.088797 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:09.088718 2569 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal.18a71d9156b06dbf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal.18a71d9156b06dbf openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal,UID:89fe943de065f151bde50a0b04d91a20,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:08.082816447 +0000 UTC m=+4.741570491,LastTimestamp:2026-04-17 10:18:09.078756971 +0000 UTC m=+5.737511030,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:09.097485 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:09.097391 2569 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal.18a71d91572e1ab9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal.18a71d91572e1ab9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal,UID:89fe943de065f151bde50a0b04d91a20,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:08.091052729 +0000 UTC m=+4.749806791,LastTimestamp:2026-04-17 10:18:09.087177146 +0000 UTC m=+5.745931185,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:09.826588 ip-10-0-136-48 
kubenswrapper[2569]: I0417 10:18:09.826556 2569 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-48.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 17 10:18:09.982773 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:09.982747 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal_89fe943de065f151bde50a0b04d91a20/kube-rbac-proxy-crio/1.log" Apr 17 10:18:09.983170 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:09.983152 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal_89fe943de065f151bde50a0b04d91a20/kube-rbac-proxy-crio/0.log" Apr 17 10:18:09.983488 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:09.983463 2569 generic.go:358] "Generic (PLEG): container finished" podID="89fe943de065f151bde50a0b04d91a20" containerID="d8cd22736e05b73c2549acb30c6571069326d12db568186668d4431a11e92182" exitCode=1 Apr 17 10:18:09.983595 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:09.983506 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal" event={"ID":"89fe943de065f151bde50a0b04d91a20","Type":"ContainerDied","Data":"d8cd22736e05b73c2549acb30c6571069326d12db568186668d4431a11e92182"} Apr 17 10:18:09.983595 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:09.983540 2569 scope.go:117] "RemoveContainer" containerID="570dc6aa87e1813f66321b1cd7925c078d2dd21dfa0f32bb207b89bc0558acc9" Apr 17 10:18:09.983595 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:09.983550 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 17 10:18:09.984922 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:09.984907 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientMemory" Apr 17 10:18:09.984977 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:09.984940 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasNoDiskPressure" Apr 17 10:18:09.984977 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:09.984957 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientPID" Apr 17 10:18:09.985214 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:09.985198 2569 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-136-48.ec2.internal\" not found" node="ip-10-0-136-48.ec2.internal" Apr 17 10:18:09.985281 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:09.985247 2569 scope.go:117] "RemoveContainer" containerID="d8cd22736e05b73c2549acb30c6571069326d12db568186668d4431a11e92182" Apr 17 10:18:09.985415 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:09.985395 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal_openshift-machine-config-operator(89fe943de065f151bde50a0b04d91a20)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal" 
podUID="89fe943de065f151bde50a0b04d91a20" Apr 17 10:18:09.994400 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:09.994283 2569 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal.18a71d91c816bbd1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal,UID:89fe943de065f151bde50a0b04d91a20,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal_openshift-machine-config-operator(89fe943de065f151bde50a0b04d91a20),Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:09.985346513 +0000 UTC m=+6.644100566,LastTimestamp:2026-04-17 10:18:09.985346513 +0000 UTC m=+6.644100566,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:10.072583 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:10.072555 2569 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-136-48.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="6.4s" Apr 17 10:18:10.277849 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:10.277774 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 17 10:18:10.278844 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:10.278824 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientMemory" Apr 17 10:18:10.278948 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:10.278855 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasNoDiskPressure" Apr 17 10:18:10.278948 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:10.278867 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientPID" Apr 17 10:18:10.278948 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:10.278897 2569 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-136-48.ec2.internal" Apr 17 10:18:10.297742 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:10.297716 2569 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-136-48.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-136-48.ec2.internal" Apr 17 10:18:10.825440 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:10.825409 2569 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-48.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 17 10:18:10.921875 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:10.921850 2569 
reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 10:18:10.985549 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:10.985529 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal_89fe943de065f151bde50a0b04d91a20/kube-rbac-proxy-crio/1.log" Apr 17 10:18:10.985989 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:10.985976 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 17 10:18:10.986952 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:10.986935 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientMemory" Apr 17 10:18:10.987049 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:10.986964 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasNoDiskPressure" Apr 17 10:18:10.987049 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:10.986974 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientPID" Apr 17 10:18:10.987194 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:10.987180 2569 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-136-48.ec2.internal\" not found" node="ip-10-0-136-48.ec2.internal" Apr 17 10:18:10.987252 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:10.987226 2569 scope.go:117] "RemoveContainer" containerID="d8cd22736e05b73c2549acb30c6571069326d12db568186668d4431a11e92182" Apr 17 10:18:10.987401 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:10.987384 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal_openshift-machine-config-operator(89fe943de065f151bde50a0b04d91a20)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal" podUID="89fe943de065f151bde50a0b04d91a20" Apr 17 10:18:10.996056 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:10.995988 2569 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal.18a71d91c816bbd1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal.18a71d91c816bbd1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal,UID:89fe943de065f151bde50a0b04d91a20,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal_openshift-machine-config-operator(89fe943de065f151bde50a0b04d91a20),Source:EventSource{Component:kubelet,Host:ip-10-0-136-48.ec2.internal,},FirstTimestamp:2026-04-17 10:18:09.985346513 +0000 UTC 
m=+6.644100566,LastTimestamp:2026-04-17 10:18:10.987343938 +0000 UTC m=+7.646097990,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-136-48.ec2.internal,}" Apr 17 10:18:11.077220 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:11.077165 2569 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-136-48.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 10:18:11.294907 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:11.294879 2569 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 10:18:11.826557 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:11.826532 2569 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-48.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 17 10:18:12.826223 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:12.826196 2569 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-48.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 17 10:18:13.628647 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:13.628611 2569 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 10:18:13.827230 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:13.827203 2569 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-48.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 17 10:18:13.893561 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:13.893485 2569 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-136-48.ec2.internal\" not found" Apr 17 10:18:14.826347 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:14.826316 2569 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-48.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 17 10:18:15.826249 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:15.826220 2569 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-48.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 17 10:18:16.481956 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:16.481916 2569 controller.go:145] "Failed 
to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-136-48.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 17 10:18:16.698110 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:16.698074 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 17 10:18:16.699252 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:16.699233 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientMemory" Apr 17 10:18:16.699317 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:16.699268 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasNoDiskPressure" Apr 17 10:18:16.699317 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:16.699282 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientPID" Apr 17 10:18:16.699317 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:16.699308 2569 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-136-48.ec2.internal" Apr 17 10:18:16.713846 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:16.713817 2569 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-136-48.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-136-48.ec2.internal" Apr 17 10:18:16.826673 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:16.826648 2569 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-48.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 17 10:18:17.827728 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:17.827690 2569 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-48.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 17 10:18:18.825964 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:18.825929 2569 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-48.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 17 10:18:19.826950 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:19.826921 2569 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-48.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 17 10:18:20.478473 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:20.478393 2569 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-136-48.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 10:18:20.828595 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:20.828571 2569 csi_plugin.go:988] Failed to contact API server when 
waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-48.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 17 10:18:21.827552 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:21.827520 2569 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-48.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 17 10:18:22.803088 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:22.803058 2569 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-d6hd4" Apr 17 10:18:22.838109 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:22.838088 2569 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-136-48.ec2.internal" not found Apr 17 10:18:22.863330 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:22.863315 2569 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-136-48.ec2.internal" not found Apr 17 10:18:22.936602 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:22.936585 2569 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-136-48.ec2.internal" not found Apr 17 10:18:23.054478 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:23.054408 2569 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Apr 17 10:18:23.210274 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:23.210245 2569 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-136-48.ec2.internal" not found Apr 17 10:18:23.210412 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:23.210292 2569 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-10-0-136-48.ec2.internal" not found Apr 17 10:18:23.255396 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:23.255370 2569 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Apr 17 10:18:23.256229 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:23.256212 2569 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-136-48.ec2.internal" not found Apr 17 10:18:23.278199 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:23.278183 2569 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-136-48.ec2.internal" not found Apr 17 10:18:23.335628 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:23.335590 2569 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-136-48.ec2.internal" not found Apr 17 10:18:23.487894 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:23.487864 2569 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-10-0-136-48.ec2.internal\" not found" node="ip-10-0-136-48.ec2.internal" Apr 17 10:18:23.594323 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:23.594271 2569 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-136-48.ec2.internal" not found Apr 17 10:18:23.594323 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:23.594294 2569 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-10-0-136-48.ec2.internal" not found Apr 17 10:18:23.714817 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:23.714791 2569 kubelet_node_status.go:413] "Setting node annotation to enable 
volume controller attach/detach" Apr 17 10:18:23.715901 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:23.715883 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientMemory" Apr 17 10:18:23.716006 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:23.715919 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasNoDiskPressure" Apr 17 10:18:23.716006 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:23.715934 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientPID" Apr 17 10:18:23.716006 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:23.715971 2569 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-136-48.ec2.internal" Apr 17 10:18:23.724301 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:23.724287 2569 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Apr 17 10:18:23.724420 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:23.724401 2569 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Apr 17 10:18:23.724479 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:23.724414 2569 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Apr 17 10:18:23.725726 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:23.725708 2569 kubelet_node_status.go:81] "Successfully registered node" node="ip-10-0-136-48.ec2.internal" Apr 17 10:18:23.725769 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:23.725733 2569 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-136-48.ec2.internal\": node \"ip-10-0-136-48.ec2.internal\" not found" Apr 17 10:18:23.750196 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:23.750173 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-48.ec2.internal\" not found" Apr 17 10:18:23.804588 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:23.804539 2569 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2028-04-16 10:13:22 +0000 UTC" deadline="2027-11-16 14:05:58.65716621 +0000 UTC" Apr 17 10:18:23.804588 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:23.804585 2569 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="13875h47m34.852584178s" Apr 17 10:18:23.834265 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:23.834244 2569 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Apr 17 10:18:23.850127 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:23.850084 2569 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Apr 17 10:18:23.850239 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:23.850226 2569 kubelet_node_status.go:515] "Error getting the current node from 
lister" err="node \"ip-10-0-136-48.ec2.internal\" not found" Apr 17 10:18:23.876074 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:23.876055 2569 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-vhsn8" Apr 17 10:18:23.883467 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:23.883448 2569 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-vhsn8" Apr 17 10:18:23.893613 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:23.893591 2569 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-136-48.ec2.internal\" not found" Apr 17 10:18:23.951134 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:23.951112 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-48.ec2.internal\" not found" Apr 17 10:18:23.965412 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:23.965397 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 17 10:18:23.966254 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:23.966238 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientMemory" Apr 17 10:18:23.966316 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:23.966272 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasNoDiskPressure" Apr 17 10:18:23.966316 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:23.966283 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientPID" Apr 17 10:18:23.966544 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:23.966531 2569 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-136-48.ec2.internal\" not found" node="ip-10-0-136-48.ec2.internal" Apr 17 10:18:23.966596 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:23.966584 2569 scope.go:117] "RemoveContainer" containerID="d8cd22736e05b73c2549acb30c6571069326d12db568186668d4431a11e92182" Apr 17 10:18:24.051638 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:24.051616 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-48.ec2.internal\" not found" Apr 17 10:18:24.152022 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:24.151965 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-48.ec2.internal\" not found" Apr 17 10:18:24.252555 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:24.252530 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-48.ec2.internal\" not found" Apr 17 10:18:24.353283 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:24.353254 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-48.ec2.internal\" not found" Apr 17 10:18:24.453871 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:24.453826 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-48.ec2.internal\" not found" Apr 17 10:18:24.554367 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:24.554326 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-48.ec2.internal\" not found" Apr 17 10:18:24.654942 ip-10-0-136-48 kubenswrapper[2569]: E0417 
10:18:24.654915 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-48.ec2.internal\" not found" Apr 17 10:18:24.755465 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:24.755421 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-48.ec2.internal\" not found" Apr 17 10:18:24.856346 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:24.856323 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-48.ec2.internal\" not found" Apr 17 10:18:24.884524 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:24.884493 2569 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-16 10:13:23 +0000 UTC" deadline="2027-09-26 07:08:27.816969152 +0000 UTC" Apr 17 10:18:24.884524 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:24.884519 2569 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="12644h50m2.932453474s" Apr 17 10:18:24.957112 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:24.957093 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-48.ec2.internal\" not found" Apr 17 10:18:25.005499 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:25.005451 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal_89fe943de065f151bde50a0b04d91a20/kube-rbac-proxy-crio/1.log" Apr 17 10:18:25.005774 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:25.005752 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal" event={"ID":"89fe943de065f151bde50a0b04d91a20","Type":"ContainerStarted","Data":"58276df9e64aa80e292cd7998191f8c06fe30defab013e812ae1011801c52081"} Apr 17 10:18:25.005900 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:25.005885 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 17 10:18:25.006860 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:25.006844 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientMemory" Apr 17 10:18:25.006952 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:25.006877 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasNoDiskPressure" Apr 17 10:18:25.006952 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:25.006895 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeHasSufficientPID" Apr 17 10:18:25.007108 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:25.007093 2569 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-136-48.ec2.internal\" not found" node="ip-10-0-136-48.ec2.internal" Apr 17 10:18:25.057609 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:25.057580 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-48.ec2.internal\" not found" Apr 17 10:18:25.158109 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:25.158081 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-48.ec2.internal\" not found" Apr 17 10:18:25.258715 ip-10-0-136-48 kubenswrapper[2569]: E0417 
10:18:25.258671 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-48.ec2.internal\" not found" Apr 17 10:18:25.336747 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:25.336725 2569 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Apr 17 10:18:25.359401 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:25.359378 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-48.ec2.internal\" not found" Apr 17 10:18:25.459970 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:25.459952 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-48.ec2.internal\" not found" Apr 17 10:18:25.560527 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:25.560509 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-48.ec2.internal\" not found" Apr 17 10:18:25.661039 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:25.661013 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-48.ec2.internal\" not found" Apr 17 10:18:25.761680 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:25.761659 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-48.ec2.internal\" not found" Apr 17 10:18:25.862391 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:25.862334 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-48.ec2.internal\" not found" Apr 17 10:18:25.885524 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:25.885498 2569 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-16 10:13:23 +0000 UTC" deadline="2027-09-12 00:26:02.477664733 +0000 UTC" Apr 17 10:18:25.885524 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:25.885520 2569 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="12302h7m36.592147092s" Apr 17 10:18:25.962603 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:25.962584 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-48.ec2.internal\" not found" Apr 17 10:18:26.063022 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:26.063000 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-48.ec2.internal\" not found" Apr 17 10:18:26.163582 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:26.163528 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-48.ec2.internal\" not found" Apr 17 10:18:26.264152 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:26.264135 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-48.ec2.internal\" not found" Apr 17 10:18:26.364818 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:26.364797 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-48.ec2.internal\" not found" Apr 17 10:18:26.465330 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:26.465292 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-48.ec2.internal\" not found" Apr 17 10:18:26.566187 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:26.566168 2569 kubelet_node_status.go:515] "Error getting the current node 
from lister" err="node \"ip-10-0-136-48.ec2.internal\" not found" Apr 17 10:18:26.667005 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:26.666984 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-136-48.ec2.internal\" not found"
[... the identical kubelet_node_status.go:515 "Error getting the current node from lister" err="node \"ip-10-0-136-48.ec2.internal\" not found" entry repeats at roughly 100 ms intervals from 10:18:26.767 through 10:18:37.534 while the kubelet waits for the Node object to appear; the repeats are omitted here and only the distinct entries from that interval are kept below ...]
Apr 17 10:18:33.894333 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:33.894298 2569 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-136-48.ec2.internal\" not found"
Apr 17 10:18:34.145185 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:34.145166 2569 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-136-48.ec2.internal\": node \"ip-10-0-136-48.ec2.internal\" not found"
Apr 17 10:18:37.584944 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.584891 2569 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 17 10:18:37.625945 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.625918 2569 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-136-48.ec2.internal"
Apr 17 10:18:37.634441 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.634423 2569 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 17 10:18:37.634519 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.634507 2569 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal"
Apr 17 10:18:37.646164 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.646145 2569 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 17
10:18:37.822986 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.822966 2569 apiserver.go:52] "Watching apiserver" Apr 17 10:18:37.828576 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.828558 2569 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Apr 17 10:18:37.829904 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.829876 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-kwrj6","kube-system/konnectivity-agent-zm8zp","openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8tvl6","openshift-dns/node-resolver-ksqkq","openshift-multus/multus-additional-cni-plugins-2t59m","openshift-multus/network-metrics-daemon-z6grc","openshift-network-diagnostics/network-check-target-gftlr","openshift-network-operator/iptables-alerter-vtt5p","openshift-ovn-kubernetes/ovnkube-node-qshzm","kube-system/kube-apiserver-proxy-ip-10-0-136-48.ec2.internal","openshift-cluster-node-tuning-operator/tuned-hlkpd","openshift-image-registry/node-ca-22g8b","openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal"] Apr 17 10:18:37.836890 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.836835 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-kwrj6" Apr 17 10:18:37.839236 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.839216 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Apr 17 10:18:37.839351 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.839261 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Apr 17 10:18:37.839351 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.839277 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Apr 17 10:18:37.839351 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.839278 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Apr 17 10:18:37.839527 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.839465 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-kzmwn\"" Apr 17 10:18:37.840158 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.840137 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-zm8zp" Apr 17 10:18:37.840247 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.840231 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8tvl6" Apr 17 10:18:37.842157 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.842138 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kube-system\"/\"konnectivity-ca-bundle\"" Apr 17 10:18:37.842481 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.842384 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"konnectivity-agent\"" Apr 17 10:18:37.842481 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.842398 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"openshift-service-ca.crt\"" Apr 17 10:18:37.842481 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.842458 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-metrics-serving-cert\"" Apr 17 10:18:37.842481 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.842474 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"default-dockercfg-gjf8s\"" Apr 17 10:18:37.842732 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.842488 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-sa-dockercfg-tv4kt\"" Apr 17 10:18:37.842732 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.842493 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"kube-root-ca.crt\"" Apr 17 10:18:37.843865 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.843588 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:37.845446 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.845406 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Apr 17 10:18:37.845800 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.845783 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Apr 17 10:18:37.845917 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.845899 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Apr 17 10:18:37.845917 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.845914 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Apr 17 10:18:37.846040 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.845930 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-glgn2\"" Apr 17 10:18:37.846040 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.845958 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Apr 17 10:18:37.846040 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.845980 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Apr 17 10:18:37.847068 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.847055 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-2t59m" Apr 17 10:18:37.848884 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.848861 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Apr 17 10:18:37.848884 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.848876 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Apr 17 10:18:37.849039 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.848980 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-c4lzl\"" Apr 17 10:18:37.850138 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.850121 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-vtt5p" Apr 17 10:18:37.851994 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.851978 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-dockercfg-dvbjs\"" Apr 17 10:18:37.851994 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.851991 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Apr 17 10:18:37.852255 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.852239 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Apr 17 10:18:37.852305 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.852290 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Apr 17 10:18:37.853224 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.853206 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gftlr" Apr 17 10:18:37.853317 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:37.853270 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-gftlr" podUID="1abeaef1-047c-4fff-a659-456e05294f94" Apr 17 10:18:37.856900 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.856882 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z6grc" Apr 17 10:18:37.856993 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:37.856949 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z6grc" podUID="56255f22-7072-487b-8723-978c296878fb" Apr 17 10:18:37.859778 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.859748 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-ksqkq" Apr 17 10:18:37.862462 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.862183 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Apr 17 10:18:37.862462 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.862199 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Apr 17 10:18:37.862702 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.862673 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-4cpp5\"" Apr 17 10:18:37.866261 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.866239 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-22g8b" Apr 17 10:18:37.866369 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.866245 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:37.868928 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.868911 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Apr 17 10:18:37.869455 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.869438 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"kube-root-ca.crt\"" Apr 17 10:18:37.869584 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.869566 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"tuned-dockercfg-wrk6p\"" Apr 17 10:18:37.869652 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.869575 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"openshift-service-ca.crt\"" Apr 17 10:18:37.869711 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.869651 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Apr 17 10:18:37.870029 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.870015 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Apr 17 10:18:37.870080 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.870016 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tbk5h\"" Apr 17 10:18:37.927618 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.927596 2569 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 17 10:18:37.985411 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:37.985350 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-proxy-ip-10-0-136-48.ec2.internal" podStartSLOduration=0.985337259 podStartE2EDuration="985.337259ms" podCreationTimestamp="2026-04-17 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 10:18:37.985335671 +0000 UTC m=+34.644089732" watchObservedRunningTime="2026-04-17 10:18:37.985337259 +0000 UTC m=+34.644091320" Apr 17 10:18:38.011524 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.011496 2569 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4cac3107-7535-4daf-bf6b-d5bf95844303-cni-binary-copy\") pod \"multus-additional-cni-plugins-2t59m\" (UID: \"4cac3107-7535-4daf-bf6b-d5bf95844303\") " pod="openshift-multus/multus-additional-cni-plugins-2t59m" Apr 17 10:18:38.011641 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.011534 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-host-run-netns\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.011641 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.011558 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-host-kubelet\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.011641 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.011599 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-log-socket\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.011641 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.011633 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-ovnkube-script-lib\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.011845 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.011656 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/90ac1d6e-66e2-4de9-8433-b5d2a8895e80-hosts-file\") pod \"node-resolver-ksqkq\" (UID: \"90ac1d6e-66e2-4de9-8433-b5d2a8895e80\") " pod="openshift-dns/node-resolver-ksqkq" Apr 17 10:18:38.011845 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.011696 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4cac3107-7535-4daf-bf6b-d5bf95844303-cnibin\") pod \"multus-additional-cni-plugins-2t59m\" (UID: \"4cac3107-7535-4daf-bf6b-d5bf95844303\") " pod="openshift-multus/multus-additional-cni-plugins-2t59m" Apr 17 10:18:38.011845 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.011743 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f-socket-dir\") pod \"aws-ebs-csi-driver-node-8tvl6\" (UID: \"17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8tvl6" Apr 17 10:18:38.011845 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.011769 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/4cac3107-7535-4daf-bf6b-d5bf95844303-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2t59m\" (UID: \"4cac3107-7535-4daf-bf6b-d5bf95844303\") " pod="openshift-multus/multus-additional-cni-plugins-2t59m" Apr 17 10:18:38.011845 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.011798 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-var-lib-kubelet\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.011845 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.011837 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-cnibin\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.012084 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.011872 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-host-var-lib-cni-multus\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.012084 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.011893 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-host-run-multus-certs\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.012084 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.011909 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aaa97102-f10d-49b4-83af-c47d0b2cd496-host-slash\") pod \"iptables-alerter-vtt5p\" (UID: \"aaa97102-f10d-49b4-83af-c47d0b2cd496\") " pod="openshift-network-operator/iptables-alerter-vtt5p" Apr 17 10:18:38.012084 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.011928 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-ovn-node-metrics-cert\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.012084 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.011944 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rttp8\" (UniqueName: \"kubernetes.io/projected/1abeaef1-047c-4fff-a659-456e05294f94-kube-api-access-rttp8\") pod \"network-check-target-gftlr\" (UID: \"1abeaef1-047c-4fff-a659-456e05294f94\") " pod="openshift-network-diagnostics/network-check-target-gftlr" Apr 17 10:18:38.012084 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.011988 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d8z9\" (UniqueName: \"kubernetes.io/projected/17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f-kube-api-access-7d8z9\") pod \"aws-ebs-csi-driver-node-8tvl6\" (UID: 
\"17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8tvl6" Apr 17 10:18:38.012084 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012019 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f-etc-selinux\") pod \"aws-ebs-csi-driver-node-8tvl6\" (UID: \"17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8tvl6" Apr 17 10:18:38.012084 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012048 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4cac3107-7535-4daf-bf6b-d5bf95844303-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2t59m\" (UID: \"4cac3107-7535-4daf-bf6b-d5bf95844303\") " pod="openshift-multus/multus-additional-cni-plugins-2t59m" Apr 17 10:18:38.012084 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012072 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-etc-kubernetes\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.012458 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012089 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-etc-systemd\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.012458 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012109 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-etc-tuned\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.012458 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012124 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-ovnkube-config\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.012458 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012139 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-etc-sysconfig\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.012458 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012168 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-host-var-lib-cni-bin\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.012458 ip-10-0-136-48 
kubenswrapper[2569]: I0417 10:18:38.012203 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-etc-kubernetes\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.012458 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012223 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-run-ovn\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.012458 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012244 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-multus-conf-dir\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.012458 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012264 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-host-run-ovn-kubernetes\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.012458 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012279 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-host-cni-bin\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.012458 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012296 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv86d\" (UniqueName: \"kubernetes.io/projected/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-kube-api-access-wv86d\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.012458 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012315 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/083d1f1c-be08-410d-a728-2affe73763a9-serviceca\") pod \"node-ca-22g8b\" (UID: \"083d1f1c-be08-410d-a728-2affe73763a9\") " pod="openshift-image-registry/node-ca-22g8b" Apr 17 10:18:38.012458 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012337 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f-registration-dir\") pod \"aws-ebs-csi-driver-node-8tvl6\" (UID: \"17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8tvl6" Apr 17 10:18:38.012458 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012367 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: 
\"kubernetes.io/configmap/4cac3107-7535-4daf-bf6b-d5bf95844303-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-2t59m\" (UID: \"4cac3107-7535-4daf-bf6b-d5bf95844303\") " pod="openshift-multus/multus-additional-cni-plugins-2t59m" Apr 17 10:18:38.012458 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012383 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/aaa97102-f10d-49b4-83af-c47d0b2cd496-iptables-alerter-script\") pod \"iptables-alerter-vtt5p\" (UID: \"aaa97102-f10d-49b4-83af-c47d0b2cd496\") " pod="openshift-network-operator/iptables-alerter-vtt5p" Apr 17 10:18:38.012458 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012415 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/db889039-4b7b-4564-b656-afd928d6bcbd-multus-daemon-config\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.013100 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012446 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-host-slash\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.013100 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012469 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-node-log\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.013100 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012493 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/083d1f1c-be08-410d-a728-2affe73763a9-host\") pod \"node-ca-22g8b\" (UID: \"083d1f1c-be08-410d-a728-2affe73763a9\") " pod="openshift-image-registry/node-ca-22g8b" Apr 17 10:18:38.013100 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012519 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bjs4\" (UniqueName: \"kubernetes.io/projected/4cac3107-7535-4daf-bf6b-d5bf95844303-kube-api-access-7bjs4\") pod \"multus-additional-cni-plugins-2t59m\" (UID: \"4cac3107-7535-4daf-bf6b-d5bf95844303\") " pod="openshift-multus/multus-additional-cni-plugins-2t59m" Apr 17 10:18:38.013100 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012545 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-multus-cni-dir\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.013100 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012571 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-lib-modules\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " 
pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.013100 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012594 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-host\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.013100 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012626 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-os-release\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.013100 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012651 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-run-openvswitch\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.013100 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012682 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f-kubelet-dir\") pod \"aws-ebs-csi-driver-node-8tvl6\" (UID: \"17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8tvl6" Apr 17 10:18:38.013100 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012717 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4cac3107-7535-4daf-bf6b-d5bf95844303-os-release\") pod \"multus-additional-cni-plugins-2t59m\" (UID: \"4cac3107-7535-4daf-bf6b-d5bf95844303\") " pod="openshift-multus/multus-additional-cni-plugins-2t59m" Apr 17 10:18:38.013100 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012751 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56255f22-7072-487b-8723-978c296878fb-metrics-certs\") pod \"network-metrics-daemon-z6grc\" (UID: \"56255f22-7072-487b-8723-978c296878fb\") " pod="openshift-multus/network-metrics-daemon-z6grc" Apr 17 10:18:38.013100 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012777 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwbn6\" (UniqueName: \"kubernetes.io/projected/083d1f1c-be08-410d-a728-2affe73763a9-kube-api-access-pwbn6\") pod \"node-ca-22g8b\" (UID: \"083d1f1c-be08-410d-a728-2affe73763a9\") " pod="openshift-image-registry/node-ca-22g8b" Apr 17 10:18:38.013100 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012792 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-run\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.013100 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012806 2569 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-host-var-lib-kubelet\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.013100 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012821 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-var-lib-openvswitch\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.013100 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012835 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdxnm\" (UniqueName: \"kubernetes.io/projected/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-kube-api-access-vdxnm\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.013727 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012850 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-multus-socket-dir-parent\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.013727 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012864 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/b43f5610-f9dd-49c2-9de2-5c1cca09f0d6-agent-certs\") pod \"konnectivity-agent-zm8zp\" (UID: \"b43f5610-f9dd-49c2-9de2-5c1cca09f0d6\") " pod="kube-system/konnectivity-agent-zm8zp" Apr 17 10:18:38.013727 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012877 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/b43f5610-f9dd-49c2-9de2-5c1cca09f0d6-konnectivity-ca\") pod \"konnectivity-agent-zm8zp\" (UID: \"b43f5610-f9dd-49c2-9de2-5c1cca09f0d6\") " pod="kube-system/konnectivity-agent-zm8zp" Apr 17 10:18:38.013727 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012899 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-host-run-netns\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.013727 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012913 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-run-systemd\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.013727 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012936 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-host-cni-netd\") pod 
\"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.013727 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.012982 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.013727 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.013005 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f-device-dir\") pod \"aws-ebs-csi-driver-node-8tvl6\" (UID: \"17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8tvl6" Apr 17 10:18:38.013727 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.013019 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-etc-modprobe-d\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.013727 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.013032 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/db889039-4b7b-4564-b656-afd928d6bcbd-cni-binary-copy\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.013727 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.013053 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w75n\" (UniqueName: \"kubernetes.io/projected/db889039-4b7b-4564-b656-afd928d6bcbd-kube-api-access-5w75n\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.013727 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.013073 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-env-overrides\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.013727 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.013092 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-474rc\" (UniqueName: \"kubernetes.io/projected/56255f22-7072-487b-8723-978c296878fb-kube-api-access-474rc\") pod \"network-metrics-daemon-z6grc\" (UID: \"56255f22-7072-487b-8723-978c296878fb\") " pod="openshift-multus/network-metrics-daemon-z6grc" Apr 17 10:18:38.013727 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.013106 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f-sys-fs\") pod \"aws-ebs-csi-driver-node-8tvl6\" (UID: \"17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f\") " 
pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8tvl6" Apr 17 10:18:38.013727 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.013126 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-etc-sysctl-d\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.013727 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.013147 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-etc-sysctl-conf\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.014180 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.013170 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-system-cni-dir\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.014180 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.013190 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-etc-openvswitch\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.014180 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.013208 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4cac3107-7535-4daf-bf6b-d5bf95844303-system-cni-dir\") pod \"multus-additional-cni-plugins-2t59m\" (UID: \"4cac3107-7535-4daf-bf6b-d5bf95844303\") " pod="openshift-multus/multus-additional-cni-plugins-2t59m" Apr 17 10:18:38.014180 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.013225 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6852\" (UniqueName: \"kubernetes.io/projected/aaa97102-f10d-49b4-83af-c47d0b2cd496-kube-api-access-d6852\") pod \"iptables-alerter-vtt5p\" (UID: \"aaa97102-f10d-49b4-83af-c47d0b2cd496\") " pod="openshift-network-operator/iptables-alerter-vtt5p" Apr 17 10:18:38.014180 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.013246 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-sys\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.014180 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.013259 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-host-run-k8s-cni-cncf-io\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.014180 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.013282 2569 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-systemd-units\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.014180 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.013323 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/90ac1d6e-66e2-4de9-8433-b5d2a8895e80-tmp-dir\") pod \"node-resolver-ksqkq\" (UID: \"90ac1d6e-66e2-4de9-8433-b5d2a8895e80\") " pod="openshift-dns/node-resolver-ksqkq" Apr 17 10:18:38.014180 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.013373 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfsfh\" (UniqueName: \"kubernetes.io/projected/90ac1d6e-66e2-4de9-8433-b5d2a8895e80-kube-api-access-gfsfh\") pod \"node-resolver-ksqkq\" (UID: \"90ac1d6e-66e2-4de9-8433-b5d2a8895e80\") " pod="openshift-dns/node-resolver-ksqkq" Apr 17 10:18:38.014180 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.013393 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-tmp\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.014180 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.013410 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-hostroot\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.033466 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.033424 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal" podStartSLOduration=1.033413926 podStartE2EDuration="1.033413926s" podCreationTimestamp="2026-04-17 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 10:18:38.033262233 +0000 UTC m=+34.692016296" watchObservedRunningTime="2026-04-17 10:18:38.033413926 +0000 UTC m=+34.692167988" Apr 17 10:18:38.114413 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.114345 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4cac3107-7535-4daf-bf6b-d5bf95844303-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2t59m\" (UID: \"4cac3107-7535-4daf-bf6b-d5bf95844303\") " pod="openshift-multus/multus-additional-cni-plugins-2t59m" Apr 17 10:18:38.114413 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.114383 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-etc-kubernetes\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.114413 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.114399 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-etc-systemd\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.114565 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.114452 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-etc-systemd\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.114565 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.114510 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-etc-tuned\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.114565 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.114512 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-etc-kubernetes\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.114565 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.114538 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-ovnkube-config\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.114715 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.114565 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-etc-sysconfig\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.114715 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.114589 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-host-var-lib-cni-bin\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.114715 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.114612 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-etc-kubernetes\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.114715 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.114635 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-run-ovn\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.114715 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.114658 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-multus-conf-dir\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.114715 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.114676 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-etc-sysconfig\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.114715 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.114681 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-host-run-ovn-kubernetes\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.114715 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.114697 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-host-var-lib-cni-bin\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.115102 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.114720 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-host-run-ovn-kubernetes\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.115102 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.114706 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-etc-kubernetes\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.115102 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.114732 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-host-cni-bin\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.115102 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.114750 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-multus-conf-dir\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.115102 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.114760 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-run-ovn\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.115102 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.114759 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wv86d\" 
(UniqueName: \"kubernetes.io/projected/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-kube-api-access-wv86d\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.115102 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.114788 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/083d1f1c-be08-410d-a728-2affe73763a9-serviceca\") pod \"node-ca-22g8b\" (UID: \"083d1f1c-be08-410d-a728-2affe73763a9\") " pod="openshift-image-registry/node-ca-22g8b" Apr 17 10:18:38.115102 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.114812 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f-registration-dir\") pod \"aws-ebs-csi-driver-node-8tvl6\" (UID: \"17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8tvl6" Apr 17 10:18:38.115102 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.114806 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-host-cni-bin\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.115102 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.114847 2569 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Apr 17 10:18:38.115102 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.114929 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f-registration-dir\") pod \"aws-ebs-csi-driver-node-8tvl6\" (UID: \"17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8tvl6" Apr 17 10:18:38.115102 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.114843 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/4cac3107-7535-4daf-bf6b-d5bf95844303-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-2t59m\" (UID: \"4cac3107-7535-4daf-bf6b-d5bf95844303\") " pod="openshift-multus/multus-additional-cni-plugins-2t59m" Apr 17 10:18:38.115102 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.114947 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4cac3107-7535-4daf-bf6b-d5bf95844303-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2t59m\" (UID: \"4cac3107-7535-4daf-bf6b-d5bf95844303\") " pod="openshift-multus/multus-additional-cni-plugins-2t59m" Apr 17 10:18:38.115102 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.114979 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/aaa97102-f10d-49b4-83af-c47d0b2cd496-iptables-alerter-script\") pod \"iptables-alerter-vtt5p\" (UID: \"aaa97102-f10d-49b4-83af-c47d0b2cd496\") " pod="openshift-network-operator/iptables-alerter-vtt5p" Apr 17 10:18:38.115102 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115003 2569 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/db889039-4b7b-4564-b656-afd928d6bcbd-multus-daemon-config\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.115102 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115028 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-host-slash\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.115102 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115051 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-node-log\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.115102 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115075 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/083d1f1c-be08-410d-a728-2affe73763a9-host\") pod \"node-ca-22g8b\" (UID: \"083d1f1c-be08-410d-a728-2affe73763a9\") " pod="openshift-image-registry/node-ca-22g8b" Apr 17 10:18:38.115934 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115098 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7bjs4\" (UniqueName: \"kubernetes.io/projected/4cac3107-7535-4daf-bf6b-d5bf95844303-kube-api-access-7bjs4\") pod \"multus-additional-cni-plugins-2t59m\" (UID: \"4cac3107-7535-4daf-bf6b-d5bf95844303\") " pod="openshift-multus/multus-additional-cni-plugins-2t59m" Apr 17 10:18:38.115934 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115124 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-multus-cni-dir\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.115934 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115146 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-lib-modules\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.115934 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115169 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-ovnkube-config\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.115934 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115213 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-node-log\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.115934 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115222 2569 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-host\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.115934 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115255 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/083d1f1c-be08-410d-a728-2affe73763a9-host\") pod \"node-ca-22g8b\" (UID: \"083d1f1c-be08-410d-a728-2affe73763a9\") " pod="openshift-image-registry/node-ca-22g8b" Apr 17 10:18:38.115934 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115170 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-host\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.115934 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115292 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-host-slash\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.115934 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115294 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-os-release\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.115934 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115295 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/4cac3107-7535-4daf-bf6b-d5bf95844303-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-2t59m\" (UID: \"4cac3107-7535-4daf-bf6b-d5bf95844303\") " pod="openshift-multus/multus-additional-cni-plugins-2t59m" Apr 17 10:18:38.115934 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115321 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-run-openvswitch\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.115934 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115343 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-os-release\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.115934 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115350 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f-kubelet-dir\") pod \"aws-ebs-csi-driver-node-8tvl6\" (UID: \"17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8tvl6" Apr 17 10:18:38.115934 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115395 2569 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4cac3107-7535-4daf-bf6b-d5bf95844303-os-release\") pod \"multus-additional-cni-plugins-2t59m\" (UID: \"4cac3107-7535-4daf-bf6b-d5bf95844303\") " pod="openshift-multus/multus-additional-cni-plugins-2t59m" Apr 17 10:18:38.115934 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115400 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-multus-cni-dir\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.115934 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115430 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56255f22-7072-487b-8723-978c296878fb-metrics-certs\") pod \"network-metrics-daemon-z6grc\" (UID: \"56255f22-7072-487b-8723-978c296878fb\") " pod="openshift-multus/network-metrics-daemon-z6grc" Apr 17 10:18:38.115934 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115445 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4cac3107-7535-4daf-bf6b-d5bf95844303-os-release\") pod \"multus-additional-cni-plugins-2t59m\" (UID: \"4cac3107-7535-4daf-bf6b-d5bf95844303\") " pod="openshift-multus/multus-additional-cni-plugins-2t59m" Apr 17 10:18:38.116745 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115453 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pwbn6\" (UniqueName: \"kubernetes.io/projected/083d1f1c-be08-410d-a728-2affe73763a9-kube-api-access-pwbn6\") pod \"node-ca-22g8b\" (UID: \"083d1f1c-be08-410d-a728-2affe73763a9\") " pod="openshift-image-registry/node-ca-22g8b" Apr 17 10:18:38.116745 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115473 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-run\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.116745 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115486 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-run-openvswitch\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.116745 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115495 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-host-var-lib-kubelet\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.116745 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115521 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-var-lib-openvswitch\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.116745 ip-10-0-136-48 kubenswrapper[2569]: 
I0417 10:18:38.115530 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f-kubelet-dir\") pod \"aws-ebs-csi-driver-node-8tvl6\" (UID: \"17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8tvl6" Apr 17 10:18:38.116745 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115544 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vdxnm\" (UniqueName: \"kubernetes.io/projected/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-kube-api-access-vdxnm\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.116745 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115566 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-multus-socket-dir-parent\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.116745 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115589 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/b43f5610-f9dd-49c2-9de2-5c1cca09f0d6-agent-certs\") pod \"konnectivity-agent-zm8zp\" (UID: \"b43f5610-f9dd-49c2-9de2-5c1cca09f0d6\") " pod="kube-system/konnectivity-agent-zm8zp" Apr 17 10:18:38.116745 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115590 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/083d1f1c-be08-410d-a728-2affe73763a9-serviceca\") pod \"node-ca-22g8b\" (UID: \"083d1f1c-be08-410d-a728-2affe73763a9\") " pod="openshift-image-registry/node-ca-22g8b" Apr 17 10:18:38.116745 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115602 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/db889039-4b7b-4564-b656-afd928d6bcbd-multus-daemon-config\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.116745 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115610 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/b43f5610-f9dd-49c2-9de2-5c1cca09f0d6-konnectivity-ca\") pod \"konnectivity-agent-zm8zp\" (UID: \"b43f5610-f9dd-49c2-9de2-5c1cca09f0d6\") " pod="kube-system/konnectivity-agent-zm8zp" Apr 17 10:18:38.116745 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115633 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-lib-modules\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.116745 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115641 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-host-var-lib-kubelet\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.116745 ip-10-0-136-48 
kubenswrapper[2569]: I0417 10:18:38.115586 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/aaa97102-f10d-49b4-83af-c47d0b2cd496-iptables-alerter-script\") pod \"iptables-alerter-vtt5p\" (UID: \"aaa97102-f10d-49b4-83af-c47d0b2cd496\") " pod="openshift-network-operator/iptables-alerter-vtt5p" Apr 17 10:18:38.116745 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115653 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-host-run-netns\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.116745 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115685 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-run-systemd\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.116745 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:38.115711 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 10:18:38.117572 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115741 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-run-systemd\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.117572 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115750 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-host-cni-netd\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.117572 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115688 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-multus-socket-dir-parent\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.117572 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115712 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-host-cni-netd\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.117572 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:38.115796 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56255f22-7072-487b-8723-978c296878fb-metrics-certs podName:56255f22-7072-487b-8723-978c296878fb nodeName:}" failed. No retries permitted until 2026-04-17 10:18:38.615748102 +0000 UTC m=+35.274502143 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/56255f22-7072-487b-8723-978c296878fb-metrics-certs") pod "network-metrics-daemon-z6grc" (UID: "56255f22-7072-487b-8723-978c296878fb") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 10:18:38.117572 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115795 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-host-run-netns\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.117572 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115810 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-var-lib-openvswitch\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.117572 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115869 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.117572 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115882 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-run\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.117572 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115899 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f-device-dir\") pod \"aws-ebs-csi-driver-node-8tvl6\" (UID: \"17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8tvl6" Apr 17 10:18:38.117572 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115926 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.117572 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115941 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-etc-modprobe-d\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.117572 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115959 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f-device-dir\") pod \"aws-ebs-csi-driver-node-8tvl6\" (UID: \"17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f\") " 
pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8tvl6" Apr 17 10:18:38.117572 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115970 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/db889039-4b7b-4564-b656-afd928d6bcbd-cni-binary-copy\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.117572 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.115998 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5w75n\" (UniqueName: \"kubernetes.io/projected/db889039-4b7b-4564-b656-afd928d6bcbd-kube-api-access-5w75n\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.117572 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116058 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-etc-modprobe-d\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.118296 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116110 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-env-overrides\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.118296 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116152 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-474rc\" (UniqueName: \"kubernetes.io/projected/56255f22-7072-487b-8723-978c296878fb-kube-api-access-474rc\") pod \"network-metrics-daemon-z6grc\" (UID: \"56255f22-7072-487b-8723-978c296878fb\") " pod="openshift-multus/network-metrics-daemon-z6grc" Apr 17 10:18:38.118296 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116178 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f-sys-fs\") pod \"aws-ebs-csi-driver-node-8tvl6\" (UID: \"17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8tvl6" Apr 17 10:18:38.118296 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116180 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/b43f5610-f9dd-49c2-9de2-5c1cca09f0d6-konnectivity-ca\") pod \"konnectivity-agent-zm8zp\" (UID: \"b43f5610-f9dd-49c2-9de2-5c1cca09f0d6\") " pod="kube-system/konnectivity-agent-zm8zp" Apr 17 10:18:38.118296 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116201 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-etc-sysctl-d\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.118296 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116225 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-etc-sysctl-conf\") pod 
\"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.118296 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116247 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f-sys-fs\") pod \"aws-ebs-csi-driver-node-8tvl6\" (UID: \"17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8tvl6" Apr 17 10:18:38.118296 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116252 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-system-cni-dir\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.118296 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116299 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-system-cni-dir\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.118296 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116305 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-etc-openvswitch\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.118296 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116332 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4cac3107-7535-4daf-bf6b-d5bf95844303-system-cni-dir\") pod \"multus-additional-cni-plugins-2t59m\" (UID: \"4cac3107-7535-4daf-bf6b-d5bf95844303\") " pod="openshift-multus/multus-additional-cni-plugins-2t59m" Apr 17 10:18:38.118296 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116375 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d6852\" (UniqueName: \"kubernetes.io/projected/aaa97102-f10d-49b4-83af-c47d0b2cd496-kube-api-access-d6852\") pod \"iptables-alerter-vtt5p\" (UID: \"aaa97102-f10d-49b4-83af-c47d0b2cd496\") " pod="openshift-network-operator/iptables-alerter-vtt5p" Apr 17 10:18:38.118296 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116399 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-sys\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.118296 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116404 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-etc-openvswitch\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.118296 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116423 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-host-run-k8s-cni-cncf-io\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.118296 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116435 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-etc-sysctl-conf\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.118296 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116481 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-systemd-units\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.119125 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116487 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4cac3107-7535-4daf-bf6b-d5bf95844303-system-cni-dir\") pod \"multus-additional-cni-plugins-2t59m\" (UID: \"4cac3107-7535-4daf-bf6b-d5bf95844303\") " pod="openshift-multus/multus-additional-cni-plugins-2t59m" Apr 17 10:18:38.119125 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116440 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-systemd-units\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.119125 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116375 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-etc-sysctl-d\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.119125 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116514 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/90ac1d6e-66e2-4de9-8433-b5d2a8895e80-tmp-dir\") pod \"node-resolver-ksqkq\" (UID: \"90ac1d6e-66e2-4de9-8433-b5d2a8895e80\") " pod="openshift-dns/node-resolver-ksqkq" Apr 17 10:18:38.119125 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116542 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gfsfh\" (UniqueName: \"kubernetes.io/projected/90ac1d6e-66e2-4de9-8433-b5d2a8895e80-kube-api-access-gfsfh\") pod \"node-resolver-ksqkq\" (UID: \"90ac1d6e-66e2-4de9-8433-b5d2a8895e80\") " pod="openshift-dns/node-resolver-ksqkq" Apr 17 10:18:38.119125 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116546 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-env-overrides\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.119125 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116563 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-tmp\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.119125 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116589 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-hostroot\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.119125 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116591 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-host-run-k8s-cni-cncf-io\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.119125 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116602 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-sys\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.119125 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116604 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4cac3107-7535-4daf-bf6b-d5bf95844303-cni-binary-copy\") pod \"multus-additional-cni-plugins-2t59m\" (UID: \"4cac3107-7535-4daf-bf6b-d5bf95844303\") " pod="openshift-multus/multus-additional-cni-plugins-2t59m" Apr 17 10:18:38.119125 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116642 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-host-run-netns\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.119125 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116658 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-host-kubelet\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.119125 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116673 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-log-socket\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.119125 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116688 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-ovnkube-script-lib\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.119125 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116717 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/90ac1d6e-66e2-4de9-8433-b5d2a8895e80-hosts-file\") pod \"node-resolver-ksqkq\" (UID: \"90ac1d6e-66e2-4de9-8433-b5d2a8895e80\") " pod="openshift-dns/node-resolver-ksqkq" Apr 17 10:18:38.119125 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116743 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4cac3107-7535-4daf-bf6b-d5bf95844303-cnibin\") pod \"multus-additional-cni-plugins-2t59m\" (UID: \"4cac3107-7535-4daf-bf6b-d5bf95844303\") " pod="openshift-multus/multus-additional-cni-plugins-2t59m" Apr 17 10:18:38.119125 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116789 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-host-kubelet\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.119904 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116788 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-hostroot\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.119904 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116838 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-host-run-netns\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.119904 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116877 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-log-socket\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.119904 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116905 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f-socket-dir\") pod \"aws-ebs-csi-driver-node-8tvl6\" (UID: \"17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8tvl6" Apr 17 10:18:38.119904 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116932 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4cac3107-7535-4daf-bf6b-d5bf95844303-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2t59m\" (UID: \"4cac3107-7535-4daf-bf6b-d5bf95844303\") " pod="openshift-multus/multus-additional-cni-plugins-2t59m" Apr 17 10:18:38.119904 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116956 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-var-lib-kubelet\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.119904 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.116982 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" 
(UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-cnibin\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.119904 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.117005 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-host-var-lib-cni-multus\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.119904 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.117029 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-host-run-multus-certs\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.119904 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.117067 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aaa97102-f10d-49b4-83af-c47d0b2cd496-host-slash\") pod \"iptables-alerter-vtt5p\" (UID: \"aaa97102-f10d-49b4-83af-c47d0b2cd496\") " pod="openshift-network-operator/iptables-alerter-vtt5p" Apr 17 10:18:38.119904 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.117096 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-ovn-node-metrics-cert\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.119904 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.117099 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4cac3107-7535-4daf-bf6b-d5bf95844303-cnibin\") pod \"multus-additional-cni-plugins-2t59m\" (UID: \"4cac3107-7535-4daf-bf6b-d5bf95844303\") " pod="openshift-multus/multus-additional-cni-plugins-2t59m" Apr 17 10:18:38.119904 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.117118 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rttp8\" (UniqueName: \"kubernetes.io/projected/1abeaef1-047c-4fff-a659-456e05294f94-kube-api-access-rttp8\") pod \"network-check-target-gftlr\" (UID: \"1abeaef1-047c-4fff-a659-456e05294f94\") " pod="openshift-network-diagnostics/network-check-target-gftlr" Apr 17 10:18:38.119904 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.117157 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7d8z9\" (UniqueName: \"kubernetes.io/projected/17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f-kube-api-access-7d8z9\") pod \"aws-ebs-csi-driver-node-8tvl6\" (UID: \"17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8tvl6" Apr 17 10:18:38.119904 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.117165 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4cac3107-7535-4daf-bf6b-d5bf95844303-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2t59m\" (UID: \"4cac3107-7535-4daf-bf6b-d5bf95844303\") " pod="openshift-multus/multus-additional-cni-plugins-2t59m" Apr 17 10:18:38.119904 ip-10-0-136-48 
kubenswrapper[2569]: I0417 10:18:38.117187 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aaa97102-f10d-49b4-83af-c47d0b2cd496-host-slash\") pod \"iptables-alerter-vtt5p\" (UID: \"aaa97102-f10d-49b4-83af-c47d0b2cd496\") " pod="openshift-network-operator/iptables-alerter-vtt5p" Apr 17 10:18:38.119904 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.117190 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-host-var-lib-cni-multus\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.120634 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.117187 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f-etc-selinux\") pod \"aws-ebs-csi-driver-node-8tvl6\" (UID: \"17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8tvl6" Apr 17 10:18:38.120634 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.117258 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f-etc-selinux\") pod \"aws-ebs-csi-driver-node-8tvl6\" (UID: \"17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8tvl6" Apr 17 10:18:38.120634 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.117273 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f-socket-dir\") pod \"aws-ebs-csi-driver-node-8tvl6\" (UID: \"17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8tvl6" Apr 17 10:18:38.120634 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.117293 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-var-lib-kubelet\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.120634 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.117323 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/90ac1d6e-66e2-4de9-8433-b5d2a8895e80-hosts-file\") pod \"node-resolver-ksqkq\" (UID: \"90ac1d6e-66e2-4de9-8433-b5d2a8895e80\") " pod="openshift-dns/node-resolver-ksqkq" Apr 17 10:18:38.120634 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.117690 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4cac3107-7535-4daf-bf6b-d5bf95844303-cni-binary-copy\") pod \"multus-additional-cni-plugins-2t59m\" (UID: \"4cac3107-7535-4daf-bf6b-d5bf95844303\") " pod="openshift-multus/multus-additional-cni-plugins-2t59m" Apr 17 10:18:38.120634 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.117936 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-cnibin\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" 
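[Aside, not part of the captured journal: the MountVolume entries above follow a consistent pattern in which reconciler_common.go logs "operationExecutor.MountVolume started" and operation_generator.go later logs "MountVolume.SetUp succeeded", both keyed by the volume's UniqueName. A minimal, illustrative Go sketch of cross-checking those pairs from this kind of journal output is shown below; it is only a sketch under the assumption that the log text is piped in on stdin (for example from journalctl -u kubelet), and the file name mountpair.go is hypothetical.]

    // mountpair.go: pair kubelet "MountVolume started" entries with the matching
    // "MountVolume.SetUp succeeded" entries by UniqueName and report volumes that
    // were started but never reported success (e.g. metrics-certs below).
    // Illustrative sketch only; assumes journal text on stdin.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    func main() {
        // UniqueName appears either escaped (\"kubernetes.io/...\") or plain ("kubernetes.io/...").
        uniqueName := regexp.MustCompile(`UniqueName: \\?"([^"\\]+)\\?"`)
        started := map[string]bool{}
        succeeded := map[string]bool{}

        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            line := sc.Text()
            m := uniqueName.FindStringSubmatch(line)
            if m == nil {
                continue
            }
            switch {
            case strings.Contains(line, "operationExecutor.MountVolume started"):
                started[m[1]] = true
            case strings.Contains(line, "MountVolume.SetUp succeeded"):
                succeeded[m[1]] = true
            }
        }
        for name := range started {
            if !succeeded[name] {
                fmt.Println("no SetUp success seen for:", name)
            }
        }
    }

[Usage under the same assumptions: journalctl -u kubelet --no-pager | go run mountpair.go. On this log it would single out the projected kube-api-access-rttp8 and secret metrics-certs volumes, which keep failing below because their source objects are not yet registered.]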
Apr 17 10:18:38.120634 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.117973 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/db889039-4b7b-4564-b656-afd928d6bcbd-host-run-multus-certs\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.120634 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.118073 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-ovnkube-script-lib\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.120634 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.118116 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/db889039-4b7b-4564-b656-afd928d6bcbd-cni-binary-copy\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.120634 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.118487 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/90ac1d6e-66e2-4de9-8433-b5d2a8895e80-tmp-dir\") pod \"node-resolver-ksqkq\" (UID: \"90ac1d6e-66e2-4de9-8433-b5d2a8895e80\") " pod="openshift-dns/node-resolver-ksqkq" Apr 17 10:18:38.120634 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.119951 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-tmp\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.120634 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.120147 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-etc-tuned\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.120634 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.120229 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/b43f5610-f9dd-49c2-9de2-5c1cca09f0d6-agent-certs\") pod \"konnectivity-agent-zm8zp\" (UID: \"b43f5610-f9dd-49c2-9de2-5c1cca09f0d6\") " pod="kube-system/konnectivity-agent-zm8zp" Apr 17 10:18:38.120634 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.120309 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-ovn-node-metrics-cert\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.125606 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:38.125584 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 17 10:18:38.125606 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:38.125607 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 17 10:18:38.125767 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:38.125618 2569 projected.go:194] Error preparing data for projected volume kube-api-access-rttp8 for pod openshift-network-diagnostics/network-check-target-gftlr: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 10:18:38.125767 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:38.125696 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1abeaef1-047c-4fff-a659-456e05294f94-kube-api-access-rttp8 podName:1abeaef1-047c-4fff-a659-456e05294f94 nodeName:}" failed. No retries permitted until 2026-04-17 10:18:38.625673087 +0000 UTC m=+35.284427141 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rttp8" (UniqueName: "kubernetes.io/projected/1abeaef1-047c-4fff-a659-456e05294f94-kube-api-access-rttp8") pod "network-check-target-gftlr" (UID: "1abeaef1-047c-4fff-a659-456e05294f94") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 10:18:38.126859 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.126808 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv86d\" (UniqueName: \"kubernetes.io/projected/2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5-kube-api-access-wv86d\") pod \"ovnkube-node-qshzm\" (UID: \"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5\") " pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.127377 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.127318 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdxnm\" (UniqueName: \"kubernetes.io/projected/83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e-kube-api-access-vdxnm\") pod \"tuned-hlkpd\" (UID: \"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e\") " pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.127883 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.127839 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6852\" (UniqueName: \"kubernetes.io/projected/aaa97102-f10d-49b4-83af-c47d0b2cd496-kube-api-access-d6852\") pod \"iptables-alerter-vtt5p\" (UID: \"aaa97102-f10d-49b4-83af-c47d0b2cd496\") " pod="openshift-network-operator/iptables-alerter-vtt5p" Apr 17 10:18:38.128644 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.128517 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwbn6\" (UniqueName: \"kubernetes.io/projected/083d1f1c-be08-410d-a728-2affe73763a9-kube-api-access-pwbn6\") pod \"node-ca-22g8b\" (UID: \"083d1f1c-be08-410d-a728-2affe73763a9\") " pod="openshift-image-registry/node-ca-22g8b" Apr 17 10:18:38.128790 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.128765 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-474rc\" (UniqueName: \"kubernetes.io/projected/56255f22-7072-487b-8723-978c296878fb-kube-api-access-474rc\") pod \"network-metrics-daemon-z6grc\" (UID: \"56255f22-7072-487b-8723-978c296878fb\") " pod="openshift-multus/network-metrics-daemon-z6grc" Apr 17 10:18:38.128901 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.128884 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d8z9\" (UniqueName: 
\"kubernetes.io/projected/17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f-kube-api-access-7d8z9\") pod \"aws-ebs-csi-driver-node-8tvl6\" (UID: \"17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8tvl6" Apr 17 10:18:38.129154 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.129135 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfsfh\" (UniqueName: \"kubernetes.io/projected/90ac1d6e-66e2-4de9-8433-b5d2a8895e80-kube-api-access-gfsfh\") pod \"node-resolver-ksqkq\" (UID: \"90ac1d6e-66e2-4de9-8433-b5d2a8895e80\") " pod="openshift-dns/node-resolver-ksqkq" Apr 17 10:18:38.129154 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.129142 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bjs4\" (UniqueName: \"kubernetes.io/projected/4cac3107-7535-4daf-bf6b-d5bf95844303-kube-api-access-7bjs4\") pod \"multus-additional-cni-plugins-2t59m\" (UID: \"4cac3107-7535-4daf-bf6b-d5bf95844303\") " pod="openshift-multus/multus-additional-cni-plugins-2t59m" Apr 17 10:18:38.129493 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.129478 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w75n\" (UniqueName: \"kubernetes.io/projected/db889039-4b7b-4564-b656-afd928d6bcbd-kube-api-access-5w75n\") pod \"multus-kwrj6\" (UID: \"db889039-4b7b-4564-b656-afd928d6bcbd\") " pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.146293 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.146267 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-kwrj6" Apr 17 10:18:38.151849 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.151831 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-zm8zp" Apr 17 10:18:38.152847 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:38.152829 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb889039_4b7b_4564_b656_afd928d6bcbd.slice/crio-cf6f0d920b64df1b4030d0cf27006e714ea2b35ab4824f3650c9485f0520c43c WatchSource:0}: Error finding container cf6f0d920b64df1b4030d0cf27006e714ea2b35ab4824f3650c9485f0520c43c: Status 404 returned error can't find the container with id cf6f0d920b64df1b4030d0cf27006e714ea2b35ab4824f3650c9485f0520c43c Apr 17 10:18:38.157417 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:38.157398 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb43f5610_f9dd_49c2_9de2_5c1cca09f0d6.slice/crio-de5742a77d8689cca1cd5acbde8b32bde3296cd7661a1814dda82da088265c3c WatchSource:0}: Error finding container de5742a77d8689cca1cd5acbde8b32bde3296cd7661a1814dda82da088265c3c: Status 404 returned error can't find the container with id de5742a77d8689cca1cd5acbde8b32bde3296cd7661a1814dda82da088265c3c Apr 17 10:18:38.158990 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.158946 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8tvl6" Apr 17 10:18:38.164164 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.164144 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:18:38.166529 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:38.166500 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17dd506b_8cdc_44d7_9c7d_6ae2a2084b0f.slice/crio-718d6a356ad7fd58b8503b7d4ebcdf785b94962ca6619409abbb5d8c0b90f403 WatchSource:0}: Error finding container 718d6a356ad7fd58b8503b7d4ebcdf785b94962ca6619409abbb5d8c0b90f403: Status 404 returned error can't find the container with id 718d6a356ad7fd58b8503b7d4ebcdf785b94962ca6619409abbb5d8c0b90f403 Apr 17 10:18:38.170922 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.170900 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-2t59m" Apr 17 10:18:38.171122 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:38.171106 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b7b4d04_9c9a_4878_81fa_d9c6f965d3a5.slice/crio-ef0a4cec415dc763640458093ba5774cc596fb77a9604f0d420c2c59d9b9eee7 WatchSource:0}: Error finding container ef0a4cec415dc763640458093ba5774cc596fb77a9604f0d420c2c59d9b9eee7: Status 404 returned error can't find the container with id ef0a4cec415dc763640458093ba5774cc596fb77a9604f0d420c2c59d9b9eee7 Apr 17 10:18:38.176251 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.176233 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-vtt5p" Apr 17 10:18:38.176409 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:38.176391 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cac3107_7535_4daf_bf6b_d5bf95844303.slice/crio-a5876a1c347ec3eaa1f5b23b2dea14b59c596ed7ece7976da6fc9c9082a6ce11 WatchSource:0}: Error finding container a5876a1c347ec3eaa1f5b23b2dea14b59c596ed7ece7976da6fc9c9082a6ce11: Status 404 returned error can't find the container with id a5876a1c347ec3eaa1f5b23b2dea14b59c596ed7ece7976da6fc9c9082a6ce11 Apr 17 10:18:38.181075 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.181059 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-ksqkq" Apr 17 10:18:38.182036 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:38.182018 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaaa97102_f10d_49b4_83af_c47d0b2cd496.slice/crio-b241538e42d11e6951ac7ec4c4116427dd01b181bb821073faedeab0b1d191a3 WatchSource:0}: Error finding container b241538e42d11e6951ac7ec4c4116427dd01b181bb821073faedeab0b1d191a3: Status 404 returned error can't find the container with id b241538e42d11e6951ac7ec4c4116427dd01b181bb821073faedeab0b1d191a3 Apr 17 10:18:38.186323 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.186304 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-22g8b" Apr 17 10:18:38.186596 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:38.186572 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90ac1d6e_66e2_4de9_8433_b5d2a8895e80.slice/crio-b8315c86560958954500142b66190c8390158e63c6a8b6db1826f633fe04e267 WatchSource:0}: Error finding container b8315c86560958954500142b66190c8390158e63c6a8b6db1826f633fe04e267: Status 404 returned error can't find the container with id b8315c86560958954500142b66190c8390158e63c6a8b6db1826f633fe04e267 Apr 17 10:18:38.190200 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.190176 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" Apr 17 10:18:38.194448 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:38.194429 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod083d1f1c_be08_410d_a728_2affe73763a9.slice/crio-a639bd6bd86fc6fc901f78b50b91c1852d7966cbb23f642433c9a697a500af1c WatchSource:0}: Error finding container a639bd6bd86fc6fc901f78b50b91c1852d7966cbb23f642433c9a697a500af1c: Status 404 returned error can't find the container with id a639bd6bd86fc6fc901f78b50b91c1852d7966cbb23f642433c9a697a500af1c Apr 17 10:18:38.197331 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:38.197311 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83c3ab0b_7fc8_4489_9ad8_7ac887fbde2e.slice/crio-574eaafbe88a1714802505615ae6e50e42f5f860479a54cba801319be94794a5 WatchSource:0}: Error finding container 574eaafbe88a1714802505615ae6e50e42f5f860479a54cba801319be94794a5: Status 404 returned error can't find the container with id 574eaafbe88a1714802505615ae6e50e42f5f860479a54cba801319be94794a5 Apr 17 10:18:38.621707 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.621672 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56255f22-7072-487b-8723-978c296878fb-metrics-certs\") pod \"network-metrics-daemon-z6grc\" (UID: \"56255f22-7072-487b-8723-978c296878fb\") " pod="openshift-multus/network-metrics-daemon-z6grc" Apr 17 10:18:38.621879 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:38.621826 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 10:18:38.621945 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:38.621918 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56255f22-7072-487b-8723-978c296878fb-metrics-certs podName:56255f22-7072-487b-8723-978c296878fb nodeName:}" failed. No retries permitted until 2026-04-17 10:18:39.621899476 +0000 UTC m=+36.280653521 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/56255f22-7072-487b-8723-978c296878fb-metrics-certs") pod "network-metrics-daemon-z6grc" (UID: "56255f22-7072-487b-8723-978c296878fb") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 10:18:38.724852 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:38.722177 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rttp8\" (UniqueName: \"kubernetes.io/projected/1abeaef1-047c-4fff-a659-456e05294f94-kube-api-access-rttp8\") pod \"network-check-target-gftlr\" (UID: \"1abeaef1-047c-4fff-a659-456e05294f94\") " pod="openshift-network-diagnostics/network-check-target-gftlr" Apr 17 10:18:38.724852 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:38.722376 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 17 10:18:38.724852 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:38.722395 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 17 10:18:38.724852 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:38.722409 2569 projected.go:194] Error preparing data for projected volume kube-api-access-rttp8 for pod openshift-network-diagnostics/network-check-target-gftlr: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 10:18:38.724852 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:38.722464 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1abeaef1-047c-4fff-a659-456e05294f94-kube-api-access-rttp8 podName:1abeaef1-047c-4fff-a659-456e05294f94 nodeName:}" failed. No retries permitted until 2026-04-17 10:18:39.72244545 +0000 UTC m=+36.381199494 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rttp8" (UniqueName: "kubernetes.io/projected/1abeaef1-047c-4fff-a659-456e05294f94-kube-api-access-rttp8") pod "network-check-target-gftlr" (UID: "1abeaef1-047c-4fff-a659-456e05294f94") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 10:18:39.032916 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:39.032875 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" event={"ID":"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5","Type":"ContainerStarted","Data":"ef0a4cec415dc763640458093ba5774cc596fb77a9604f0d420c2c59d9b9eee7"} Apr 17 10:18:39.051285 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:39.051253 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8tvl6" event={"ID":"17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f","Type":"ContainerStarted","Data":"718d6a356ad7fd58b8503b7d4ebcdf785b94962ca6619409abbb5d8c0b90f403"} Apr 17 10:18:39.060742 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:39.060716 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-zm8zp" event={"ID":"b43f5610-f9dd-49c2-9de2-5c1cca09f0d6","Type":"ContainerStarted","Data":"de5742a77d8689cca1cd5acbde8b32bde3296cd7661a1814dda82da088265c3c"} Apr 17 10:18:39.072328 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:39.072303 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kwrj6" event={"ID":"db889039-4b7b-4564-b656-afd928d6bcbd","Type":"ContainerStarted","Data":"cf6f0d920b64df1b4030d0cf27006e714ea2b35ab4824f3650c9485f0520c43c"} Apr 17 10:18:39.079424 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:39.079391 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-22g8b" event={"ID":"083d1f1c-be08-410d-a728-2affe73763a9","Type":"ContainerStarted","Data":"a639bd6bd86fc6fc901f78b50b91c1852d7966cbb23f642433c9a697a500af1c"} Apr 17 10:18:39.102206 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:39.102182 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-ksqkq" event={"ID":"90ac1d6e-66e2-4de9-8433-b5d2a8895e80","Type":"ContainerStarted","Data":"b8315c86560958954500142b66190c8390158e63c6a8b6db1826f633fe04e267"} Apr 17 10:18:39.105803 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:39.105752 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-vtt5p" event={"ID":"aaa97102-f10d-49b4-83af-c47d0b2cd496","Type":"ContainerStarted","Data":"b241538e42d11e6951ac7ec4c4116427dd01b181bb821073faedeab0b1d191a3"} Apr 17 10:18:39.116642 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:39.116620 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" event={"ID":"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e","Type":"ContainerStarted","Data":"574eaafbe88a1714802505615ae6e50e42f5f860479a54cba801319be94794a5"} Apr 17 10:18:39.121925 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:39.121903 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2t59m" event={"ID":"4cac3107-7535-4daf-bf6b-d5bf95844303","Type":"ContainerStarted","Data":"a5876a1c347ec3eaa1f5b23b2dea14b59c596ed7ece7976da6fc9c9082a6ce11"} Apr 17 10:18:39.629737 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:39.629700 2569 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56255f22-7072-487b-8723-978c296878fb-metrics-certs\") pod \"network-metrics-daemon-z6grc\" (UID: \"56255f22-7072-487b-8723-978c296878fb\") " pod="openshift-multus/network-metrics-daemon-z6grc" Apr 17 10:18:39.629917 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:39.629895 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 10:18:39.629990 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:39.629967 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56255f22-7072-487b-8723-978c296878fb-metrics-certs podName:56255f22-7072-487b-8723-978c296878fb nodeName:}" failed. No retries permitted until 2026-04-17 10:18:41.629947512 +0000 UTC m=+38.288701568 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/56255f22-7072-487b-8723-978c296878fb-metrics-certs") pod "network-metrics-daemon-z6grc" (UID: "56255f22-7072-487b-8723-978c296878fb") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 10:18:39.730629 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:39.730598 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rttp8\" (UniqueName: \"kubernetes.io/projected/1abeaef1-047c-4fff-a659-456e05294f94-kube-api-access-rttp8\") pod \"network-check-target-gftlr\" (UID: \"1abeaef1-047c-4fff-a659-456e05294f94\") " pod="openshift-network-diagnostics/network-check-target-gftlr" Apr 17 10:18:39.730800 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:39.730745 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 17 10:18:39.730800 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:39.730761 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 17 10:18:39.730800 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:39.730773 2569 projected.go:194] Error preparing data for projected volume kube-api-access-rttp8 for pod openshift-network-diagnostics/network-check-target-gftlr: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 10:18:39.730955 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:39.730824 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1abeaef1-047c-4fff-a659-456e05294f94-kube-api-access-rttp8 podName:1abeaef1-047c-4fff-a659-456e05294f94 nodeName:}" failed. No retries permitted until 2026-04-17 10:18:41.730807049 +0000 UTC m=+38.389561093 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-rttp8" (UniqueName: "kubernetes.io/projected/1abeaef1-047c-4fff-a659-456e05294f94-kube-api-access-rttp8") pod "network-check-target-gftlr" (UID: "1abeaef1-047c-4fff-a659-456e05294f94") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 10:18:39.966568 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:39.966494 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-z6grc" Apr 17 10:18:39.966727 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:39.966637 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z6grc" podUID="56255f22-7072-487b-8723-978c296878fb" Apr 17 10:18:39.966727 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:39.966688 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gftlr" Apr 17 10:18:39.966854 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:39.966796 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-gftlr" podUID="1abeaef1-047c-4fff-a659-456e05294f94" Apr 17 10:18:41.647516 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:41.646891 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56255f22-7072-487b-8723-978c296878fb-metrics-certs\") pod \"network-metrics-daemon-z6grc\" (UID: \"56255f22-7072-487b-8723-978c296878fb\") " pod="openshift-multus/network-metrics-daemon-z6grc" Apr 17 10:18:41.647516 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:41.647104 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 10:18:41.647516 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:41.647162 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56255f22-7072-487b-8723-978c296878fb-metrics-certs podName:56255f22-7072-487b-8723-978c296878fb nodeName:}" failed. No retries permitted until 2026-04-17 10:18:45.64714419 +0000 UTC m=+42.305898235 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/56255f22-7072-487b-8723-978c296878fb-metrics-certs") pod "network-metrics-daemon-z6grc" (UID: "56255f22-7072-487b-8723-978c296878fb") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 10:18:41.748210 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:41.748173 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rttp8\" (UniqueName: \"kubernetes.io/projected/1abeaef1-047c-4fff-a659-456e05294f94-kube-api-access-rttp8\") pod \"network-check-target-gftlr\" (UID: \"1abeaef1-047c-4fff-a659-456e05294f94\") " pod="openshift-network-diagnostics/network-check-target-gftlr" Apr 17 10:18:41.748417 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:41.748349 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 17 10:18:41.748417 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:41.748392 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 17 10:18:41.748417 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:41.748404 2569 projected.go:194] Error preparing data for projected volume kube-api-access-rttp8 for pod openshift-network-diagnostics/network-check-target-gftlr: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 10:18:41.748600 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:41.748482 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1abeaef1-047c-4fff-a659-456e05294f94-kube-api-access-rttp8 podName:1abeaef1-047c-4fff-a659-456e05294f94 nodeName:}" failed. No retries permitted until 2026-04-17 10:18:45.748452825 +0000 UTC m=+42.407206876 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-rttp8" (UniqueName: "kubernetes.io/projected/1abeaef1-047c-4fff-a659-456e05294f94-kube-api-access-rttp8") pod "network-check-target-gftlr" (UID: "1abeaef1-047c-4fff-a659-456e05294f94") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 10:18:41.763970 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:41.763683 2569 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Apr 17 10:18:41.965576 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:41.965433 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gftlr" Apr 17 10:18:41.965576 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:41.965476 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z6grc" Apr 17 10:18:41.965791 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:41.965585 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z6grc" podUID="56255f22-7072-487b-8723-978c296878fb" Apr 17 10:18:41.965791 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:41.965652 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-gftlr" podUID="1abeaef1-047c-4fff-a659-456e05294f94" Apr 17 10:18:43.543229 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.543062 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-mj2c9"] Apr 17 10:18:43.546435 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.546178 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-mj2c9" Apr 17 10:18:43.553245 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.552872 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-kube-rbac-proxy-config\"" Apr 17 10:18:43.553245 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.552922 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kube-root-ca.crt\"" Apr 17 10:18:43.553245 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.552876 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"node-exporter-accelerators-collector-config\"" Apr 17 10:18:43.553245 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.553187 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-tls\"" Apr 17 10:18:43.553540 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.553302 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-dockercfg-qk24s\"" Apr 17 10:18:43.554145 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.553993 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"metrics-client-ca\"" Apr 17 10:18:43.554475 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.554238 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"openshift-service-ca.crt\"" Apr 17 10:18:43.561903 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.561705 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/8f424dee-1a61-4c87-8a31-3c6ab909fcc4-node-exporter-wtmp\") pod \"node-exporter-mj2c9\" (UID: \"8f424dee-1a61-4c87-8a31-3c6ab909fcc4\") " pod="openshift-monitoring/node-exporter-mj2c9" Apr 17 10:18:43.561903 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.561732 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/8f424dee-1a61-4c87-8a31-3c6ab909fcc4-node-exporter-accelerators-collector-config\") pod \"node-exporter-mj2c9\" (UID: \"8f424dee-1a61-4c87-8a31-3c6ab909fcc4\") " pod="openshift-monitoring/node-exporter-mj2c9" Apr 17 10:18:43.561903 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.561783 2569 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8f424dee-1a61-4c87-8a31-3c6ab909fcc4-metrics-client-ca\") pod \"node-exporter-mj2c9\" (UID: \"8f424dee-1a61-4c87-8a31-3c6ab909fcc4\") " pod="openshift-monitoring/node-exporter-mj2c9" Apr 17 10:18:43.561903 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.561810 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/8f424dee-1a61-4c87-8a31-3c6ab909fcc4-node-exporter-tls\") pod \"node-exporter-mj2c9\" (UID: \"8f424dee-1a61-4c87-8a31-3c6ab909fcc4\") " pod="openshift-monitoring/node-exporter-mj2c9" Apr 17 10:18:43.561903 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.561836 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8f424dee-1a61-4c87-8a31-3c6ab909fcc4-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-mj2c9\" (UID: \"8f424dee-1a61-4c87-8a31-3c6ab909fcc4\") " pod="openshift-monitoring/node-exporter-mj2c9" Apr 17 10:18:43.561903 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.561886 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/8f424dee-1a61-4c87-8a31-3c6ab909fcc4-root\") pod \"node-exporter-mj2c9\" (UID: \"8f424dee-1a61-4c87-8a31-3c6ab909fcc4\") " pod="openshift-monitoring/node-exporter-mj2c9" Apr 17 10:18:43.562230 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.561911 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/8f424dee-1a61-4c87-8a31-3c6ab909fcc4-node-exporter-textfile\") pod \"node-exporter-mj2c9\" (UID: \"8f424dee-1a61-4c87-8a31-3c6ab909fcc4\") " pod="openshift-monitoring/node-exporter-mj2c9" Apr 17 10:18:43.562230 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.561938 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8f424dee-1a61-4c87-8a31-3c6ab909fcc4-sys\") pod \"node-exporter-mj2c9\" (UID: \"8f424dee-1a61-4c87-8a31-3c6ab909fcc4\") " pod="openshift-monitoring/node-exporter-mj2c9" Apr 17 10:18:43.562230 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.561958 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmqrf\" (UniqueName: \"kubernetes.io/projected/8f424dee-1a61-4c87-8a31-3c6ab909fcc4-kube-api-access-lmqrf\") pod \"node-exporter-mj2c9\" (UID: \"8f424dee-1a61-4c87-8a31-3c6ab909fcc4\") " pod="openshift-monitoring/node-exporter-mj2c9" Apr 17 10:18:43.663221 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.663194 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/8f424dee-1a61-4c87-8a31-3c6ab909fcc4-node-exporter-wtmp\") pod \"node-exporter-mj2c9\" (UID: \"8f424dee-1a61-4c87-8a31-3c6ab909fcc4\") " pod="openshift-monitoring/node-exporter-mj2c9" Apr 17 10:18:43.663343 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.663277 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: 
\"kubernetes.io/configmap/8f424dee-1a61-4c87-8a31-3c6ab909fcc4-node-exporter-accelerators-collector-config\") pod \"node-exporter-mj2c9\" (UID: \"8f424dee-1a61-4c87-8a31-3c6ab909fcc4\") " pod="openshift-monitoring/node-exporter-mj2c9" Apr 17 10:18:43.663438 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.663341 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8f424dee-1a61-4c87-8a31-3c6ab909fcc4-metrics-client-ca\") pod \"node-exporter-mj2c9\" (UID: \"8f424dee-1a61-4c87-8a31-3c6ab909fcc4\") " pod="openshift-monitoring/node-exporter-mj2c9" Apr 17 10:18:43.663438 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.663387 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/8f424dee-1a61-4c87-8a31-3c6ab909fcc4-node-exporter-tls\") pod \"node-exporter-mj2c9\" (UID: \"8f424dee-1a61-4c87-8a31-3c6ab909fcc4\") " pod="openshift-monitoring/node-exporter-mj2c9" Apr 17 10:18:43.663438 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.663412 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8f424dee-1a61-4c87-8a31-3c6ab909fcc4-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-mj2c9\" (UID: \"8f424dee-1a61-4c87-8a31-3c6ab909fcc4\") " pod="openshift-monitoring/node-exporter-mj2c9" Apr 17 10:18:43.663584 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.663445 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/8f424dee-1a61-4c87-8a31-3c6ab909fcc4-root\") pod \"node-exporter-mj2c9\" (UID: \"8f424dee-1a61-4c87-8a31-3c6ab909fcc4\") " pod="openshift-monitoring/node-exporter-mj2c9" Apr 17 10:18:43.663584 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.663502 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/8f424dee-1a61-4c87-8a31-3c6ab909fcc4-node-exporter-textfile\") pod \"node-exporter-mj2c9\" (UID: \"8f424dee-1a61-4c87-8a31-3c6ab909fcc4\") " pod="openshift-monitoring/node-exporter-mj2c9" Apr 17 10:18:43.663584 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.663531 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8f424dee-1a61-4c87-8a31-3c6ab909fcc4-sys\") pod \"node-exporter-mj2c9\" (UID: \"8f424dee-1a61-4c87-8a31-3c6ab909fcc4\") " pod="openshift-monitoring/node-exporter-mj2c9" Apr 17 10:18:43.663584 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.663560 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lmqrf\" (UniqueName: \"kubernetes.io/projected/8f424dee-1a61-4c87-8a31-3c6ab909fcc4-kube-api-access-lmqrf\") pod \"node-exporter-mj2c9\" (UID: \"8f424dee-1a61-4c87-8a31-3c6ab909fcc4\") " pod="openshift-monitoring/node-exporter-mj2c9" Apr 17 10:18:43.664843 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.663986 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/8f424dee-1a61-4c87-8a31-3c6ab909fcc4-node-exporter-wtmp\") pod \"node-exporter-mj2c9\" (UID: \"8f424dee-1a61-4c87-8a31-3c6ab909fcc4\") " pod="openshift-monitoring/node-exporter-mj2c9" Apr 17 10:18:43.664843 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.664646 
2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/8f424dee-1a61-4c87-8a31-3c6ab909fcc4-node-exporter-accelerators-collector-config\") pod \"node-exporter-mj2c9\" (UID: \"8f424dee-1a61-4c87-8a31-3c6ab909fcc4\") " pod="openshift-monitoring/node-exporter-mj2c9" Apr 17 10:18:43.664843 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.664724 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8f424dee-1a61-4c87-8a31-3c6ab909fcc4-sys\") pod \"node-exporter-mj2c9\" (UID: \"8f424dee-1a61-4c87-8a31-3c6ab909fcc4\") " pod="openshift-monitoring/node-exporter-mj2c9" Apr 17 10:18:43.664843 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.664777 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/8f424dee-1a61-4c87-8a31-3c6ab909fcc4-root\") pod \"node-exporter-mj2c9\" (UID: \"8f424dee-1a61-4c87-8a31-3c6ab909fcc4\") " pod="openshift-monitoring/node-exporter-mj2c9" Apr 17 10:18:43.664843 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.664802 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8f424dee-1a61-4c87-8a31-3c6ab909fcc4-metrics-client-ca\") pod \"node-exporter-mj2c9\" (UID: \"8f424dee-1a61-4c87-8a31-3c6ab909fcc4\") " pod="openshift-monitoring/node-exporter-mj2c9" Apr 17 10:18:43.665662 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.665618 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/8f424dee-1a61-4c87-8a31-3c6ab909fcc4-node-exporter-textfile\") pod \"node-exporter-mj2c9\" (UID: \"8f424dee-1a61-4c87-8a31-3c6ab909fcc4\") " pod="openshift-monitoring/node-exporter-mj2c9" Apr 17 10:18:43.668808 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.668760 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8f424dee-1a61-4c87-8a31-3c6ab909fcc4-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-mj2c9\" (UID: \"8f424dee-1a61-4c87-8a31-3c6ab909fcc4\") " pod="openshift-monitoring/node-exporter-mj2c9" Apr 17 10:18:43.668940 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.668835 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/8f424dee-1a61-4c87-8a31-3c6ab909fcc4-node-exporter-tls\") pod \"node-exporter-mj2c9\" (UID: \"8f424dee-1a61-4c87-8a31-3c6ab909fcc4\") " pod="openshift-monitoring/node-exporter-mj2c9" Apr 17 10:18:43.672495 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.672452 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmqrf\" (UniqueName: \"kubernetes.io/projected/8f424dee-1a61-4c87-8a31-3c6ab909fcc4-kube-api-access-lmqrf\") pod \"node-exporter-mj2c9\" (UID: \"8f424dee-1a61-4c87-8a31-3c6ab909fcc4\") " pod="openshift-monitoring/node-exporter-mj2c9" Apr 17 10:18:43.859312 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.859230 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-mj2c9" Apr 17 10:18:43.967153 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.966340 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-z6grc" Apr 17 10:18:43.967153 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:43.966690 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z6grc" podUID="56255f22-7072-487b-8723-978c296878fb" Apr 17 10:18:43.967153 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:43.966476 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gftlr" Apr 17 10:18:43.967153 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:43.967112 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-gftlr" podUID="1abeaef1-047c-4fff-a659-456e05294f94" Apr 17 10:18:45.681759 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:45.681660 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56255f22-7072-487b-8723-978c296878fb-metrics-certs\") pod \"network-metrics-daemon-z6grc\" (UID: \"56255f22-7072-487b-8723-978c296878fb\") " pod="openshift-multus/network-metrics-daemon-z6grc" Apr 17 10:18:45.682194 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:45.681861 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 10:18:45.682194 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:45.681924 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56255f22-7072-487b-8723-978c296878fb-metrics-certs podName:56255f22-7072-487b-8723-978c296878fb nodeName:}" failed. No retries permitted until 2026-04-17 10:18:53.681903538 +0000 UTC m=+50.340657581 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/56255f22-7072-487b-8723-978c296878fb-metrics-certs") pod "network-metrics-daemon-z6grc" (UID: "56255f22-7072-487b-8723-978c296878fb") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 10:18:45.782374 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:45.782326 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rttp8\" (UniqueName: \"kubernetes.io/projected/1abeaef1-047c-4fff-a659-456e05294f94-kube-api-access-rttp8\") pod \"network-check-target-gftlr\" (UID: \"1abeaef1-047c-4fff-a659-456e05294f94\") " pod="openshift-network-diagnostics/network-check-target-gftlr" Apr 17 10:18:45.782626 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:45.782562 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 17 10:18:45.782626 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:45.782587 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 17 10:18:45.782626 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:45.782600 2569 projected.go:194] Error preparing data for projected volume kube-api-access-rttp8 for pod openshift-network-diagnostics/network-check-target-gftlr: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 10:18:45.782862 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:45.782660 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1abeaef1-047c-4fff-a659-456e05294f94-kube-api-access-rttp8 podName:1abeaef1-047c-4fff-a659-456e05294f94 nodeName:}" failed. No retries permitted until 2026-04-17 10:18:53.782642658 +0000 UTC m=+50.441396698 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-rttp8" (UniqueName: "kubernetes.io/projected/1abeaef1-047c-4fff-a659-456e05294f94-kube-api-access-rttp8") pod "network-check-target-gftlr" (UID: "1abeaef1-047c-4fff-a659-456e05294f94") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 10:18:45.965148 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:45.964617 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z6grc" Apr 17 10:18:45.965148 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:45.964751 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z6grc" podUID="56255f22-7072-487b-8723-978c296878fb" Apr 17 10:18:45.965148 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:45.965108 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gftlr" Apr 17 10:18:45.965490 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:45.965228 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-gftlr" podUID="1abeaef1-047c-4fff-a659-456e05294f94" Apr 17 10:18:46.257711 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:46.257609 2569 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Apr 17 10:18:47.965447 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:47.965407 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gftlr" Apr 17 10:18:47.965447 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:47.965425 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z6grc" Apr 17 10:18:47.965957 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:47.965541 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-gftlr" podUID="1abeaef1-047c-4fff-a659-456e05294f94" Apr 17 10:18:47.965957 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:47.965668 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z6grc" podUID="56255f22-7072-487b-8723-978c296878fb" Apr 17 10:18:49.964756 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:49.964725 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gftlr" Apr 17 10:18:49.964756 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:49.964740 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z6grc" Apr 17 10:18:49.965204 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:49.964843 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-gftlr" podUID="1abeaef1-047c-4fff-a659-456e05294f94" Apr 17 10:18:49.965204 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:49.964986 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z6grc" podUID="56255f22-7072-487b-8723-978c296878fb" Apr 17 10:18:51.965451 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:51.965410 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gftlr" Apr 17 10:18:51.965902 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:51.965462 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z6grc" Apr 17 10:18:51.965902 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:51.965585 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-gftlr" podUID="1abeaef1-047c-4fff-a659-456e05294f94" Apr 17 10:18:51.965902 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:51.965727 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z6grc" podUID="56255f22-7072-487b-8723-978c296878fb" Apr 17 10:18:53.740020 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:53.739987 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56255f22-7072-487b-8723-978c296878fb-metrics-certs\") pod \"network-metrics-daemon-z6grc\" (UID: \"56255f22-7072-487b-8723-978c296878fb\") " pod="openshift-multus/network-metrics-daemon-z6grc" Apr 17 10:18:53.740480 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:53.740137 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 10:18:53.740480 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:53.740216 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56255f22-7072-487b-8723-978c296878fb-metrics-certs podName:56255f22-7072-487b-8723-978c296878fb nodeName:}" failed. No retries permitted until 2026-04-17 10:19:09.740195941 +0000 UTC m=+66.398949997 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/56255f22-7072-487b-8723-978c296878fb-metrics-certs") pod "network-metrics-daemon-z6grc" (UID: "56255f22-7072-487b-8723-978c296878fb") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 10:18:53.840663 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:53.840628 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rttp8\" (UniqueName: \"kubernetes.io/projected/1abeaef1-047c-4fff-a659-456e05294f94-kube-api-access-rttp8\") pod \"network-check-target-gftlr\" (UID: \"1abeaef1-047c-4fff-a659-456e05294f94\") " pod="openshift-network-diagnostics/network-check-target-gftlr" Apr 17 10:18:53.840852 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:53.840818 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 17 10:18:53.840852 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:53.840843 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 17 10:18:53.840951 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:53.840858 2569 projected.go:194] Error preparing data for projected volume kube-api-access-rttp8 for pod openshift-network-diagnostics/network-check-target-gftlr: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 10:18:53.840951 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:53.840918 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1abeaef1-047c-4fff-a659-456e05294f94-kube-api-access-rttp8 podName:1abeaef1-047c-4fff-a659-456e05294f94 nodeName:}" failed. No retries permitted until 2026-04-17 10:19:09.840904223 +0000 UTC m=+66.499658263 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-rttp8" (UniqueName: "kubernetes.io/projected/1abeaef1-047c-4fff-a659-456e05294f94-kube-api-access-rttp8") pod "network-check-target-gftlr" (UID: "1abeaef1-047c-4fff-a659-456e05294f94") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 10:18:53.965812 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:53.965774 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gftlr" Apr 17 10:18:53.965985 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:53.965878 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-gftlr" podUID="1abeaef1-047c-4fff-a659-456e05294f94" Apr 17 10:18:53.966053 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:53.965984 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-z6grc" Apr 17 10:18:53.966152 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:53.966113 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z6grc" podUID="56255f22-7072-487b-8723-978c296878fb" Apr 17 10:18:54.687792 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:18:54.687755 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f424dee_1a61_4c87_8a31_3c6ab909fcc4.slice/crio-be9db5db3e21e9a4e36de01ce671421d4f1f99034cc9dd0d0f0bea9945f190f2 WatchSource:0}: Error finding container be9db5db3e21e9a4e36de01ce671421d4f1f99034cc9dd0d0f0bea9945f190f2: Status 404 returned error can't find the container with id be9db5db3e21e9a4e36de01ce671421d4f1f99034cc9dd0d0f0bea9945f190f2 Apr 17 10:18:55.156255 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:55.156059 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-22g8b" event={"ID":"083d1f1c-be08-410d-a728-2affe73763a9","Type":"ContainerStarted","Data":"120ae2d389a39bad9c110cba1b45617931e5070a340d620cb42e1b2b6feb76e1"} Apr 17 10:18:55.157610 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:55.157494 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-ksqkq" event={"ID":"90ac1d6e-66e2-4de9-8433-b5d2a8895e80","Type":"ContainerStarted","Data":"7ac1b159a3ec4bbde5944a7073f1e50e87ba9fd3a8828073edba8cd063739440"} Apr 17 10:18:55.158822 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:55.158798 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" event={"ID":"83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e","Type":"ContainerStarted","Data":"d267fdd68504eb77e25958ac14f5a8c7812f87cf1aa33762213ba16c8c713ba9"} Apr 17 10:18:55.160008 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:55.159989 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2t59m" event={"ID":"4cac3107-7535-4daf-bf6b-d5bf95844303","Type":"ContainerStarted","Data":"fed0dfed688388acf069ff0fb98911f8ebd37960e99ca61b196d4a288e9f8a59"} Apr 17 10:18:55.161311 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:55.161295 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qshzm_2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5/ovn-acl-logging/0.log" Apr 17 10:18:55.161628 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:55.161610 2569 generic.go:358] "Generic (PLEG): container finished" podID="2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5" containerID="2f5f50bcf2b6cd7ab2fe21782a1b339e51792acc988ca974cb1eb15da15e7a8c" exitCode=1 Apr 17 10:18:55.161690 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:55.161672 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" event={"ID":"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5","Type":"ContainerDied","Data":"2f5f50bcf2b6cd7ab2fe21782a1b339e51792acc988ca974cb1eb15da15e7a8c"} Apr 17 10:18:55.161739 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:55.161695 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" 
event={"ID":"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5","Type":"ContainerStarted","Data":"2db82fde1e8814e59ceb5d1297e03ebe731a2b06e06826329be06454ea6f8c28"} Apr 17 10:18:55.162835 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:55.162817 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8tvl6" event={"ID":"17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f","Type":"ContainerStarted","Data":"d8a5ba46c52d793d6a21f97f7670240c60fa3b80b912f941103dae0f88e21d99"} Apr 17 10:18:55.163905 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:55.163888 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-zm8zp" event={"ID":"b43f5610-f9dd-49c2-9de2-5c1cca09f0d6","Type":"ContainerStarted","Data":"c38cceb55618705a969d46234ba3ba260df47786d043aa808e5d380b26c2b72a"} Apr 17 10:18:55.165001 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:55.164982 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kwrj6" event={"ID":"db889039-4b7b-4564-b656-afd928d6bcbd","Type":"ContainerStarted","Data":"59b09e4ac68d0f0c5d776dc34a909e626e1fa30f60a39cb680328a8df2ae3a09"} Apr 17 10:18:55.165791 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:55.165773 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-mj2c9" event={"ID":"8f424dee-1a61-4c87-8a31-3c6ab909fcc4","Type":"ContainerStarted","Data":"be9db5db3e21e9a4e36de01ce671421d4f1f99034cc9dd0d0f0bea9945f190f2"} Apr 17 10:18:55.173871 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:55.173827 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-22g8b" podStartSLOduration=22.387340307 podStartE2EDuration="31.173816105s" podCreationTimestamp="2026-04-17 10:18:24 +0000 UTC" firstStartedPulling="2026-04-17 10:18:38.196396054 +0000 UTC m=+34.855150096" lastFinishedPulling="2026-04-17 10:18:46.982871853 +0000 UTC m=+43.641625894" observedRunningTime="2026-04-17 10:18:55.173078642 +0000 UTC m=+51.831832704" watchObservedRunningTime="2026-04-17 10:18:55.173816105 +0000 UTC m=+51.832570166" Apr 17 10:18:55.198930 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:55.198852 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-hlkpd" podStartSLOduration=14.698458361 podStartE2EDuration="31.198837261s" podCreationTimestamp="2026-04-17 10:18:24 +0000 UTC" firstStartedPulling="2026-04-17 10:18:38.198726701 +0000 UTC m=+34.857480741" lastFinishedPulling="2026-04-17 10:18:54.699105587 +0000 UTC m=+51.357859641" observedRunningTime="2026-04-17 10:18:55.19808085 +0000 UTC m=+51.856834911" watchObservedRunningTime="2026-04-17 10:18:55.198837261 +0000 UTC m=+51.857591323" Apr 17 10:18:55.216261 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:55.216085 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-kwrj6" podStartSLOduration=15.592275006 podStartE2EDuration="32.21606701s" podCreationTimestamp="2026-04-17 10:18:23 +0000 UTC" firstStartedPulling="2026-04-17 10:18:38.154620573 +0000 UTC m=+34.813374613" lastFinishedPulling="2026-04-17 10:18:54.778412562 +0000 UTC m=+51.437166617" observedRunningTime="2026-04-17 10:18:55.215513664 +0000 UTC m=+51.874267727" watchObservedRunningTime="2026-04-17 10:18:55.21606701 +0000 UTC m=+51.874821074" Apr 17 10:18:55.247470 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:55.246038 2569 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/konnectivity-agent-zm8zp" podStartSLOduration=15.714922611 podStartE2EDuration="32.246019757s" podCreationTimestamp="2026-04-17 10:18:23 +0000 UTC" firstStartedPulling="2026-04-17 10:18:38.158869445 +0000 UTC m=+34.817623486" lastFinishedPulling="2026-04-17 10:18:54.689966578 +0000 UTC m=+51.348720632" observedRunningTime="2026-04-17 10:18:55.231856596 +0000 UTC m=+51.890610658" watchObservedRunningTime="2026-04-17 10:18:55.246019757 +0000 UTC m=+51.904773808" Apr 17 10:18:55.273611 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:55.273569 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-ksqkq" podStartSLOduration=14.773308123 podStartE2EDuration="31.273548975s" podCreationTimestamp="2026-04-17 10:18:24 +0000 UTC" firstStartedPulling="2026-04-17 10:18:38.190030306 +0000 UTC m=+34.848784347" lastFinishedPulling="2026-04-17 10:18:54.690271153 +0000 UTC m=+51.349025199" observedRunningTime="2026-04-17 10:18:55.247305131 +0000 UTC m=+51.906059194" watchObservedRunningTime="2026-04-17 10:18:55.273548975 +0000 UTC m=+51.932303037" Apr 17 10:18:55.965594 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:55.965565 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gftlr" Apr 17 10:18:55.965756 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:55.965683 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-gftlr" podUID="1abeaef1-047c-4fff-a659-456e05294f94" Apr 17 10:18:55.965756 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:55.965734 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z6grc" Apr 17 10:18:55.965836 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:55.965820 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z6grc" podUID="56255f22-7072-487b-8723-978c296878fb" Apr 17 10:18:56.168426 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:56.168400 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-vtt5p" event={"ID":"aaa97102-f10d-49b4-83af-c47d0b2cd496","Type":"ContainerStarted","Data":"10932b5e0ccfcaf9580fadd8b4bf58726a3b422bbc5cf85bbad7c4987b9fbfc4"} Apr 17 10:18:56.169744 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:56.169724 2569 generic.go:358] "Generic (PLEG): container finished" podID="4cac3107-7535-4daf-bf6b-d5bf95844303" containerID="fed0dfed688388acf069ff0fb98911f8ebd37960e99ca61b196d4a288e9f8a59" exitCode=0 Apr 17 10:18:56.169823 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:56.169788 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2t59m" event={"ID":"4cac3107-7535-4daf-bf6b-d5bf95844303","Type":"ContainerDied","Data":"fed0dfed688388acf069ff0fb98911f8ebd37960e99ca61b196d4a288e9f8a59"} Apr 17 10:18:56.172085 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:56.172065 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qshzm_2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5/ovn-acl-logging/0.log" Apr 17 10:18:56.172550 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:56.172526 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" event={"ID":"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5","Type":"ContainerStarted","Data":"87a493af7ee4fe4d885936e4d84c80ebe5ff468567e0e55834a926bf0f7d03ae"} Apr 17 10:18:56.172550 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:56.172553 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" event={"ID":"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5","Type":"ContainerStarted","Data":"f5e2269b654b8b5950df114329b58fda905284f48c5f3150b33d82ce91680df2"} Apr 17 10:18:56.172694 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:56.172563 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" event={"ID":"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5","Type":"ContainerStarted","Data":"9cc41c620002de60dc60975c5da45a4aec71a1010050bb877379f5ce81de6046"} Apr 17 10:18:56.206723 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:56.206678 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/iptables-alerter-vtt5p" podStartSLOduration=16.706496006 podStartE2EDuration="33.206660332s" podCreationTimestamp="2026-04-17 10:18:23 +0000 UTC" firstStartedPulling="2026-04-17 10:18:38.183964933 +0000 UTC m=+34.842718973" lastFinishedPulling="2026-04-17 10:18:54.684129245 +0000 UTC m=+51.342883299" observedRunningTime="2026-04-17 10:18:56.206191646 +0000 UTC m=+52.864945709" watchObservedRunningTime="2026-04-17 10:18:56.206660332 +0000 UTC m=+52.865414396" Apr 17 10:18:56.339498 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:56.339479 2569 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock" Apr 17 10:18:56.921372 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:56.921226 2569 reconciler.go:161] "OperationExecutor.RegisterPlugin started" 
plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock","Timestamp":"2026-04-17T10:18:56.339494138Z","UUID":"be29ba33-0821-4eeb-bae8-9d1c0f77ad65","Handler":null,"Name":"","Endpoint":""} Apr 17 10:18:56.924925 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:56.924902 2569 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: ebs.csi.aws.com endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock versions: 1.0.0 Apr 17 10:18:56.924925 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:56.924929 2569 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: ebs.csi.aws.com at endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock Apr 17 10:18:57.177667 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:57.177595 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qshzm_2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5/ovn-acl-logging/0.log" Apr 17 10:18:57.178350 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:57.178075 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" event={"ID":"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5","Type":"ContainerStarted","Data":"9254c96bc059f15a4fb5e3fade5f0597e2002b65662f7f3aa536092ba3fab739"} Apr 17 10:18:57.179825 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:57.179794 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8tvl6" event={"ID":"17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f","Type":"ContainerStarted","Data":"d97ea244798c76dc2ec2ef5319cded141a1febec7f658d1fb430f1514c6591c5"} Apr 17 10:18:57.181286 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:57.181254 2569 generic.go:358] "Generic (PLEG): container finished" podID="8f424dee-1a61-4c87-8a31-3c6ab909fcc4" containerID="72987e3601036d77abdbb5d2c91d9d794f93aacc2e0ccc9562b8826ede814906" exitCode=0 Apr 17 10:18:57.181404 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:57.181343 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-mj2c9" event={"ID":"8f424dee-1a61-4c87-8a31-3c6ab909fcc4","Type":"ContainerDied","Data":"72987e3601036d77abdbb5d2c91d9d794f93aacc2e0ccc9562b8826ede814906"} Apr 17 10:18:57.965474 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:57.965445 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z6grc" Apr 17 10:18:57.965690 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:57.965445 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gftlr" Apr 17 10:18:57.965690 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:57.965564 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z6grc" podUID="56255f22-7072-487b-8723-978c296878fb" Apr 17 10:18:57.965690 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:57.965634 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-gftlr" podUID="1abeaef1-047c-4fff-a659-456e05294f94" Apr 17 10:18:58.152768 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:58.152732 2569 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/konnectivity-agent-zm8zp" Apr 17 10:18:58.153253 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:58.153232 2569 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/konnectivity-agent-zm8zp" Apr 17 10:18:58.185187 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:58.185152 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8tvl6" event={"ID":"17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f","Type":"ContainerStarted","Data":"9a543fe9646b0dde799141bc4b82fe829642354bc4cabacf8523099cd29d31b6"} Apr 17 10:18:58.187194 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:58.187165 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-mj2c9" event={"ID":"8f424dee-1a61-4c87-8a31-3c6ab909fcc4","Type":"ContainerStarted","Data":"8d3cb54b4e67afa2dcbb4df68d03de0c4053784966a392603aae5187661c967a"} Apr 17 10:18:58.187314 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:58.187200 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-mj2c9" event={"ID":"8f424dee-1a61-4c87-8a31-3c6ab909fcc4","Type":"ContainerStarted","Data":"eecae540b7c09d3ef18788243c263ccb0e74529a4545011c97bfeb687f10bb1d"} Apr 17 10:18:58.187495 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:58.187479 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kube-system/konnectivity-agent-zm8zp" Apr 17 10:18:58.187837 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:58.187819 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/konnectivity-agent-zm8zp" Apr 17 10:18:58.213660 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:58.213624 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8tvl6" podStartSLOduration=16.075415462 podStartE2EDuration="35.213612875s" podCreationTimestamp="2026-04-17 10:18:23 +0000 UTC" firstStartedPulling="2026-04-17 10:18:38.168487058 +0000 UTC m=+34.827241097" lastFinishedPulling="2026-04-17 10:18:57.306684459 +0000 UTC m=+53.965438510" observedRunningTime="2026-04-17 10:18:58.199576264 +0000 UTC m=+54.858330327" watchObservedRunningTime="2026-04-17 10:18:58.213612875 +0000 UTC m=+54.872366933" Apr 17 10:18:58.229231 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:58.229149 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-mj2c9" podStartSLOduration=13.777101378 podStartE2EDuration="15.229135972s" podCreationTimestamp="2026-04-17 10:18:43 +0000 UTC" firstStartedPulling="2026-04-17 10:18:54.690496601 +0000 UTC m=+51.349250649" lastFinishedPulling="2026-04-17 10:18:56.142531187 +0000 UTC m=+52.801285243" observedRunningTime="2026-04-17 10:18:58.228780376 +0000 UTC m=+54.887534438" watchObservedRunningTime="2026-04-17 10:18:58.229135972 +0000 UTC m=+54.887890035" Apr 17 10:18:59.193045 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:59.192820 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qshzm_2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5/ovn-acl-logging/0.log" Apr 17 10:18:59.193512 ip-10-0-136-48 kubenswrapper[2569]: I0417 
10:18:59.193484 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" event={"ID":"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5","Type":"ContainerStarted","Data":"1d3152bf425f69612cd0afb003cfbbd221c30013b3d0b5478283eca6fdb0f4ef"} Apr 17 10:18:59.965033 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:59.965000 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z6grc" Apr 17 10:18:59.965228 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:59.965129 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z6grc" podUID="56255f22-7072-487b-8723-978c296878fb" Apr 17 10:18:59.965228 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:18:59.965185 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gftlr" Apr 17 10:18:59.965380 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:18:59.965296 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-gftlr" podUID="1abeaef1-047c-4fff-a659-456e05294f94" Apr 17 10:19:01.198749 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:01.198715 2569 generic.go:358] "Generic (PLEG): container finished" podID="4cac3107-7535-4daf-bf6b-d5bf95844303" containerID="9fe702fcb915d34d6ff158ef60d8aa6fae0c573653101279f521b21adb0553ea" exitCode=0 Apr 17 10:19:01.199232 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:01.198805 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2t59m" event={"ID":"4cac3107-7535-4daf-bf6b-d5bf95844303","Type":"ContainerDied","Data":"9fe702fcb915d34d6ff158ef60d8aa6fae0c573653101279f521b21adb0553ea"} Apr 17 10:19:01.202037 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:01.202015 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qshzm_2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5/ovn-acl-logging/0.log" Apr 17 10:19:01.202382 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:01.202344 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" event={"ID":"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5","Type":"ContainerStarted","Data":"e4a702d713b9e7c3adb29bb325341d32c92891f11e73a769f3207116569d4fc8"} Apr 17 10:19:01.202662 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:01.202637 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:19:01.202763 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:01.202670 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:19:01.202900 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:01.202884 2569 scope.go:117] "RemoveContainer" containerID="2f5f50bcf2b6cd7ab2fe21782a1b339e51792acc988ca974cb1eb15da15e7a8c" Apr 17 10:19:01.218300 ip-10-0-136-48 kubenswrapper[2569]: I0417 
10:19:01.218279 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:19:01.964879 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:01.964706 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gftlr" Apr 17 10:19:01.965030 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:01.964756 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z6grc" Apr 17 10:19:01.965030 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:19:01.964946 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-gftlr" podUID="1abeaef1-047c-4fff-a659-456e05294f94" Apr 17 10:19:01.965030 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:19:01.965018 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z6grc" podUID="56255f22-7072-487b-8723-978c296878fb" Apr 17 10:19:02.208978 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:02.208952 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qshzm_2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5/ovn-acl-logging/0.log" Apr 17 10:19:02.209321 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:02.209230 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" event={"ID":"2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5","Type":"ContainerStarted","Data":"6ac670aa6fd9d4a3c3991bf34848145c795daeedf126bbea85ee526bd3f7703a"} Apr 17 10:19:02.209585 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:02.209567 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:19:02.223557 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:02.223489 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:19:02.238971 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:02.238932 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" podStartSLOduration=22.551056694 podStartE2EDuration="39.238918511s" podCreationTimestamp="2026-04-17 10:18:23 +0000 UTC" firstStartedPulling="2026-04-17 10:18:38.173658952 +0000 UTC m=+34.832412995" lastFinishedPulling="2026-04-17 10:18:54.861520759 +0000 UTC m=+51.520274812" observedRunningTime="2026-04-17 10:19:02.237733339 +0000 UTC m=+58.896487402" watchObservedRunningTime="2026-04-17 10:19:02.238918511 +0000 UTC m=+58.897672572" Apr 17 10:19:03.213274 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:03.213248 2569 generic.go:358] "Generic (PLEG): container finished" podID="4cac3107-7535-4daf-bf6b-d5bf95844303" containerID="73872e6b06e82ee92e0d43048bf32b552269ff4250a1c31d1182fe9a2e30f6a9" exitCode=0 Apr 17 10:19:03.213733 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:03.213324 2569 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2t59m" event={"ID":"4cac3107-7535-4daf-bf6b-d5bf95844303","Type":"ContainerDied","Data":"73872e6b06e82ee92e0d43048bf32b552269ff4250a1c31d1182fe9a2e30f6a9"} Apr 17 10:19:03.351430 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:03.351332 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-gftlr"] Apr 17 10:19:03.351614 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:03.351520 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gftlr" Apr 17 10:19:03.351689 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:19:03.351633 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-gftlr" podUID="1abeaef1-047c-4fff-a659-456e05294f94" Apr 17 10:19:03.354691 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:03.354608 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-z6grc"] Apr 17 10:19:03.354815 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:03.354729 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z6grc" Apr 17 10:19:03.354871 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:19:03.354846 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z6grc" podUID="56255f22-7072-487b-8723-978c296878fb" Apr 17 10:19:04.965015 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:04.964800 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z6grc" Apr 17 10:19:04.965606 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:04.964838 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gftlr" Apr 17 10:19:04.965606 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:19:04.965126 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z6grc" podUID="56255f22-7072-487b-8723-978c296878fb" Apr 17 10:19:04.965606 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:19:04.965167 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-gftlr" podUID="1abeaef1-047c-4fff-a659-456e05294f94" Apr 17 10:19:05.218661 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:05.218580 2569 generic.go:358] "Generic (PLEG): container finished" podID="4cac3107-7535-4daf-bf6b-d5bf95844303" containerID="b88c1fba1b90e4a270b9320d271a3549396899578666d6e9a399dc8bc0aeaa5a" exitCode=0 Apr 17 10:19:05.218661 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:05.218642 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2t59m" event={"ID":"4cac3107-7535-4daf-bf6b-d5bf95844303","Type":"ContainerDied","Data":"b88c1fba1b90e4a270b9320d271a3549396899578666d6e9a399dc8bc0aeaa5a"} Apr 17 10:19:06.965080 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:06.965026 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gftlr" Apr 17 10:19:06.965522 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:06.965044 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z6grc" Apr 17 10:19:06.965522 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:19:06.965167 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-gftlr" podUID="1abeaef1-047c-4fff-a659-456e05294f94" Apr 17 10:19:06.965522 ip-10-0-136-48 kubenswrapper[2569]: E0417 10:19:06.965228 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z6grc" podUID="56255f22-7072-487b-8723-978c296878fb" Apr 17 10:19:07.684504 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.684471 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-136-48.ec2.internal" event="NodeReady" Apr 17 10:19:07.684678 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.684619 2569 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Apr 17 10:19:07.740732 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.740694 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-cnksf"] Apr 17 10:19:07.771909 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.771833 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-runtime-extractor-mj7qr"] Apr 17 10:19:07.772062 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.772015 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-cnksf" Apr 17 10:19:07.774658 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.774631 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Apr 17 10:19:07.774815 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.774675 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Apr 17 10:19:07.774815 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.774632 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-xs5df\"" Apr 17 10:19:07.775082 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.775065 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Apr 17 10:19:07.799721 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.799690 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-cnksf"] Apr 17 10:19:07.799843 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.799732 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-mj7qr"] Apr 17 10:19:07.799843 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.799780 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-runtime-extractor-mj7qr" Apr 17 10:19:07.802503 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.802485 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"openshift-service-ca.crt\"" Apr 17 10:19:07.802626 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.802511 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-sa-dockercfg-7l8hg\"" Apr 17 10:19:07.802626 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.802546 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-tls\"" Apr 17 10:19:07.802626 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.802603 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-rbac-proxy\"" Apr 17 10:19:07.802626 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.802604 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-root-ca.crt\"" Apr 17 10:19:07.839500 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.839472 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-mm9sd"] Apr 17 10:19:07.854560 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.854533 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/2e326b9d-2472-46f0-9332-42095d7aac7f-crio-socket\") pod \"insights-runtime-extractor-mj7qr\" (UID: \"2e326b9d-2472-46f0-9332-42095d7aac7f\") " pod="openshift-insights/insights-runtime-extractor-mj7qr" Apr 17 10:19:07.854698 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.854609 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh8tv\" (UniqueName: \"kubernetes.io/projected/42a98ea5-8626-48d5-bb6b-80eb251d2e33-kube-api-access-lh8tv\") pod 
\"ingress-canary-cnksf\" (UID: \"42a98ea5-8626-48d5-bb6b-80eb251d2e33\") " pod="openshift-ingress-canary/ingress-canary-cnksf" Apr 17 10:19:07.854781 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.854719 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/2e326b9d-2472-46f0-9332-42095d7aac7f-data-volume\") pod \"insights-runtime-extractor-mj7qr\" (UID: \"2e326b9d-2472-46f0-9332-42095d7aac7f\") " pod="openshift-insights/insights-runtime-extractor-mj7qr" Apr 17 10:19:07.854781 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.854751 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/2e326b9d-2472-46f0-9332-42095d7aac7f-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-mj7qr\" (UID: \"2e326b9d-2472-46f0-9332-42095d7aac7f\") " pod="openshift-insights/insights-runtime-extractor-mj7qr" Apr 17 10:19:07.854901 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.854861 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/2e326b9d-2472-46f0-9332-42095d7aac7f-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-mj7qr\" (UID: \"2e326b9d-2472-46f0-9332-42095d7aac7f\") " pod="openshift-insights/insights-runtime-extractor-mj7qr" Apr 17 10:19:07.854956 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.854908 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbwzc\" (UniqueName: \"kubernetes.io/projected/2e326b9d-2472-46f0-9332-42095d7aac7f-kube-api-access-fbwzc\") pod \"insights-runtime-extractor-mj7qr\" (UID: \"2e326b9d-2472-46f0-9332-42095d7aac7f\") " pod="openshift-insights/insights-runtime-extractor-mj7qr" Apr 17 10:19:07.854956 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.854940 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/42a98ea5-8626-48d5-bb6b-80eb251d2e33-cert\") pod \"ingress-canary-cnksf\" (UID: \"42a98ea5-8626-48d5-bb6b-80eb251d2e33\") " pod="openshift-ingress-canary/ingress-canary-cnksf" Apr 17 10:19:07.858237 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.858216 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mm9sd"] Apr 17 10:19:07.858391 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.858341 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-mm9sd" Apr 17 10:19:07.860917 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.860875 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Apr 17 10:19:07.860917 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.860902 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Apr 17 10:19:07.861086 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.860909 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kq7pt\"" Apr 17 10:19:07.956220 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.956190 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fbwzc\" (UniqueName: \"kubernetes.io/projected/2e326b9d-2472-46f0-9332-42095d7aac7f-kube-api-access-fbwzc\") pod \"insights-runtime-extractor-mj7qr\" (UID: \"2e326b9d-2472-46f0-9332-42095d7aac7f\") " pod="openshift-insights/insights-runtime-extractor-mj7qr" Apr 17 10:19:07.956399 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.956227 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/42a98ea5-8626-48d5-bb6b-80eb251d2e33-cert\") pod \"ingress-canary-cnksf\" (UID: \"42a98ea5-8626-48d5-bb6b-80eb251d2e33\") " pod="openshift-ingress-canary/ingress-canary-cnksf" Apr 17 10:19:07.956399 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.956261 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngdgn\" (UniqueName: \"kubernetes.io/projected/4d2301e1-52d7-4d39-9acf-d767734726a3-kube-api-access-ngdgn\") pod \"dns-default-mm9sd\" (UID: \"4d2301e1-52d7-4d39-9acf-d767734726a3\") " pod="openshift-dns/dns-default-mm9sd" Apr 17 10:19:07.956399 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.956312 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/2e326b9d-2472-46f0-9332-42095d7aac7f-crio-socket\") pod \"insights-runtime-extractor-mj7qr\" (UID: \"2e326b9d-2472-46f0-9332-42095d7aac7f\") " pod="openshift-insights/insights-runtime-extractor-mj7qr" Apr 17 10:19:07.956399 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.956336 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lh8tv\" (UniqueName: \"kubernetes.io/projected/42a98ea5-8626-48d5-bb6b-80eb251d2e33-kube-api-access-lh8tv\") pod \"ingress-canary-cnksf\" (UID: \"42a98ea5-8626-48d5-bb6b-80eb251d2e33\") " pod="openshift-ingress-canary/ingress-canary-cnksf" Apr 17 10:19:07.956399 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.956386 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/2e326b9d-2472-46f0-9332-42095d7aac7f-data-volume\") pod \"insights-runtime-extractor-mj7qr\" (UID: \"2e326b9d-2472-46f0-9332-42095d7aac7f\") " pod="openshift-insights/insights-runtime-extractor-mj7qr" Apr 17 10:19:07.956619 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.956415 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/2e326b9d-2472-46f0-9332-42095d7aac7f-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-mj7qr\" (UID: 
\"2e326b9d-2472-46f0-9332-42095d7aac7f\") " pod="openshift-insights/insights-runtime-extractor-mj7qr" Apr 17 10:19:07.956619 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.956441 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d2301e1-52d7-4d39-9acf-d767734726a3-config-volume\") pod \"dns-default-mm9sd\" (UID: \"4d2301e1-52d7-4d39-9acf-d767734726a3\") " pod="openshift-dns/dns-default-mm9sd" Apr 17 10:19:07.956619 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.956462 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4d2301e1-52d7-4d39-9acf-d767734726a3-metrics-tls\") pod \"dns-default-mm9sd\" (UID: \"4d2301e1-52d7-4d39-9acf-d767734726a3\") " pod="openshift-dns/dns-default-mm9sd" Apr 17 10:19:07.956619 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.956491 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4d2301e1-52d7-4d39-9acf-d767734726a3-tmp-dir\") pod \"dns-default-mm9sd\" (UID: \"4d2301e1-52d7-4d39-9acf-d767734726a3\") " pod="openshift-dns/dns-default-mm9sd" Apr 17 10:19:07.956619 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.956541 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/2e326b9d-2472-46f0-9332-42095d7aac7f-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-mj7qr\" (UID: \"2e326b9d-2472-46f0-9332-42095d7aac7f\") " pod="openshift-insights/insights-runtime-extractor-mj7qr" Apr 17 10:19:07.957378 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.956891 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/2e326b9d-2472-46f0-9332-42095d7aac7f-data-volume\") pod \"insights-runtime-extractor-mj7qr\" (UID: \"2e326b9d-2472-46f0-9332-42095d7aac7f\") " pod="openshift-insights/insights-runtime-extractor-mj7qr" Apr 17 10:19:07.957378 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.957032 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/2e326b9d-2472-46f0-9332-42095d7aac7f-crio-socket\") pod \"insights-runtime-extractor-mj7qr\" (UID: \"2e326b9d-2472-46f0-9332-42095d7aac7f\") " pod="openshift-insights/insights-runtime-extractor-mj7qr" Apr 17 10:19:07.957378 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.957160 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/2e326b9d-2472-46f0-9332-42095d7aac7f-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-mj7qr\" (UID: \"2e326b9d-2472-46f0-9332-42095d7aac7f\") " pod="openshift-insights/insights-runtime-extractor-mj7qr" Apr 17 10:19:07.961268 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.961243 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/2e326b9d-2472-46f0-9332-42095d7aac7f-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-mj7qr\" (UID: \"2e326b9d-2472-46f0-9332-42095d7aac7f\") " pod="openshift-insights/insights-runtime-extractor-mj7qr" Apr 17 10:19:07.961446 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.961431 2569 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/42a98ea5-8626-48d5-bb6b-80eb251d2e33-cert\") pod \"ingress-canary-cnksf\" (UID: \"42a98ea5-8626-48d5-bb6b-80eb251d2e33\") " pod="openshift-ingress-canary/ingress-canary-cnksf" Apr 17 10:19:07.965021 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.964996 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbwzc\" (UniqueName: \"kubernetes.io/projected/2e326b9d-2472-46f0-9332-42095d7aac7f-kube-api-access-fbwzc\") pod \"insights-runtime-extractor-mj7qr\" (UID: \"2e326b9d-2472-46f0-9332-42095d7aac7f\") " pod="openshift-insights/insights-runtime-extractor-mj7qr" Apr 17 10:19:07.965743 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:07.965723 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh8tv\" (UniqueName: \"kubernetes.io/projected/42a98ea5-8626-48d5-bb6b-80eb251d2e33-kube-api-access-lh8tv\") pod \"ingress-canary-cnksf\" (UID: \"42a98ea5-8626-48d5-bb6b-80eb251d2e33\") " pod="openshift-ingress-canary/ingress-canary-cnksf" Apr 17 10:19:08.057620 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:08.057589 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ngdgn\" (UniqueName: \"kubernetes.io/projected/4d2301e1-52d7-4d39-9acf-d767734726a3-kube-api-access-ngdgn\") pod \"dns-default-mm9sd\" (UID: \"4d2301e1-52d7-4d39-9acf-d767734726a3\") " pod="openshift-dns/dns-default-mm9sd" Apr 17 10:19:08.057812 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:08.057666 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d2301e1-52d7-4d39-9acf-d767734726a3-config-volume\") pod \"dns-default-mm9sd\" (UID: \"4d2301e1-52d7-4d39-9acf-d767734726a3\") " pod="openshift-dns/dns-default-mm9sd" Apr 17 10:19:08.057812 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:08.057690 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4d2301e1-52d7-4d39-9acf-d767734726a3-metrics-tls\") pod \"dns-default-mm9sd\" (UID: \"4d2301e1-52d7-4d39-9acf-d767734726a3\") " pod="openshift-dns/dns-default-mm9sd" Apr 17 10:19:08.057812 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:08.057717 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4d2301e1-52d7-4d39-9acf-d767734726a3-tmp-dir\") pod \"dns-default-mm9sd\" (UID: \"4d2301e1-52d7-4d39-9acf-d767734726a3\") " pod="openshift-dns/dns-default-mm9sd" Apr 17 10:19:08.058108 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:08.058084 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4d2301e1-52d7-4d39-9acf-d767734726a3-tmp-dir\") pod \"dns-default-mm9sd\" (UID: \"4d2301e1-52d7-4d39-9acf-d767734726a3\") " pod="openshift-dns/dns-default-mm9sd" Apr 17 10:19:08.058438 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:08.058412 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d2301e1-52d7-4d39-9acf-d767734726a3-config-volume\") pod \"dns-default-mm9sd\" (UID: \"4d2301e1-52d7-4d39-9acf-d767734726a3\") " pod="openshift-dns/dns-default-mm9sd" Apr 17 10:19:08.060408 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:08.060386 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/4d2301e1-52d7-4d39-9acf-d767734726a3-metrics-tls\") pod \"dns-default-mm9sd\" (UID: \"4d2301e1-52d7-4d39-9acf-d767734726a3\") " pod="openshift-dns/dns-default-mm9sd" Apr 17 10:19:08.065514 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:08.065489 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngdgn\" (UniqueName: \"kubernetes.io/projected/4d2301e1-52d7-4d39-9acf-d767734726a3-kube-api-access-ngdgn\") pod \"dns-default-mm9sd\" (UID: \"4d2301e1-52d7-4d39-9acf-d767734726a3\") " pod="openshift-dns/dns-default-mm9sd" Apr 17 10:19:08.082407 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:08.082378 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-cnksf" Apr 17 10:19:08.109425 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:08.109389 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-runtime-extractor-mj7qr" Apr 17 10:19:08.168840 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:08.168803 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-mm9sd" Apr 17 10:19:08.277796 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:08.277768 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-cnksf"] Apr 17 10:19:08.280697 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:08.280604 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-mj7qr"] Apr 17 10:19:08.283755 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:19:08.283725 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42a98ea5_8626_48d5_bb6b_80eb251d2e33.slice/crio-cce9d334a071a449e8fbc4aebb1be2be67df358361220a3561f6fa3e6982cc32 WatchSource:0}: Error finding container cce9d334a071a449e8fbc4aebb1be2be67df358361220a3561f6fa3e6982cc32: Status 404 returned error can't find the container with id cce9d334a071a449e8fbc4aebb1be2be67df358361220a3561f6fa3e6982cc32 Apr 17 10:19:08.285312 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:19:08.285275 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e326b9d_2472_46f0_9332_42095d7aac7f.slice/crio-3bdfb67e04b5e5d4ab32fe5b9d719c33d335e851ca956df760ba44238afa6811 WatchSource:0}: Error finding container 3bdfb67e04b5e5d4ab32fe5b9d719c33d335e851ca956df760ba44238afa6811: Status 404 returned error can't find the container with id 3bdfb67e04b5e5d4ab32fe5b9d719c33d335e851ca956df760ba44238afa6811 Apr 17 10:19:08.311817 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:08.311764 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mm9sd"] Apr 17 10:19:08.321183 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:19:08.321148 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d2301e1_52d7_4d39_9acf_d767734726a3.slice/crio-3b55efe4651a07128dfff1e8c0426aea87332279e11c15cfb92f7a4f0437ad8d WatchSource:0}: Error finding container 3b55efe4651a07128dfff1e8c0426aea87332279e11c15cfb92f7a4f0437ad8d: Status 404 returned error can't find the container with id 3b55efe4651a07128dfff1e8c0426aea87332279e11c15cfb92f7a4f0437ad8d Apr 17 10:19:08.965116 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:08.965076 2569 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gftlr" Apr 17 10:19:08.965576 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:08.965560 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z6grc" Apr 17 10:19:08.969658 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:08.969619 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-k4g8r\"" Apr 17 10:19:08.970030 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:08.969858 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Apr 17 10:19:08.970030 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:08.969892 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-lxdvh\"" Apr 17 10:19:08.970107 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:08.970061 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Apr 17 10:19:08.970160 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:08.970133 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Apr 17 10:19:09.232095 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:09.232001 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mm9sd" event={"ID":"4d2301e1-52d7-4d39-9acf-d767734726a3","Type":"ContainerStarted","Data":"3b55efe4651a07128dfff1e8c0426aea87332279e11c15cfb92f7a4f0437ad8d"} Apr 17 10:19:09.233273 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:09.233239 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-cnksf" event={"ID":"42a98ea5-8626-48d5-bb6b-80eb251d2e33","Type":"ContainerStarted","Data":"cce9d334a071a449e8fbc4aebb1be2be67df358361220a3561f6fa3e6982cc32"} Apr 17 10:19:09.235024 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:09.234999 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-mj7qr" event={"ID":"2e326b9d-2472-46f0-9332-42095d7aac7f","Type":"ContainerStarted","Data":"35530a9671b1af11258b9f0a792646bc1087974a18ee816b721bee6fe81f1f59"} Apr 17 10:19:09.235155 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:09.235030 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-mj7qr" event={"ID":"2e326b9d-2472-46f0-9332-42095d7aac7f","Type":"ContainerStarted","Data":"3bdfb67e04b5e5d4ab32fe5b9d719c33d335e851ca956df760ba44238afa6811"} Apr 17 10:19:09.777840 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:09.775154 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56255f22-7072-487b-8723-978c296878fb-metrics-certs\") pod \"network-metrics-daemon-z6grc\" (UID: \"56255f22-7072-487b-8723-978c296878fb\") " pod="openshift-multus/network-metrics-daemon-z6grc" Apr 17 10:19:09.780312 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:09.780282 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56255f22-7072-487b-8723-978c296878fb-metrics-certs\") pod \"network-metrics-daemon-z6grc\" (UID: \"56255f22-7072-487b-8723-978c296878fb\") " 
pod="openshift-multus/network-metrics-daemon-z6grc" Apr 17 10:19:09.876120 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:09.876078 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rttp8\" (UniqueName: \"kubernetes.io/projected/1abeaef1-047c-4fff-a659-456e05294f94-kube-api-access-rttp8\") pod \"network-check-target-gftlr\" (UID: \"1abeaef1-047c-4fff-a659-456e05294f94\") " pod="openshift-network-diagnostics/network-check-target-gftlr" Apr 17 10:19:09.879074 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:09.879052 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rttp8\" (UniqueName: \"kubernetes.io/projected/1abeaef1-047c-4fff-a659-456e05294f94-kube-api-access-rttp8\") pod \"network-check-target-gftlr\" (UID: \"1abeaef1-047c-4fff-a659-456e05294f94\") " pod="openshift-network-diagnostics/network-check-target-gftlr" Apr 17 10:19:09.889644 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:09.889622 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-gftlr" Apr 17 10:19:09.897271 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:09.897250 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z6grc" Apr 17 10:19:11.133342 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:11.133313 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-gftlr"] Apr 17 10:19:11.138427 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:11.138405 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-z6grc"] Apr 17 10:19:11.246421 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:19:11.246393 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1abeaef1_047c_4fff_a659_456e05294f94.slice/crio-4e2ba5b81716a36c54e9e9d163973b8777201234da0f0a262f7b1872c0eca92d WatchSource:0}: Error finding container 4e2ba5b81716a36c54e9e9d163973b8777201234da0f0a262f7b1872c0eca92d: Status 404 returned error can't find the container with id 4e2ba5b81716a36c54e9e9d163973b8777201234da0f0a262f7b1872c0eca92d Apr 17 10:19:11.258135 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:19:11.258106 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56255f22_7072_487b_8723_978c296878fb.slice/crio-bb65b1e28b7781002a231764f73bafd3b426844767e54273e9a89ac4b337d9a4 WatchSource:0}: Error finding container bb65b1e28b7781002a231764f73bafd3b426844767e54273e9a89ac4b337d9a4: Status 404 returned error can't find the container with id bb65b1e28b7781002a231764f73bafd3b426844767e54273e9a89ac4b337d9a4 Apr 17 10:19:12.243847 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:12.243602 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-gftlr" event={"ID":"1abeaef1-047c-4fff-a659-456e05294f94","Type":"ContainerStarted","Data":"4e2ba5b81716a36c54e9e9d163973b8777201234da0f0a262f7b1872c0eca92d"} Apr 17 10:19:12.244615 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:12.244588 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-z6grc" event={"ID":"56255f22-7072-487b-8723-978c296878fb","Type":"ContainerStarted","Data":"bb65b1e28b7781002a231764f73bafd3b426844767e54273e9a89ac4b337d9a4"} Apr 17 
10:19:13.250619 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:13.250581 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-cnksf" event={"ID":"42a98ea5-8626-48d5-bb6b-80eb251d2e33","Type":"ContainerStarted","Data":"36073fc239ce425c76954d454c2c74f84db1619a2ea0f8506c087fdb957cce1d"} Apr 17 10:19:13.252773 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:13.252731 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-mj7qr" event={"ID":"2e326b9d-2472-46f0-9332-42095d7aac7f","Type":"ContainerStarted","Data":"a8d33b77b288aca9bbd272dab271fec6f07c40c35ed2e1f4c7a09d07d5e4ed0b"} Apr 17 10:19:13.255940 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:13.255908 2569 generic.go:358] "Generic (PLEG): container finished" podID="4cac3107-7535-4daf-bf6b-d5bf95844303" containerID="652f52a9962d877892ac5d4ba687907afbf8a4309e373a9624144aec23b04d3b" exitCode=0 Apr 17 10:19:13.256084 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:13.256061 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2t59m" event={"ID":"4cac3107-7535-4daf-bf6b-d5bf95844303","Type":"ContainerDied","Data":"652f52a9962d877892ac5d4ba687907afbf8a4309e373a9624144aec23b04d3b"} Apr 17 10:19:13.258437 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:13.258415 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mm9sd" event={"ID":"4d2301e1-52d7-4d39-9acf-d767734726a3","Type":"ContainerStarted","Data":"e9dadc19b6c6e5bc318ceda35e055c9856e2bda3274817ddd9b315346212585b"} Apr 17 10:19:13.258576 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:13.258560 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mm9sd" event={"ID":"4d2301e1-52d7-4d39-9acf-d767734726a3","Type":"ContainerStarted","Data":"f1906e43e0476f5d95bd57442e366efd556d64476625f8c6556bbb6f94c62686"} Apr 17 10:19:13.258692 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:13.258674 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-mm9sd" Apr 17 10:19:13.267124 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:13.267076 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-cnksf" podStartSLOduration=1.7769445529999999 podStartE2EDuration="6.267062143s" podCreationTimestamp="2026-04-17 10:19:07 +0000 UTC" firstStartedPulling="2026-04-17 10:19:08.286304808 +0000 UTC m=+64.945058862" lastFinishedPulling="2026-04-17 10:19:12.776422398 +0000 UTC m=+69.435176452" observedRunningTime="2026-04-17 10:19:13.265828196 +0000 UTC m=+69.924582259" watchObservedRunningTime="2026-04-17 10:19:13.267062143 +0000 UTC m=+69.925816204" Apr 17 10:19:13.310119 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:13.310073 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-mm9sd" podStartSLOduration=1.854985874 podStartE2EDuration="6.310062215s" podCreationTimestamp="2026-04-17 10:19:07 +0000 UTC" firstStartedPulling="2026-04-17 10:19:08.322911402 +0000 UTC m=+64.981665442" lastFinishedPulling="2026-04-17 10:19:12.777987729 +0000 UTC m=+69.436741783" observedRunningTime="2026-04-17 10:19:13.308997062 +0000 UTC m=+69.967751129" watchObservedRunningTime="2026-04-17 10:19:13.310062215 +0000 UTC m=+69.968816277" Apr 17 10:19:14.264934 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:14.264754 2569 kubelet.go:2569] "SyncLoop (PLEG): event 
for pod" pod="openshift-multus/multus-additional-cni-plugins-2t59m" event={"ID":"4cac3107-7535-4daf-bf6b-d5bf95844303","Type":"ContainerStarted","Data":"7500a6710d479e23af3f79b1aa6c48012df6a01e29699a26d8d7bb44fc80367d"} Apr 17 10:19:14.266324 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:14.266291 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-z6grc" event={"ID":"56255f22-7072-487b-8723-978c296878fb","Type":"ContainerStarted","Data":"05c27278d62af5ec0c43ab73fa28e3b9df0729ee8dd104ea966b88043ed2c660"} Apr 17 10:19:15.274551 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:15.274454 2569 generic.go:358] "Generic (PLEG): container finished" podID="4cac3107-7535-4daf-bf6b-d5bf95844303" containerID="7500a6710d479e23af3f79b1aa6c48012df6a01e29699a26d8d7bb44fc80367d" exitCode=0 Apr 17 10:19:15.274551 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:15.274509 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2t59m" event={"ID":"4cac3107-7535-4daf-bf6b-d5bf95844303","Type":"ContainerDied","Data":"7500a6710d479e23af3f79b1aa6c48012df6a01e29699a26d8d7bb44fc80367d"} Apr 17 10:19:15.276096 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:15.276074 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-gftlr" event={"ID":"1abeaef1-047c-4fff-a659-456e05294f94","Type":"ContainerStarted","Data":"818b6836268bc7b6bc7c1e0e7945739bc3d8e3611a6f7f1f237aa0a54cd934a3"} Apr 17 10:19:15.276196 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:15.276178 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-gftlr" Apr 17 10:19:15.278221 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:15.277660 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-z6grc" event={"ID":"56255f22-7072-487b-8723-978c296878fb","Type":"ContainerStarted","Data":"68c30ddbcb12a20b013467321ab71e472fbcad6dbea847e027efed71567e4f3b"} Apr 17 10:19:15.313425 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:15.313349 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-z6grc" podStartSLOduration=49.01892388 podStartE2EDuration="51.313332791s" podCreationTimestamp="2026-04-17 10:18:24 +0000 UTC" firstStartedPulling="2026-04-17 10:19:11.286820852 +0000 UTC m=+67.945574893" lastFinishedPulling="2026-04-17 10:19:13.581229748 +0000 UTC m=+70.239983804" observedRunningTime="2026-04-17 10:19:15.311660102 +0000 UTC m=+71.970414165" watchObservedRunningTime="2026-04-17 10:19:15.313332791 +0000 UTC m=+71.972086869" Apr 17 10:19:15.330446 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:15.330393 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-target-gftlr" podStartSLOduration=47.421635351 podStartE2EDuration="51.330377767s" podCreationTimestamp="2026-04-17 10:18:24 +0000 UTC" firstStartedPulling="2026-04-17 10:19:11.248307134 +0000 UTC m=+67.907061174" lastFinishedPulling="2026-04-17 10:19:15.157049531 +0000 UTC m=+71.815803590" observedRunningTime="2026-04-17 10:19:15.329525214 +0000 UTC m=+71.988279276" watchObservedRunningTime="2026-04-17 10:19:15.330377767 +0000 UTC m=+71.989131827" Apr 17 10:19:16.281941 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:16.281906 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-insights/insights-runtime-extractor-mj7qr" event={"ID":"2e326b9d-2472-46f0-9332-42095d7aac7f","Type":"ContainerStarted","Data":"de211f5d376f269d58eb23392831b7f347c52c67e71f2b1f779d88a6a49fedaa"} Apr 17 10:19:16.284994 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:16.284962 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2t59m" event={"ID":"4cac3107-7535-4daf-bf6b-d5bf95844303","Type":"ContainerStarted","Data":"9ecc5f911f7bf3fb531fae2742ebb00c55d7aaf55b4e777b37411b1eb10bd39a"} Apr 17 10:19:16.299145 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:16.299083 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-runtime-extractor-mj7qr" podStartSLOduration=2.499652392 podStartE2EDuration="9.299066576s" podCreationTimestamp="2026-04-17 10:19:07 +0000 UTC" firstStartedPulling="2026-04-17 10:19:08.396506062 +0000 UTC m=+65.055260118" lastFinishedPulling="2026-04-17 10:19:15.195920247 +0000 UTC m=+71.854674302" observedRunningTime="2026-04-17 10:19:16.297862907 +0000 UTC m=+72.956616969" watchObservedRunningTime="2026-04-17 10:19:16.299066576 +0000 UTC m=+72.957820637" Apr 17 10:19:16.316206 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:16.316167 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-2t59m" podStartSLOduration=18.841269298 podStartE2EDuration="53.316155766s" podCreationTimestamp="2026-04-17 10:18:23 +0000 UTC" firstStartedPulling="2026-04-17 10:18:38.178061336 +0000 UTC m=+34.836815375" lastFinishedPulling="2026-04-17 10:19:12.6529478 +0000 UTC m=+69.311701843" observedRunningTime="2026-04-17 10:19:16.315795662 +0000 UTC m=+72.974549724" watchObservedRunningTime="2026-04-17 10:19:16.316155766 +0000 UTC m=+72.974909828" Apr 17 10:19:23.268440 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:23.268407 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-mm9sd" Apr 17 10:19:34.232814 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:34.232788 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qshzm" Apr 17 10:19:46.287560 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:19:46.287529 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-gftlr" Apr 17 10:20:32.089763 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:32.089726 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kube-system/global-pull-secret-syncer-lmj77"] Apr 17 10:20:32.092546 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:32.092517 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-lmj77" Apr 17 10:20:32.097192 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:32.097174 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"original-pull-secret\"" Apr 17 10:20:32.112961 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:32.112938 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-lmj77"] Apr 17 10:20:32.172122 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:32.172094 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/86b2b97b-7175-48cb-822e-123ce9badea3-original-pull-secret\") pod \"global-pull-secret-syncer-lmj77\" (UID: \"86b2b97b-7175-48cb-822e-123ce9badea3\") " pod="kube-system/global-pull-secret-syncer-lmj77" Apr 17 10:20:32.172254 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:32.172133 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/86b2b97b-7175-48cb-822e-123ce9badea3-kubelet-config\") pod \"global-pull-secret-syncer-lmj77\" (UID: \"86b2b97b-7175-48cb-822e-123ce9badea3\") " pod="kube-system/global-pull-secret-syncer-lmj77" Apr 17 10:20:32.172254 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:32.172158 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/86b2b97b-7175-48cb-822e-123ce9badea3-dbus\") pod \"global-pull-secret-syncer-lmj77\" (UID: \"86b2b97b-7175-48cb-822e-123ce9badea3\") " pod="kube-system/global-pull-secret-syncer-lmj77" Apr 17 10:20:32.272950 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:32.272922 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/86b2b97b-7175-48cb-822e-123ce9badea3-original-pull-secret\") pod \"global-pull-secret-syncer-lmj77\" (UID: \"86b2b97b-7175-48cb-822e-123ce9badea3\") " pod="kube-system/global-pull-secret-syncer-lmj77" Apr 17 10:20:32.273078 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:32.272959 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/86b2b97b-7175-48cb-822e-123ce9badea3-kubelet-config\") pod \"global-pull-secret-syncer-lmj77\" (UID: \"86b2b97b-7175-48cb-822e-123ce9badea3\") " pod="kube-system/global-pull-secret-syncer-lmj77" Apr 17 10:20:32.273078 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:32.272984 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/86b2b97b-7175-48cb-822e-123ce9badea3-dbus\") pod \"global-pull-secret-syncer-lmj77\" (UID: \"86b2b97b-7175-48cb-822e-123ce9badea3\") " pod="kube-system/global-pull-secret-syncer-lmj77" Apr 17 10:20:32.273078 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:32.273056 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/86b2b97b-7175-48cb-822e-123ce9badea3-kubelet-config\") pod \"global-pull-secret-syncer-lmj77\" (UID: \"86b2b97b-7175-48cb-822e-123ce9badea3\") " pod="kube-system/global-pull-secret-syncer-lmj77" Apr 17 10:20:32.273192 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:32.273115 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"dbus\" (UniqueName: \"kubernetes.io/host-path/86b2b97b-7175-48cb-822e-123ce9badea3-dbus\") pod \"global-pull-secret-syncer-lmj77\" (UID: \"86b2b97b-7175-48cb-822e-123ce9badea3\") " pod="kube-system/global-pull-secret-syncer-lmj77" Apr 17 10:20:32.276239 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:32.276221 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/86b2b97b-7175-48cb-822e-123ce9badea3-original-pull-secret\") pod \"global-pull-secret-syncer-lmj77\" (UID: \"86b2b97b-7175-48cb-822e-123ce9badea3\") " pod="kube-system/global-pull-secret-syncer-lmj77" Apr 17 10:20:32.401475 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:32.401410 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-lmj77" Apr 17 10:20:32.533533 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:32.533502 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-lmj77"] Apr 17 10:20:32.536943 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:20:32.536919 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86b2b97b_7175_48cb_822e_123ce9badea3.slice/crio-d3cd00de697c29f2896df0e2dcf2beb66090eb2279887d7c1f770618626a88cd WatchSource:0}: Error finding container d3cd00de697c29f2896df0e2dcf2beb66090eb2279887d7c1f770618626a88cd: Status 404 returned error can't find the container with id d3cd00de697c29f2896df0e2dcf2beb66090eb2279887d7c1f770618626a88cd Apr 17 10:20:33.490045 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:33.490002 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-lmj77" event={"ID":"86b2b97b-7175-48cb-822e-123ce9badea3","Type":"ContainerStarted","Data":"d3cd00de697c29f2896df0e2dcf2beb66090eb2279887d7c1f770618626a88cd"} Apr 17 10:20:36.499074 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:36.499045 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-lmj77" event={"ID":"86b2b97b-7175-48cb-822e-123ce9badea3","Type":"ContainerStarted","Data":"2ace45a9ea48023abe66657573fb709245f5d1f2de5705c2e8ec7b8b7cf22354"} Apr 17 10:20:36.512715 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:36.512653 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/global-pull-secret-syncer-lmj77" podStartSLOduration=0.665300813 podStartE2EDuration="4.51263771s" podCreationTimestamp="2026-04-17 10:20:32 +0000 UTC" firstStartedPulling="2026-04-17 10:20:32.538426781 +0000 UTC m=+149.197180826" lastFinishedPulling="2026-04-17 10:20:36.38576368 +0000 UTC m=+153.044517723" observedRunningTime="2026-04-17 10:20:36.51248704 +0000 UTC m=+153.171241104" watchObservedRunningTime="2026-04-17 10:20:36.51263771 +0000 UTC m=+153.171391777" Apr 17 10:20:58.210596 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.210514 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-57b6c9f54f-7vstr"] Apr 17 10:20:58.213317 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.213302 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-57b6c9f54f-7vstr" Apr 17 10:20:58.215823 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.215804 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"open-cluster-management-image-pull-credentials\"" Apr 17 10:20:58.216349 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.216335 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"managed-serviceaccount-hub-kubeconfig\"" Apr 17 10:20:58.216569 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.216555 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"open-cluster-management-agent-addon\"/\"openshift-service-ca.crt\"" Apr 17 10:20:58.216891 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.216872 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"open-cluster-management-agent-addon\"/\"kube-root-ca.crt\"" Apr 17 10:20:58.217584 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.217546 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"managed-serviceaccount-dockercfg-d68dv\"" Apr 17 10:20:58.225784 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.225762 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-57b6c9f54f-7vstr"] Apr 17 10:20:58.225972 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.225945 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p6b2\" (UniqueName: \"kubernetes.io/projected/503b82d6-10b4-4a66-92d0-39f16a2ca8e6-kube-api-access-5p6b2\") pod \"managed-serviceaccount-addon-agent-57b6c9f54f-7vstr\" (UID: \"503b82d6-10b4-4a66-92d0-39f16a2ca8e6\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-57b6c9f54f-7vstr" Apr 17 10:20:58.226082 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.226067 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/503b82d6-10b4-4a66-92d0-39f16a2ca8e6-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-57b6c9f54f-7vstr\" (UID: \"503b82d6-10b4-4a66-92d0-39f16a2ca8e6\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-57b6c9f54f-7vstr" Apr 17 10:20:58.297216 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.297194 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/cluster-proxy-proxy-agent-654fcd74fb-h584w"] Apr 17 10:20:58.300110 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.300095 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-654fcd74fb-h584w" Apr 17 10:20:58.303240 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.303222 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-service-proxy-server-certificates\"" Apr 17 10:20:58.303332 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.303285 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-open-cluster-management.io-proxy-agent-signer-client-cert\"" Apr 17 10:20:58.303416 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.303347 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-ca\"" Apr 17 10:20:58.303416 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.303392 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-hub-kubeconfig\"" Apr 17 10:20:58.312285 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.312265 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/cluster-proxy-proxy-agent-654fcd74fb-h584w"] Apr 17 10:20:58.327375 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.327332 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5p6b2\" (UniqueName: \"kubernetes.io/projected/503b82d6-10b4-4a66-92d0-39f16a2ca8e6-kube-api-access-5p6b2\") pod \"managed-serviceaccount-addon-agent-57b6c9f54f-7vstr\" (UID: \"503b82d6-10b4-4a66-92d0-39f16a2ca8e6\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-57b6c9f54f-7vstr" Apr 17 10:20:58.327473 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.327398 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-proxy-server-cert\" (UniqueName: \"kubernetes.io/secret/d477de2b-c2e7-4b15-a142-770a4afaa97a-service-proxy-server-cert\") pod \"cluster-proxy-proxy-agent-654fcd74fb-h584w\" (UID: \"d477de2b-c2e7-4b15-a142-770a4afaa97a\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-654fcd74fb-h584w" Apr 17 10:20:58.327473 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.327434 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/503b82d6-10b4-4a66-92d0-39f16a2ca8e6-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-57b6c9f54f-7vstr\" (UID: \"503b82d6-10b4-4a66-92d0-39f16a2ca8e6\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-57b6c9f54f-7vstr" Apr 17 10:20:58.327588 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.327472 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/d477de2b-c2e7-4b15-a142-770a4afaa97a-hub-kubeconfig\") pod \"cluster-proxy-proxy-agent-654fcd74fb-h584w\" (UID: \"d477de2b-c2e7-4b15-a142-770a4afaa97a\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-654fcd74fb-h584w" Apr 17 10:20:58.327588 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.327498 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ocpservice-ca\" (UniqueName: \"kubernetes.io/configmap/d477de2b-c2e7-4b15-a142-770a4afaa97a-ocpservice-ca\") pod 
\"cluster-proxy-proxy-agent-654fcd74fb-h584w\" (UID: \"d477de2b-c2e7-4b15-a142-770a4afaa97a\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-654fcd74fb-h584w" Apr 17 10:20:58.327588 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.327538 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs587\" (UniqueName: \"kubernetes.io/projected/d477de2b-c2e7-4b15-a142-770a4afaa97a-kube-api-access-zs587\") pod \"cluster-proxy-proxy-agent-654fcd74fb-h584w\" (UID: \"d477de2b-c2e7-4b15-a142-770a4afaa97a\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-654fcd74fb-h584w" Apr 17 10:20:58.327705 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.327596 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hub\" (UniqueName: \"kubernetes.io/secret/d477de2b-c2e7-4b15-a142-770a4afaa97a-hub\") pod \"cluster-proxy-proxy-agent-654fcd74fb-h584w\" (UID: \"d477de2b-c2e7-4b15-a142-770a4afaa97a\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-654fcd74fb-h584w" Apr 17 10:20:58.327705 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.327625 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca\" (UniqueName: \"kubernetes.io/secret/d477de2b-c2e7-4b15-a142-770a4afaa97a-ca\") pod \"cluster-proxy-proxy-agent-654fcd74fb-h584w\" (UID: \"d477de2b-c2e7-4b15-a142-770a4afaa97a\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-654fcd74fb-h584w" Apr 17 10:20:58.329732 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.329713 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/503b82d6-10b4-4a66-92d0-39f16a2ca8e6-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-57b6c9f54f-7vstr\" (UID: \"503b82d6-10b4-4a66-92d0-39f16a2ca8e6\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-57b6c9f54f-7vstr" Apr 17 10:20:58.336493 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.336466 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p6b2\" (UniqueName: \"kubernetes.io/projected/503b82d6-10b4-4a66-92d0-39f16a2ca8e6-kube-api-access-5p6b2\") pod \"managed-serviceaccount-addon-agent-57b6c9f54f-7vstr\" (UID: \"503b82d6-10b4-4a66-92d0-39f16a2ca8e6\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-57b6c9f54f-7vstr" Apr 17 10:20:58.385712 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.385687 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-6fcd7f5db4-w8hkn"] Apr 17 10:20:58.388497 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.388483 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6fcd7f5db4-w8hkn" Apr 17 10:20:58.391540 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.391523 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"work-manager-hub-kubeconfig\"" Apr 17 10:20:58.403527 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.403500 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-6fcd7f5db4-w8hkn"] Apr 17 10:20:58.428846 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.428828 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ocpservice-ca\" (UniqueName: \"kubernetes.io/configmap/d477de2b-c2e7-4b15-a142-770a4afaa97a-ocpservice-ca\") pod \"cluster-proxy-proxy-agent-654fcd74fb-h584w\" (UID: \"d477de2b-c2e7-4b15-a142-770a4afaa97a\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-654fcd74fb-h584w" Apr 17 10:20:58.428948 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.428860 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zs587\" (UniqueName: \"kubernetes.io/projected/d477de2b-c2e7-4b15-a142-770a4afaa97a-kube-api-access-zs587\") pod \"cluster-proxy-proxy-agent-654fcd74fb-h584w\" (UID: \"d477de2b-c2e7-4b15-a142-770a4afaa97a\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-654fcd74fb-h584w" Apr 17 10:20:58.428948 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.428882 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hub\" (UniqueName: \"kubernetes.io/secret/d477de2b-c2e7-4b15-a142-770a4afaa97a-hub\") pod \"cluster-proxy-proxy-agent-654fcd74fb-h584w\" (UID: \"d477de2b-c2e7-4b15-a142-770a4afaa97a\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-654fcd74fb-h584w" Apr 17 10:20:58.428948 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.428906 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g7ks\" (UniqueName: \"kubernetes.io/projected/a218800f-9f30-4cdf-8abe-593bb2f2a0ff-kube-api-access-5g7ks\") pod \"klusterlet-addon-workmgr-6fcd7f5db4-w8hkn\" (UID: \"a218800f-9f30-4cdf-8abe-593bb2f2a0ff\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6fcd7f5db4-w8hkn" Apr 17 10:20:58.428948 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.428939 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca\" (UniqueName: \"kubernetes.io/secret/d477de2b-c2e7-4b15-a142-770a4afaa97a-ca\") pod \"cluster-proxy-proxy-agent-654fcd74fb-h584w\" (UID: \"d477de2b-c2e7-4b15-a142-770a4afaa97a\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-654fcd74fb-h584w" Apr 17 10:20:58.429169 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.429072 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a218800f-9f30-4cdf-8abe-593bb2f2a0ff-tmp\") pod \"klusterlet-addon-workmgr-6fcd7f5db4-w8hkn\" (UID: \"a218800f-9f30-4cdf-8abe-593bb2f2a0ff\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6fcd7f5db4-w8hkn" Apr 17 10:20:58.429169 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.429101 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"klusterlet-config\" (UniqueName: 
\"kubernetes.io/secret/a218800f-9f30-4cdf-8abe-593bb2f2a0ff-klusterlet-config\") pod \"klusterlet-addon-workmgr-6fcd7f5db4-w8hkn\" (UID: \"a218800f-9f30-4cdf-8abe-593bb2f2a0ff\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6fcd7f5db4-w8hkn" Apr 17 10:20:58.429169 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.429151 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-proxy-server-cert\" (UniqueName: \"kubernetes.io/secret/d477de2b-c2e7-4b15-a142-770a4afaa97a-service-proxy-server-cert\") pod \"cluster-proxy-proxy-agent-654fcd74fb-h584w\" (UID: \"d477de2b-c2e7-4b15-a142-770a4afaa97a\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-654fcd74fb-h584w" Apr 17 10:20:58.429322 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.429184 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/d477de2b-c2e7-4b15-a142-770a4afaa97a-hub-kubeconfig\") pod \"cluster-proxy-proxy-agent-654fcd74fb-h584w\" (UID: \"d477de2b-c2e7-4b15-a142-770a4afaa97a\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-654fcd74fb-h584w" Apr 17 10:20:58.429612 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.429589 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ocpservice-ca\" (UniqueName: \"kubernetes.io/configmap/d477de2b-c2e7-4b15-a142-770a4afaa97a-ocpservice-ca\") pod \"cluster-proxy-proxy-agent-654fcd74fb-h584w\" (UID: \"d477de2b-c2e7-4b15-a142-770a4afaa97a\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-654fcd74fb-h584w" Apr 17 10:20:58.431172 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.431147 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca\" (UniqueName: \"kubernetes.io/secret/d477de2b-c2e7-4b15-a142-770a4afaa97a-ca\") pod \"cluster-proxy-proxy-agent-654fcd74fb-h584w\" (UID: \"d477de2b-c2e7-4b15-a142-770a4afaa97a\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-654fcd74fb-h584w" Apr 17 10:20:58.431370 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.431336 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-proxy-server-cert\" (UniqueName: \"kubernetes.io/secret/d477de2b-c2e7-4b15-a142-770a4afaa97a-service-proxy-server-cert\") pod \"cluster-proxy-proxy-agent-654fcd74fb-h584w\" (UID: \"d477de2b-c2e7-4b15-a142-770a4afaa97a\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-654fcd74fb-h584w" Apr 17 10:20:58.431490 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.431471 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/d477de2b-c2e7-4b15-a142-770a4afaa97a-hub-kubeconfig\") pod \"cluster-proxy-proxy-agent-654fcd74fb-h584w\" (UID: \"d477de2b-c2e7-4b15-a142-770a4afaa97a\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-654fcd74fb-h584w" Apr 17 10:20:58.431490 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.431479 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hub\" (UniqueName: \"kubernetes.io/secret/d477de2b-c2e7-4b15-a142-770a4afaa97a-hub\") pod \"cluster-proxy-proxy-agent-654fcd74fb-h584w\" (UID: \"d477de2b-c2e7-4b15-a142-770a4afaa97a\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-654fcd74fb-h584w" Apr 17 10:20:58.438263 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.438241 2569 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zs587\" (UniqueName: \"kubernetes.io/projected/d477de2b-c2e7-4b15-a142-770a4afaa97a-kube-api-access-zs587\") pod \"cluster-proxy-proxy-agent-654fcd74fb-h584w\" (UID: \"d477de2b-c2e7-4b15-a142-770a4afaa97a\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-654fcd74fb-h584w" Apr 17 10:20:58.530303 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.530281 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5g7ks\" (UniqueName: \"kubernetes.io/projected/a218800f-9f30-4cdf-8abe-593bb2f2a0ff-kube-api-access-5g7ks\") pod \"klusterlet-addon-workmgr-6fcd7f5db4-w8hkn\" (UID: \"a218800f-9f30-4cdf-8abe-593bb2f2a0ff\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6fcd7f5db4-w8hkn" Apr 17 10:20:58.530425 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.530316 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a218800f-9f30-4cdf-8abe-593bb2f2a0ff-tmp\") pod \"klusterlet-addon-workmgr-6fcd7f5db4-w8hkn\" (UID: \"a218800f-9f30-4cdf-8abe-593bb2f2a0ff\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6fcd7f5db4-w8hkn" Apr 17 10:20:58.530425 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.530343 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/a218800f-9f30-4cdf-8abe-593bb2f2a0ff-klusterlet-config\") pod \"klusterlet-addon-workmgr-6fcd7f5db4-w8hkn\" (UID: \"a218800f-9f30-4cdf-8abe-593bb2f2a0ff\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6fcd7f5db4-w8hkn" Apr 17 10:20:58.530638 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.530613 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a218800f-9f30-4cdf-8abe-593bb2f2a0ff-tmp\") pod \"klusterlet-addon-workmgr-6fcd7f5db4-w8hkn\" (UID: \"a218800f-9f30-4cdf-8abe-593bb2f2a0ff\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6fcd7f5db4-w8hkn" Apr 17 10:20:58.532305 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.532286 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/a218800f-9f30-4cdf-8abe-593bb2f2a0ff-klusterlet-config\") pod \"klusterlet-addon-workmgr-6fcd7f5db4-w8hkn\" (UID: \"a218800f-9f30-4cdf-8abe-593bb2f2a0ff\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6fcd7f5db4-w8hkn" Apr 17 10:20:58.537134 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.537120 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-57b6c9f54f-7vstr" Apr 17 10:20:58.539265 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.539250 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g7ks\" (UniqueName: \"kubernetes.io/projected/a218800f-9f30-4cdf-8abe-593bb2f2a0ff-kube-api-access-5g7ks\") pod \"klusterlet-addon-workmgr-6fcd7f5db4-w8hkn\" (UID: \"a218800f-9f30-4cdf-8abe-593bb2f2a0ff\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6fcd7f5db4-w8hkn" Apr 17 10:20:58.609416 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.608749 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-654fcd74fb-h584w" Apr 17 10:20:58.650189 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.650160 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-57b6c9f54f-7vstr"] Apr 17 10:20:58.653749 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:20:58.653713 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod503b82d6_10b4_4a66_92d0_39f16a2ca8e6.slice/crio-fc8d24ea6e9bc3ffecc5632abfb2828b00f76ddc7e1a8e5b6b7727680131bf57 WatchSource:0}: Error finding container fc8d24ea6e9bc3ffecc5632abfb2828b00f76ddc7e1a8e5b6b7727680131bf57: Status 404 returned error can't find the container with id fc8d24ea6e9bc3ffecc5632abfb2828b00f76ddc7e1a8e5b6b7727680131bf57 Apr 17 10:20:58.707663 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.707638 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6fcd7f5db4-w8hkn" Apr 17 10:20:58.724548 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.724514 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/cluster-proxy-proxy-agent-654fcd74fb-h584w"] Apr 17 10:20:58.728395 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:20:58.728371 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd477de2b_c2e7_4b15_a142_770a4afaa97a.slice/crio-169b96b96eaaec9f5dbed56f9bcc62c7566fc2296263aa7db4e1a054dc86b9ba WatchSource:0}: Error finding container 169b96b96eaaec9f5dbed56f9bcc62c7566fc2296263aa7db4e1a054dc86b9ba: Status 404 returned error can't find the container with id 169b96b96eaaec9f5dbed56f9bcc62c7566fc2296263aa7db4e1a054dc86b9ba Apr 17 10:20:58.821289 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:58.821227 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-6fcd7f5db4-w8hkn"] Apr 17 10:20:58.824646 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:20:58.824626 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda218800f_9f30_4cdf_8abe_593bb2f2a0ff.slice/crio-f2df5ad4e98d83d10122b0c4d43c403464b84ad26c66fb66ab27b544ac7a11b3 WatchSource:0}: Error finding container f2df5ad4e98d83d10122b0c4d43c403464b84ad26c66fb66ab27b544ac7a11b3: Status 404 returned error can't find the container with id f2df5ad4e98d83d10122b0c4d43c403464b84ad26c66fb66ab27b544ac7a11b3 Apr 17 10:20:59.564727 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:59.564680 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-57b6c9f54f-7vstr" event={"ID":"503b82d6-10b4-4a66-92d0-39f16a2ca8e6","Type":"ContainerStarted","Data":"fc8d24ea6e9bc3ffecc5632abfb2828b00f76ddc7e1a8e5b6b7727680131bf57"} Apr 17 10:20:59.567063 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:59.567010 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6fcd7f5db4-w8hkn" event={"ID":"a218800f-9f30-4cdf-8abe-593bb2f2a0ff","Type":"ContainerStarted","Data":"f2df5ad4e98d83d10122b0c4d43c403464b84ad26c66fb66ab27b544ac7a11b3"} Apr 17 10:20:59.570742 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:20:59.570721 2569 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-654fcd74fb-h584w" event={"ID":"d477de2b-c2e7-4b15-a142-770a4afaa97a","Type":"ContainerStarted","Data":"169b96b96eaaec9f5dbed56f9bcc62c7566fc2296263aa7db4e1a054dc86b9ba"} Apr 17 10:21:04.588424 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:04.588387 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-57b6c9f54f-7vstr" event={"ID":"503b82d6-10b4-4a66-92d0-39f16a2ca8e6","Type":"ContainerStarted","Data":"b6a0c8338cae222977c0971e5878fad05d5d8412bebbed0e2694565b4d5ab92e"} Apr 17 10:21:04.589919 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:04.589872 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6fcd7f5db4-w8hkn" event={"ID":"a218800f-9f30-4cdf-8abe-593bb2f2a0ff","Type":"ContainerStarted","Data":"e16bb79c0057e1fde0b3b4fac4f1892096be89a975df5b51da3e3fc89d0cf777"} Apr 17 10:21:04.590119 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:04.590094 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6fcd7f5db4-w8hkn" Apr 17 10:21:04.591335 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:04.591310 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-654fcd74fb-h584w" event={"ID":"d477de2b-c2e7-4b15-a142-770a4afaa97a","Type":"ContainerStarted","Data":"63553c9e6c7c986a1cc08b009063b133652004161dfc61475a2d9c046e51009e"} Apr 17 10:21:04.592125 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:04.592080 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6fcd7f5db4-w8hkn" Apr 17 10:21:04.603253 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:04.603187 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-57b6c9f54f-7vstr" podStartSLOduration=1.591359706 podStartE2EDuration="6.603172702s" podCreationTimestamp="2026-04-17 10:20:58 +0000 UTC" firstStartedPulling="2026-04-17 10:20:58.656596435 +0000 UTC m=+175.315350476" lastFinishedPulling="2026-04-17 10:21:03.66840943 +0000 UTC m=+180.327163472" observedRunningTime="2026-04-17 10:21:04.602326843 +0000 UTC m=+181.261080905" watchObservedRunningTime="2026-04-17 10:21:04.603172702 +0000 UTC m=+181.261926765" Apr 17 10:21:04.620071 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:04.620026 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6fcd7f5db4-w8hkn" podStartSLOduration=1.763139365 podStartE2EDuration="6.62001256s" podCreationTimestamp="2026-04-17 10:20:58 +0000 UTC" firstStartedPulling="2026-04-17 10:20:58.826260627 +0000 UTC m=+175.485014671" lastFinishedPulling="2026-04-17 10:21:03.68313381 +0000 UTC m=+180.341887866" observedRunningTime="2026-04-17 10:21:04.618737152 +0000 UTC m=+181.277491214" watchObservedRunningTime="2026-04-17 10:21:04.62001256 +0000 UTC m=+181.278766624" Apr 17 10:21:06.598553 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:06.598509 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-654fcd74fb-h584w" 
event={"ID":"d477de2b-c2e7-4b15-a142-770a4afaa97a","Type":"ContainerStarted","Data":"f20b551cf9cc6d4e81983d13058d59b8892d4f84c6c0bd4701a35234e0634d40"} Apr 17 10:21:06.598553 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:06.598553 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-654fcd74fb-h584w" event={"ID":"d477de2b-c2e7-4b15-a142-770a4afaa97a","Type":"ContainerStarted","Data":"6e6d63c716fa58e257b87c8bf87984a501176d90b1fa9c80484cc4d231f2300f"} Apr 17 10:21:06.616524 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:06.616481 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-654fcd74fb-h584w" podStartSLOduration=1.572470074 podStartE2EDuration="8.616469737s" podCreationTimestamp="2026-04-17 10:20:58 +0000 UTC" firstStartedPulling="2026-04-17 10:20:58.730064128 +0000 UTC m=+175.388818169" lastFinishedPulling="2026-04-17 10:21:05.774063779 +0000 UTC m=+182.432817832" observedRunningTime="2026-04-17 10:21:06.614972257 +0000 UTC m=+183.273726330" watchObservedRunningTime="2026-04-17 10:21:06.616469737 +0000 UTC m=+183.275223798" Apr 17 10:21:28.191728 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:28.191696 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-4q2vz"] Apr 17 10:21:28.194661 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:28.194640 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-4q2vz" Apr 17 10:21:28.196854 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:28.196837 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Apr 17 10:21:28.197702 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:28.197686 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Apr 17 10:21:28.197759 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:28.197712 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-6ms2r\"" Apr 17 10:21:28.204268 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:28.204247 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-4q2vz"] Apr 17 10:21:28.235992 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:28.235963 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bb9b8756-c214-4b5c-8c55-58bad7477877-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-4q2vz\" (UID: \"bb9b8756-c214-4b5c-8c55-58bad7477877\") " pod="cert-manager/cert-manager-webhook-597b96b99b-4q2vz" Apr 17 10:21:28.236104 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:28.236015 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv8tf\" (UniqueName: \"kubernetes.io/projected/bb9b8756-c214-4b5c-8c55-58bad7477877-kube-api-access-bv8tf\") pod \"cert-manager-webhook-597b96b99b-4q2vz\" (UID: \"bb9b8756-c214-4b5c-8c55-58bad7477877\") " pod="cert-manager/cert-manager-webhook-597b96b99b-4q2vz" Apr 17 10:21:28.337048 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:28.337015 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bv8tf\" (UniqueName: 
\"kubernetes.io/projected/bb9b8756-c214-4b5c-8c55-58bad7477877-kube-api-access-bv8tf\") pod \"cert-manager-webhook-597b96b99b-4q2vz\" (UID: \"bb9b8756-c214-4b5c-8c55-58bad7477877\") " pod="cert-manager/cert-manager-webhook-597b96b99b-4q2vz" Apr 17 10:21:28.337152 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:28.337062 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bb9b8756-c214-4b5c-8c55-58bad7477877-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-4q2vz\" (UID: \"bb9b8756-c214-4b5c-8c55-58bad7477877\") " pod="cert-manager/cert-manager-webhook-597b96b99b-4q2vz" Apr 17 10:21:28.345489 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:28.345469 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bb9b8756-c214-4b5c-8c55-58bad7477877-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-4q2vz\" (UID: \"bb9b8756-c214-4b5c-8c55-58bad7477877\") " pod="cert-manager/cert-manager-webhook-597b96b99b-4q2vz" Apr 17 10:21:28.345606 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:28.345590 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv8tf\" (UniqueName: \"kubernetes.io/projected/bb9b8756-c214-4b5c-8c55-58bad7477877-kube-api-access-bv8tf\") pod \"cert-manager-webhook-597b96b99b-4q2vz\" (UID: \"bb9b8756-c214-4b5c-8c55-58bad7477877\") " pod="cert-manager/cert-manager-webhook-597b96b99b-4q2vz" Apr 17 10:21:28.503144 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:28.503072 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-4q2vz" Apr 17 10:21:28.616734 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:28.616701 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-4q2vz"] Apr 17 10:21:28.620905 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:21:28.620872 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb9b8756_c214_4b5c_8c55_58bad7477877.slice/crio-af6cfce351c1c67bed7251394944e4d6587ea9b195560bfa9ae92e5c1bc67e49 WatchSource:0}: Error finding container af6cfce351c1c67bed7251394944e4d6587ea9b195560bfa9ae92e5c1bc67e49: Status 404 returned error can't find the container with id af6cfce351c1c67bed7251394944e4d6587ea9b195560bfa9ae92e5c1bc67e49 Apr 17 10:21:28.656099 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:28.656068 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-4q2vz" event={"ID":"bb9b8756-c214-4b5c-8c55-58bad7477877","Type":"ContainerStarted","Data":"af6cfce351c1c67bed7251394944e4d6587ea9b195560bfa9ae92e5c1bc67e49"} Apr 17 10:21:32.667197 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:32.667166 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-4q2vz" event={"ID":"bb9b8756-c214-4b5c-8c55-58bad7477877","Type":"ContainerStarted","Data":"9873ceae2f8eb106c0101c34429cdf228c04e0b5af4891d67c030efecd6a66dd"} Apr 17 10:21:32.667577 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:32.667271 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-597b96b99b-4q2vz" Apr 17 10:21:32.684123 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:32.684069 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="cert-manager/cert-manager-webhook-597b96b99b-4q2vz" podStartSLOduration=1.284196015 podStartE2EDuration="4.68405683s" podCreationTimestamp="2026-04-17 10:21:28 +0000 UTC" firstStartedPulling="2026-04-17 10:21:28.62259103 +0000 UTC m=+205.281345070" lastFinishedPulling="2026-04-17 10:21:32.022451831 +0000 UTC m=+208.681205885" observedRunningTime="2026-04-17 10:21:32.683234894 +0000 UTC m=+209.341988957" watchObservedRunningTime="2026-04-17 10:21:32.68405683 +0000 UTC m=+209.342810888" Apr 17 10:21:38.672159 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:38.672127 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-597b96b99b-4q2vz" Apr 17 10:21:40.347211 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:40.347175 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-759f64656b-bw5hc"] Apr 17 10:21:40.382062 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:40.382027 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-bw5hc"] Apr 17 10:21:40.382215 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:40.382138 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-759f64656b-bw5hc" Apr 17 10:21:40.384832 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:40.384813 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-gwxvv\"" Apr 17 10:21:40.522166 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:40.522134 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgqnq\" (UniqueName: \"kubernetes.io/projected/9088c9d2-3d89-4995-8130-40598929fef5-kube-api-access-sgqnq\") pod \"cert-manager-759f64656b-bw5hc\" (UID: \"9088c9d2-3d89-4995-8130-40598929fef5\") " pod="cert-manager/cert-manager-759f64656b-bw5hc" Apr 17 10:21:40.522166 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:40.522169 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9088c9d2-3d89-4995-8130-40598929fef5-bound-sa-token\") pod \"cert-manager-759f64656b-bw5hc\" (UID: \"9088c9d2-3d89-4995-8130-40598929fef5\") " pod="cert-manager/cert-manager-759f64656b-bw5hc" Apr 17 10:21:40.623045 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:40.622964 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sgqnq\" (UniqueName: \"kubernetes.io/projected/9088c9d2-3d89-4995-8130-40598929fef5-kube-api-access-sgqnq\") pod \"cert-manager-759f64656b-bw5hc\" (UID: \"9088c9d2-3d89-4995-8130-40598929fef5\") " pod="cert-manager/cert-manager-759f64656b-bw5hc" Apr 17 10:21:40.623045 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:40.623011 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9088c9d2-3d89-4995-8130-40598929fef5-bound-sa-token\") pod \"cert-manager-759f64656b-bw5hc\" (UID: \"9088c9d2-3d89-4995-8130-40598929fef5\") " pod="cert-manager/cert-manager-759f64656b-bw5hc" Apr 17 10:21:40.630766 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:40.630728 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9088c9d2-3d89-4995-8130-40598929fef5-bound-sa-token\") pod \"cert-manager-759f64656b-bw5hc\" (UID: 
\"9088c9d2-3d89-4995-8130-40598929fef5\") " pod="cert-manager/cert-manager-759f64656b-bw5hc" Apr 17 10:21:40.630881 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:40.630821 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgqnq\" (UniqueName: \"kubernetes.io/projected/9088c9d2-3d89-4995-8130-40598929fef5-kube-api-access-sgqnq\") pod \"cert-manager-759f64656b-bw5hc\" (UID: \"9088c9d2-3d89-4995-8130-40598929fef5\") " pod="cert-manager/cert-manager-759f64656b-bw5hc" Apr 17 10:21:40.691208 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:40.691184 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-759f64656b-bw5hc" Apr 17 10:21:40.804499 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:40.804443 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-bw5hc"] Apr 17 10:21:40.808808 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:21:40.808783 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9088c9d2_3d89_4995_8130_40598929fef5.slice/crio-2aa0c361e1fb2d2de486c403d1ca28f590de261d0b9fc8e8e71a61a216ad37f8 WatchSource:0}: Error finding container 2aa0c361e1fb2d2de486c403d1ca28f590de261d0b9fc8e8e71a61a216ad37f8: Status 404 returned error can't find the container with id 2aa0c361e1fb2d2de486c403d1ca28f590de261d0b9fc8e8e71a61a216ad37f8 Apr 17 10:21:41.691425 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:41.691391 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-bw5hc" event={"ID":"9088c9d2-3d89-4995-8130-40598929fef5","Type":"ContainerStarted","Data":"19468fbefb8dcc9e30a373801e6d734ae26260eaa873dc85fd445017e150f2b6"} Apr 17 10:21:41.691425 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:41.691425 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-bw5hc" event={"ID":"9088c9d2-3d89-4995-8130-40598929fef5","Type":"ContainerStarted","Data":"2aa0c361e1fb2d2de486c403d1ca28f590de261d0b9fc8e8e71a61a216ad37f8"} Apr 17 10:21:41.707114 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:21:41.707059 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-759f64656b-bw5hc" podStartSLOduration=1.707042036 podStartE2EDuration="1.707042036s" podCreationTimestamp="2026-04-17 10:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 10:21:41.705521712 +0000 UTC m=+218.364275776" watchObservedRunningTime="2026-04-17 10:21:41.707042036 +0000 UTC m=+218.365796100" Apr 17 10:23:03.844974 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:23:03.844930 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qshzm_2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5/ovn-acl-logging/0.log" Apr 17 10:23:03.846388 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:23:03.846345 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qshzm_2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5/ovn-acl-logging/0.log" Apr 17 10:23:03.847811 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:23:03.847789 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal_89fe943de065f151bde50a0b04d91a20/kube-rbac-proxy-crio/1.log" Apr 17 10:23:03.848893 ip-10-0-136-48 
kubenswrapper[2569]: I0417 10:23:03.848871 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal_89fe943de065f151bde50a0b04d91a20/kube-rbac-proxy-crio/1.log" Apr 17 10:23:03.850019 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:23:03.849997 2569 kubelet.go:1628] "Image garbage collection succeeded" Apr 17 10:28:03.862657 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:28:03.862623 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qshzm_2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5/ovn-acl-logging/0.log" Apr 17 10:28:03.863215 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:28:03.862963 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qshzm_2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5/ovn-acl-logging/0.log" Apr 17 10:28:03.864681 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:28:03.864657 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal_89fe943de065f151bde50a0b04d91a20/kube-rbac-proxy-crio/1.log" Apr 17 10:28:03.865115 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:28:03.865086 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal_89fe943de065f151bde50a0b04d91a20/kube-rbac-proxy-crio/1.log" Apr 17 10:29:21.631200 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:29:21.631166 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["test-ns-c27s9/test-trainjob-m6npq-node-0-0-ngvlg"] Apr 17 10:29:21.636119 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:29:21.634766 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="test-ns-c27s9/test-trainjob-m6npq-node-0-0-ngvlg" Apr 17 10:29:21.637633 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:29:21.637612 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"test-ns-c27s9\"/\"default-dockercfg-5ffjc\"" Apr 17 10:29:21.637867 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:29:21.637847 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"test-ns-c27s9\"/\"openshift-service-ca.crt\"" Apr 17 10:29:21.638515 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:29:21.638502 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"test-ns-c27s9\"/\"kube-root-ca.crt\"" Apr 17 10:29:21.648534 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:29:21.648476 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["test-ns-c27s9/test-trainjob-m6npq-node-0-0-ngvlg"] Apr 17 10:29:21.681909 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:29:21.681888 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg245\" (UniqueName: \"kubernetes.io/projected/32eb4927-5f77-4599-b829-de6f9f57c2a4-kube-api-access-lg245\") pod \"test-trainjob-m6npq-node-0-0-ngvlg\" (UID: \"32eb4927-5f77-4599-b829-de6f9f57c2a4\") " pod="test-ns-c27s9/test-trainjob-m6npq-node-0-0-ngvlg" Apr 17 10:29:21.782560 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:29:21.782539 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lg245\" (UniqueName: \"kubernetes.io/projected/32eb4927-5f77-4599-b829-de6f9f57c2a4-kube-api-access-lg245\") pod \"test-trainjob-m6npq-node-0-0-ngvlg\" (UID: \"32eb4927-5f77-4599-b829-de6f9f57c2a4\") " pod="test-ns-c27s9/test-trainjob-m6npq-node-0-0-ngvlg" Apr 17 10:29:21.790817 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:29:21.790793 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg245\" (UniqueName: \"kubernetes.io/projected/32eb4927-5f77-4599-b829-de6f9f57c2a4-kube-api-access-lg245\") pod \"test-trainjob-m6npq-node-0-0-ngvlg\" (UID: \"32eb4927-5f77-4599-b829-de6f9f57c2a4\") " pod="test-ns-c27s9/test-trainjob-m6npq-node-0-0-ngvlg" Apr 17 10:29:21.944032 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:29:21.943983 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="test-ns-c27s9/test-trainjob-m6npq-node-0-0-ngvlg" Apr 17 10:29:22.058848 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:29:22.058816 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["test-ns-c27s9/test-trainjob-m6npq-node-0-0-ngvlg"] Apr 17 10:29:22.061573 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:29:22.061547 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32eb4927_5f77_4599_b829_de6f9f57c2a4.slice/crio-d3929b46a7cf4eaa16175d1ca10a9a14ecdc1feaf5d9d6c5bcd7a364c1725d00 WatchSource:0}: Error finding container d3929b46a7cf4eaa16175d1ca10a9a14ecdc1feaf5d9d6c5bcd7a364c1725d00: Status 404 returned error can't find the container with id d3929b46a7cf4eaa16175d1ca10a9a14ecdc1feaf5d9d6c5bcd7a364c1725d00 Apr 17 10:29:22.063266 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:29:22.063252 2569 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 17 10:29:22.879148 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:29:22.879091 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="test-ns-c27s9/test-trainjob-m6npq-node-0-0-ngvlg" event={"ID":"32eb4927-5f77-4599-b829-de6f9f57c2a4","Type":"ContainerStarted","Data":"d3929b46a7cf4eaa16175d1ca10a9a14ecdc1feaf5d9d6c5bcd7a364c1725d00"} Apr 17 10:33:51.114491 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:33:51.114460 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qshzm_2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5/ovn-acl-logging/0.log" Apr 17 10:33:51.115029 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:33:51.114460 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qshzm_2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5/ovn-acl-logging/0.log" Apr 17 10:33:51.139436 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:33:51.139411 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal_89fe943de065f151bde50a0b04d91a20/kube-rbac-proxy-crio/1.log" Apr 17 10:33:51.139436 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:33:51.139420 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal_89fe943de065f151bde50a0b04d91a20/kube-rbac-proxy-crio/1.log" Apr 17 10:33:52.647670 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:33:52.647587 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="test-ns-c27s9/test-trainjob-m6npq-node-0-0-ngvlg" event={"ID":"32eb4927-5f77-4599-b829-de6f9f57c2a4","Type":"ContainerStarted","Data":"1bcc7aa7947cf60e14459ba33df26067ea4a2c3bb7e6ea7582f614085c6f2027"} Apr 17 10:33:52.674382 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:33:52.671268 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="test-ns-c27s9/test-trainjob-m6npq-node-0-0-ngvlg" podStartSLOduration=1.445042653 podStartE2EDuration="4m31.671251549s" podCreationTimestamp="2026-04-17 10:29:21 +0000 UTC" firstStartedPulling="2026-04-17 10:29:22.06340143 +0000 UTC m=+678.722155470" lastFinishedPulling="2026-04-17 10:33:52.289610325 +0000 UTC m=+948.948364366" observedRunningTime="2026-04-17 10:33:52.66783854 +0000 UTC m=+949.326592602" watchObservedRunningTime="2026-04-17 10:33:52.671251549 +0000 UTC m=+949.330005612" Apr 17 10:33:58.664284 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:33:58.664248 2569 generic.go:358] "Generic (PLEG): container 
finished" podID="32eb4927-5f77-4599-b829-de6f9f57c2a4" containerID="1bcc7aa7947cf60e14459ba33df26067ea4a2c3bb7e6ea7582f614085c6f2027" exitCode=0 Apr 17 10:33:58.664709 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:33:58.664325 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="test-ns-c27s9/test-trainjob-m6npq-node-0-0-ngvlg" event={"ID":"32eb4927-5f77-4599-b829-de6f9f57c2a4","Type":"ContainerDied","Data":"1bcc7aa7947cf60e14459ba33df26067ea4a2c3bb7e6ea7582f614085c6f2027"} Apr 17 10:33:59.812018 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:33:59.812000 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="test-ns-c27s9/test-trainjob-m6npq-node-0-0-ngvlg" Apr 17 10:33:59.970064 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:33:59.970007 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lg245\" (UniqueName: \"kubernetes.io/projected/32eb4927-5f77-4599-b829-de6f9f57c2a4-kube-api-access-lg245\") pod \"32eb4927-5f77-4599-b829-de6f9f57c2a4\" (UID: \"32eb4927-5f77-4599-b829-de6f9f57c2a4\") " Apr 17 10:33:59.972077 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:33:59.972055 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32eb4927-5f77-4599-b829-de6f9f57c2a4-kube-api-access-lg245" (OuterVolumeSpecName: "kube-api-access-lg245") pod "32eb4927-5f77-4599-b829-de6f9f57c2a4" (UID: "32eb4927-5f77-4599-b829-de6f9f57c2a4"). InnerVolumeSpecName "kube-api-access-lg245". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 10:34:00.070624 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:34:00.070594 2569 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lg245\" (UniqueName: \"kubernetes.io/projected/32eb4927-5f77-4599-b829-de6f9f57c2a4-kube-api-access-lg245\") on node \"ip-10-0-136-48.ec2.internal\" DevicePath \"\"" Apr 17 10:34:00.671298 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:34:00.671266 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="test-ns-c27s9/test-trainjob-m6npq-node-0-0-ngvlg" Apr 17 10:34:00.671298 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:34:00.671276 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="test-ns-c27s9/test-trainjob-m6npq-node-0-0-ngvlg" event={"ID":"32eb4927-5f77-4599-b829-de6f9f57c2a4","Type":"ContainerDied","Data":"d3929b46a7cf4eaa16175d1ca10a9a14ecdc1feaf5d9d6c5bcd7a364c1725d00"} Apr 17 10:34:00.671548 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:34:00.671311 2569 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3929b46a7cf4eaa16175d1ca10a9a14ecdc1feaf5d9d6c5bcd7a364c1725d00" Apr 17 10:38:51.154169 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:38:51.154090 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qshzm_2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5/ovn-acl-logging/0.log" Apr 17 10:38:51.154721 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:38:51.154485 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qshzm_2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5/ovn-acl-logging/0.log" Apr 17 10:38:51.156150 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:38:51.156126 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal_89fe943de065f151bde50a0b04d91a20/kube-rbac-proxy-crio/1.log" Apr 17 10:38:51.156414 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:38:51.156328 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal_89fe943de065f151bde50a0b04d91a20/kube-rbac-proxy-crio/1.log" Apr 17 10:39:44.701105 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:39:44.701073 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["test-ns-snpv9/test-trainjob-jhvjt-node-0-0-9c47g"] Apr 17 10:39:44.704248 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:39:44.701288 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="32eb4927-5f77-4599-b829-de6f9f57c2a4" containerName="node" Apr 17 10:39:44.704248 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:39:44.701297 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="32eb4927-5f77-4599-b829-de6f9f57c2a4" containerName="node" Apr 17 10:39:44.704248 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:39:44.701375 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="32eb4927-5f77-4599-b829-de6f9f57c2a4" containerName="node" Apr 17 10:39:44.705075 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:39:44.705058 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="test-ns-snpv9/test-trainjob-jhvjt-node-0-0-9c47g" Apr 17 10:39:44.707178 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:39:44.707157 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"test-ns-snpv9\"/\"kube-root-ca.crt\"" Apr 17 10:39:44.707267 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:39:44.707157 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"test-ns-snpv9\"/\"default-dockercfg-57k44\"" Apr 17 10:39:44.707884 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:39:44.707867 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"test-ns-snpv9\"/\"openshift-service-ca.crt\"" Apr 17 10:39:44.710852 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:39:44.710831 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["test-ns-snpv9/test-trainjob-jhvjt-node-0-0-9c47g"] Apr 17 10:39:44.881497 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:39:44.881468 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpwbc\" (UniqueName: \"kubernetes.io/projected/2649bd7e-81eb-4aa3-8f63-4b0cdbad4876-kube-api-access-mpwbc\") pod \"test-trainjob-jhvjt-node-0-0-9c47g\" (UID: \"2649bd7e-81eb-4aa3-8f63-4b0cdbad4876\") " pod="test-ns-snpv9/test-trainjob-jhvjt-node-0-0-9c47g" Apr 17 10:39:44.981800 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:39:44.981730 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mpwbc\" (UniqueName: \"kubernetes.io/projected/2649bd7e-81eb-4aa3-8f63-4b0cdbad4876-kube-api-access-mpwbc\") pod \"test-trainjob-jhvjt-node-0-0-9c47g\" (UID: \"2649bd7e-81eb-4aa3-8f63-4b0cdbad4876\") " pod="test-ns-snpv9/test-trainjob-jhvjt-node-0-0-9c47g" Apr 17 10:39:44.989936 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:39:44.989911 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpwbc\" (UniqueName: \"kubernetes.io/projected/2649bd7e-81eb-4aa3-8f63-4b0cdbad4876-kube-api-access-mpwbc\") pod \"test-trainjob-jhvjt-node-0-0-9c47g\" (UID: \"2649bd7e-81eb-4aa3-8f63-4b0cdbad4876\") " pod="test-ns-snpv9/test-trainjob-jhvjt-node-0-0-9c47g" Apr 17 10:39:45.014634 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:39:45.014610 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="test-ns-snpv9/test-trainjob-jhvjt-node-0-0-9c47g" Apr 17 10:39:45.131702 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:39:45.131549 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["test-ns-snpv9/test-trainjob-jhvjt-node-0-0-9c47g"] Apr 17 10:39:45.134430 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:39:45.134399 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2649bd7e_81eb_4aa3_8f63_4b0cdbad4876.slice/crio-683d38e45958a6ef5dd9b55379c290268c150339b95f33faf6fb7a42f8819117 WatchSource:0}: Error finding container 683d38e45958a6ef5dd9b55379c290268c150339b95f33faf6fb7a42f8819117: Status 404 returned error can't find the container with id 683d38e45958a6ef5dd9b55379c290268c150339b95f33faf6fb7a42f8819117 Apr 17 10:39:45.136321 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:39:45.136301 2569 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 17 10:39:45.571140 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:39:45.571104 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="test-ns-snpv9/test-trainjob-jhvjt-node-0-0-9c47g" event={"ID":"2649bd7e-81eb-4aa3-8f63-4b0cdbad4876","Type":"ContainerStarted","Data":"683d38e45958a6ef5dd9b55379c290268c150339b95f33faf6fb7a42f8819117"} Apr 17 10:43:51.170004 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:43:51.169976 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qshzm_2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5/ovn-acl-logging/0.log" Apr 17 10:43:51.171803 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:43:51.171775 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qshzm_2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5/ovn-acl-logging/0.log" Apr 17 10:43:51.172077 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:43:51.172060 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal_89fe943de065f151bde50a0b04d91a20/kube-rbac-proxy-crio/1.log" Apr 17 10:43:51.173668 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:43:51.173648 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal_89fe943de065f151bde50a0b04d91a20/kube-rbac-proxy-crio/1.log" Apr 17 10:46:27.691031 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:46:27.690999 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="test-ns-snpv9/test-trainjob-jhvjt-node-0-0-9c47g" event={"ID":"2649bd7e-81eb-4aa3-8f63-4b0cdbad4876","Type":"ContainerStarted","Data":"23f0f4106d2b791e25cb33dc4747336acef0e137c3a7d61e9d25cd1433ff3a47"} Apr 17 10:46:27.693033 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:46:27.693016 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"test-ns-snpv9\"/\"default-dockercfg-57k44\"" Apr 17 10:46:27.714026 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:46:27.713981 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="test-ns-snpv9/test-trainjob-jhvjt-node-0-0-9c47g" podStartSLOduration=1.944560488 podStartE2EDuration="6m43.713965164s" podCreationTimestamp="2026-04-17 10:39:44 +0000 UTC" firstStartedPulling="2026-04-17 10:39:45.136445919 +0000 UTC m=+1301.795199959" lastFinishedPulling="2026-04-17 10:46:26.905850591 +0000 UTC m=+1703.564604635" observedRunningTime="2026-04-17 10:46:27.712554296 +0000 UTC m=+1704.371308358" 
watchObservedRunningTime="2026-04-17 10:46:27.713965164 +0000 UTC m=+1704.372719228" Apr 17 10:46:27.837752 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:46:27.837722 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"test-ns-snpv9\"/\"kube-root-ca.crt\"" Apr 17 10:46:27.848300 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:46:27.848275 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"test-ns-snpv9\"/\"openshift-service-ca.crt\"" Apr 17 10:46:30.701039 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:46:30.700959 2569 generic.go:358] "Generic (PLEG): container finished" podID="2649bd7e-81eb-4aa3-8f63-4b0cdbad4876" containerID="23f0f4106d2b791e25cb33dc4747336acef0e137c3a7d61e9d25cd1433ff3a47" exitCode=0 Apr 17 10:46:30.701407 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:46:30.701037 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="test-ns-snpv9/test-trainjob-jhvjt-node-0-0-9c47g" event={"ID":"2649bd7e-81eb-4aa3-8f63-4b0cdbad4876","Type":"ContainerDied","Data":"23f0f4106d2b791e25cb33dc4747336acef0e137c3a7d61e9d25cd1433ff3a47"} Apr 17 10:46:31.868447 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:46:31.868424 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="test-ns-snpv9/test-trainjob-jhvjt-node-0-0-9c47g" Apr 17 10:46:31.966384 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:46:31.966291 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpwbc\" (UniqueName: \"kubernetes.io/projected/2649bd7e-81eb-4aa3-8f63-4b0cdbad4876-kube-api-access-mpwbc\") pod \"2649bd7e-81eb-4aa3-8f63-4b0cdbad4876\" (UID: \"2649bd7e-81eb-4aa3-8f63-4b0cdbad4876\") " Apr 17 10:46:31.968497 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:46:31.968470 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2649bd7e-81eb-4aa3-8f63-4b0cdbad4876-kube-api-access-mpwbc" (OuterVolumeSpecName: "kube-api-access-mpwbc") pod "2649bd7e-81eb-4aa3-8f63-4b0cdbad4876" (UID: "2649bd7e-81eb-4aa3-8f63-4b0cdbad4876"). InnerVolumeSpecName "kube-api-access-mpwbc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 10:46:32.067688 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:46:32.067660 2569 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mpwbc\" (UniqueName: \"kubernetes.io/projected/2649bd7e-81eb-4aa3-8f63-4b0cdbad4876-kube-api-access-mpwbc\") on node \"ip-10-0-136-48.ec2.internal\" DevicePath \"\"" Apr 17 10:46:32.707955 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:46:32.707926 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="test-ns-snpv9/test-trainjob-jhvjt-node-0-0-9c47g" Apr 17 10:46:32.707955 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:46:32.707941 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="test-ns-snpv9/test-trainjob-jhvjt-node-0-0-9c47g" event={"ID":"2649bd7e-81eb-4aa3-8f63-4b0cdbad4876","Type":"ContainerDied","Data":"683d38e45958a6ef5dd9b55379c290268c150339b95f33faf6fb7a42f8819117"} Apr 17 10:46:32.708307 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:46:32.707974 2569 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="683d38e45958a6ef5dd9b55379c290268c150339b95f33faf6fb7a42f8819117" Apr 17 10:48:51.186540 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:48:51.186510 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qshzm_2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5/ovn-acl-logging/0.log" Apr 17 10:48:51.188322 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:48:51.188298 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal_89fe943de065f151bde50a0b04d91a20/kube-rbac-proxy-crio/1.log" Apr 17 10:48:51.188659 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:48:51.188636 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qshzm_2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5/ovn-acl-logging/0.log" Apr 17 10:48:51.190485 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:48:51.190467 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal_89fe943de065f151bde50a0b04d91a20/kube-rbac-proxy-crio/1.log" Apr 17 10:53:51.202590 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:53:51.202514 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qshzm_2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5/ovn-acl-logging/0.log" Apr 17 10:53:51.205119 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:53:51.204695 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal_89fe943de065f151bde50a0b04d91a20/kube-rbac-proxy-crio/1.log" Apr 17 10:53:51.206232 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:53:51.206212 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qshzm_2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5/ovn-acl-logging/0.log" Apr 17 10:53:51.208210 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:53:51.208193 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal_89fe943de065f151bde50a0b04d91a20/kube-rbac-proxy-crio/1.log" Apr 17 10:56:35.700277 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:56:35.700249 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/test-ns-snpv9_test-trainjob-jhvjt-node-0-0-9c47g_2649bd7e-81eb-4aa3-8f63-4b0cdbad4876/node/0.log" Apr 17 10:56:36.382621 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:56:36.382590 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/test-ns-c27s9_test-trainjob-m6npq-node-0-0-ngvlg_32eb4927-5f77-4599-b829-de6f9f57c2a4/node/0.log" Apr 17 10:56:40.730982 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:56:40.730950 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["test-ns-snpv9/test-trainjob-jhvjt-node-0-0-9c47g"] Apr 17 10:56:40.734512 ip-10-0-136-48 kubenswrapper[2569]: I0417 
10:56:40.734489 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["test-ns-snpv9/test-trainjob-jhvjt-node-0-0-9c47g"] Apr 17 10:56:41.605729 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:56:41.605699 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["test-ns-c27s9/test-trainjob-m6npq-node-0-0-ngvlg"] Apr 17 10:56:41.613130 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:56:41.613099 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["test-ns-c27s9/test-trainjob-m6npq-node-0-0-ngvlg"] Apr 17 10:56:41.969211 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:56:41.969128 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2649bd7e-81eb-4aa3-8f63-4b0cdbad4876" path="/var/lib/kubelet/pods/2649bd7e-81eb-4aa3-8f63-4b0cdbad4876/volumes" Apr 17 10:56:41.969593 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:56:41.969475 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32eb4927-5f77-4599-b829-de6f9f57c2a4" path="/var/lib/kubelet/pods/32eb4927-5f77-4599-b829-de6f9f57c2a4/volumes" Apr 17 10:57:03.983352 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:57:03.983278 2569 scope.go:117] "RemoveContainer" containerID="23f0f4106d2b791e25cb33dc4747336acef0e137c3a7d61e9d25cd1433ff3a47" Apr 17 10:57:03.990807 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:57:03.990786 2569 scope.go:117] "RemoveContainer" containerID="1bcc7aa7947cf60e14459ba33df26067ea4a2c3bb7e6ea7582f614085c6f2027" Apr 17 10:58:51.220084 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:58:51.219989 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qshzm_2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5/ovn-acl-logging/0.log" Apr 17 10:58:51.224117 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:58:51.221796 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal_89fe943de065f151bde50a0b04d91a20/kube-rbac-proxy-crio/1.log" Apr 17 10:58:51.224117 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:58:51.223186 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qshzm_2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5/ovn-acl-logging/0.log" Apr 17 10:58:51.224960 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:58:51.224945 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-136-48.ec2.internal_89fe943de065f151bde50a0b04d91a20/kube-rbac-proxy-crio/1.log" Apr 17 10:59:45.998256 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:45.998171 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-62hp5/must-gather-rzjkq"] Apr 17 10:59:45.998742 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:45.998419 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2649bd7e-81eb-4aa3-8f63-4b0cdbad4876" containerName="node" Apr 17 10:59:45.998742 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:45.998429 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="2649bd7e-81eb-4aa3-8f63-4b0cdbad4876" containerName="node" Apr 17 10:59:45.998742 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:45.998471 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="2649bd7e-81eb-4aa3-8f63-4b0cdbad4876" containerName="node" Apr 17 10:59:46.001251 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:46.001233 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-62hp5/must-gather-rzjkq" Apr 17 10:59:46.003578 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:46.003551 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-62hp5\"/\"default-dockercfg-tk6ql\"" Apr 17 10:59:46.003715 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:46.003581 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-62hp5\"/\"openshift-service-ca.crt\"" Apr 17 10:59:46.004209 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:46.004195 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-62hp5\"/\"kube-root-ca.crt\"" Apr 17 10:59:46.006778 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:46.006755 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-62hp5/must-gather-rzjkq"] Apr 17 10:59:46.028122 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:46.028098 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw9zc\" (UniqueName: \"kubernetes.io/projected/6fa945aa-bd12-4c8f-913a-d3199b1999f8-kube-api-access-jw9zc\") pod \"must-gather-rzjkq\" (UID: \"6fa945aa-bd12-4c8f-913a-d3199b1999f8\") " pod="openshift-must-gather-62hp5/must-gather-rzjkq" Apr 17 10:59:46.028249 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:46.028134 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6fa945aa-bd12-4c8f-913a-d3199b1999f8-must-gather-output\") pod \"must-gather-rzjkq\" (UID: \"6fa945aa-bd12-4c8f-913a-d3199b1999f8\") " pod="openshift-must-gather-62hp5/must-gather-rzjkq" Apr 17 10:59:46.129511 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:46.129484 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jw9zc\" (UniqueName: \"kubernetes.io/projected/6fa945aa-bd12-4c8f-913a-d3199b1999f8-kube-api-access-jw9zc\") pod \"must-gather-rzjkq\" (UID: \"6fa945aa-bd12-4c8f-913a-d3199b1999f8\") " pod="openshift-must-gather-62hp5/must-gather-rzjkq" Apr 17 10:59:46.129631 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:46.129519 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6fa945aa-bd12-4c8f-913a-d3199b1999f8-must-gather-output\") pod \"must-gather-rzjkq\" (UID: \"6fa945aa-bd12-4c8f-913a-d3199b1999f8\") " pod="openshift-must-gather-62hp5/must-gather-rzjkq" Apr 17 10:59:46.129835 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:46.129819 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6fa945aa-bd12-4c8f-913a-d3199b1999f8-must-gather-output\") pod \"must-gather-rzjkq\" (UID: \"6fa945aa-bd12-4c8f-913a-d3199b1999f8\") " pod="openshift-must-gather-62hp5/must-gather-rzjkq" Apr 17 10:59:46.136424 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:46.136403 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw9zc\" (UniqueName: \"kubernetes.io/projected/6fa945aa-bd12-4c8f-913a-d3199b1999f8-kube-api-access-jw9zc\") pod \"must-gather-rzjkq\" (UID: \"6fa945aa-bd12-4c8f-913a-d3199b1999f8\") " pod="openshift-must-gather-62hp5/must-gather-rzjkq" Apr 17 10:59:46.310655 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:46.310634 2569 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-must-gather-62hp5/must-gather-rzjkq" Apr 17 10:59:46.428531 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:46.428505 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-62hp5/must-gather-rzjkq"] Apr 17 10:59:46.429747 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:59:46.429722 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fa945aa_bd12_4c8f_913a_d3199b1999f8.slice/crio-ad9877ffc108d50b3c6d7987c9adc652455009956618f964e4ba635d442f2954 WatchSource:0}: Error finding container ad9877ffc108d50b3c6d7987c9adc652455009956618f964e4ba635d442f2954: Status 404 returned error can't find the container with id ad9877ffc108d50b3c6d7987c9adc652455009956618f964e4ba635d442f2954 Apr 17 10:59:46.431190 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:46.431174 2569 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 17 10:59:46.782487 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:46.782457 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-62hp5/must-gather-rzjkq" event={"ID":"6fa945aa-bd12-4c8f-913a-d3199b1999f8","Type":"ContainerStarted","Data":"ad9877ffc108d50b3c6d7987c9adc652455009956618f964e4ba635d442f2954"} Apr 17 10:59:47.788328 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:47.787591 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-62hp5/must-gather-rzjkq" event={"ID":"6fa945aa-bd12-4c8f-913a-d3199b1999f8","Type":"ContainerStarted","Data":"4bb6f217b20e18502fb15971c4351bb736ce0eba88bda05a757a73b706e20885"} Apr 17 10:59:47.788328 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:47.787634 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-62hp5/must-gather-rzjkq" event={"ID":"6fa945aa-bd12-4c8f-913a-d3199b1999f8","Type":"ContainerStarted","Data":"43a99e98a7d6287fb6ee460eee2940f8afa249ee026301a590e68bd30ddebc88"} Apr 17 10:59:47.802414 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:47.802281 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-62hp5/must-gather-rzjkq" podStartSLOduration=2.016541402 podStartE2EDuration="2.802261866s" podCreationTimestamp="2026-04-17 10:59:45 +0000 UTC" firstStartedPulling="2026-04-17 10:59:46.431293272 +0000 UTC m=+2503.090047312" lastFinishedPulling="2026-04-17 10:59:47.217013733 +0000 UTC m=+2503.875767776" observedRunningTime="2026-04-17 10:59:47.800737547 +0000 UTC m=+2504.459491627" watchObservedRunningTime="2026-04-17 10:59:47.802261866 +0000 UTC m=+2504.461015932" Apr 17 10:59:48.653342 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:48.653313 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_global-pull-secret-syncer-lmj77_86b2b97b-7175-48cb-822e-123ce9badea3/global-pull-secret-syncer/0.log" Apr 17 10:59:48.758620 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:48.758587 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_konnectivity-agent-zm8zp_b43f5610-f9dd-49c2-9de2-5c1cca09f0d6/konnectivity-agent/0.log" Apr 17 10:59:48.798539 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:48.798505 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_kube-apiserver-proxy-ip-10-0-136-48.ec2.internal_b5ac43e20b2029d4a2be3d7bfa5c6771/haproxy/0.log" Apr 17 10:59:52.213758 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:52.213728 2569 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-mj2c9_8f424dee-1a61-4c87-8a31-3c6ab909fcc4/node-exporter/0.log" Apr 17 10:59:52.236808 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:52.236754 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-mj2c9_8f424dee-1a61-4c87-8a31-3c6ab909fcc4/kube-rbac-proxy/0.log" Apr 17 10:59:52.255777 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:52.255753 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-mj2c9_8f424dee-1a61-4c87-8a31-3c6ab909fcc4/init-textfile/0.log" Apr 17 10:59:55.433350 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:55.433313 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-62hp5/perf-node-gather-daemonset-hh7mb"] Apr 17 10:59:55.437454 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:55.437431 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-62hp5/perf-node-gather-daemonset-hh7mb" Apr 17 10:59:55.442517 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:55.442486 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-62hp5/perf-node-gather-daemonset-hh7mb"] Apr 17 10:59:55.498259 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:55.498217 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/061044d3-820c-497a-b6a7-70343bc0b6f2-sys\") pod \"perf-node-gather-daemonset-hh7mb\" (UID: \"061044d3-820c-497a-b6a7-70343bc0b6f2\") " pod="openshift-must-gather-62hp5/perf-node-gather-daemonset-hh7mb" Apr 17 10:59:55.498259 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:55.498259 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/061044d3-820c-497a-b6a7-70343bc0b6f2-proc\") pod \"perf-node-gather-daemonset-hh7mb\" (UID: \"061044d3-820c-497a-b6a7-70343bc0b6f2\") " pod="openshift-must-gather-62hp5/perf-node-gather-daemonset-hh7mb" Apr 17 10:59:55.498528 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:55.498285 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/061044d3-820c-497a-b6a7-70343bc0b6f2-podres\") pod \"perf-node-gather-daemonset-hh7mb\" (UID: \"061044d3-820c-497a-b6a7-70343bc0b6f2\") " pod="openshift-must-gather-62hp5/perf-node-gather-daemonset-hh7mb" Apr 17 10:59:55.498528 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:55.498410 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/061044d3-820c-497a-b6a7-70343bc0b6f2-lib-modules\") pod \"perf-node-gather-daemonset-hh7mb\" (UID: \"061044d3-820c-497a-b6a7-70343bc0b6f2\") " pod="openshift-must-gather-62hp5/perf-node-gather-daemonset-hh7mb" Apr 17 10:59:55.498528 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:55.498460 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67zjk\" (UniqueName: \"kubernetes.io/projected/061044d3-820c-497a-b6a7-70343bc0b6f2-kube-api-access-67zjk\") pod \"perf-node-gather-daemonset-hh7mb\" (UID: \"061044d3-820c-497a-b6a7-70343bc0b6f2\") " pod="openshift-must-gather-62hp5/perf-node-gather-daemonset-hh7mb" Apr 17 10:59:55.584607 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:55.584576 2569 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-mm9sd_4d2301e1-52d7-4d39-9acf-d767734726a3/dns/0.log" Apr 17 10:59:55.599508 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:55.599472 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/061044d3-820c-497a-b6a7-70343bc0b6f2-lib-modules\") pod \"perf-node-gather-daemonset-hh7mb\" (UID: \"061044d3-820c-497a-b6a7-70343bc0b6f2\") " pod="openshift-must-gather-62hp5/perf-node-gather-daemonset-hh7mb" Apr 17 10:59:55.599654 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:55.599530 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-67zjk\" (UniqueName: \"kubernetes.io/projected/061044d3-820c-497a-b6a7-70343bc0b6f2-kube-api-access-67zjk\") pod \"perf-node-gather-daemonset-hh7mb\" (UID: \"061044d3-820c-497a-b6a7-70343bc0b6f2\") " pod="openshift-must-gather-62hp5/perf-node-gather-daemonset-hh7mb" Apr 17 10:59:55.599654 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:55.599568 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/061044d3-820c-497a-b6a7-70343bc0b6f2-sys\") pod \"perf-node-gather-daemonset-hh7mb\" (UID: \"061044d3-820c-497a-b6a7-70343bc0b6f2\") " pod="openshift-must-gather-62hp5/perf-node-gather-daemonset-hh7mb" Apr 17 10:59:55.599654 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:55.599595 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/061044d3-820c-497a-b6a7-70343bc0b6f2-proc\") pod \"perf-node-gather-daemonset-hh7mb\" (UID: \"061044d3-820c-497a-b6a7-70343bc0b6f2\") " pod="openshift-must-gather-62hp5/perf-node-gather-daemonset-hh7mb" Apr 17 10:59:55.599775 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:55.599657 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/061044d3-820c-497a-b6a7-70343bc0b6f2-proc\") pod \"perf-node-gather-daemonset-hh7mb\" (UID: \"061044d3-820c-497a-b6a7-70343bc0b6f2\") " pod="openshift-must-gather-62hp5/perf-node-gather-daemonset-hh7mb" Apr 17 10:59:55.599775 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:55.599662 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/061044d3-820c-497a-b6a7-70343bc0b6f2-sys\") pod \"perf-node-gather-daemonset-hh7mb\" (UID: \"061044d3-820c-497a-b6a7-70343bc0b6f2\") " pod="openshift-must-gather-62hp5/perf-node-gather-daemonset-hh7mb" Apr 17 10:59:55.599775 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:55.599656 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/061044d3-820c-497a-b6a7-70343bc0b6f2-lib-modules\") pod \"perf-node-gather-daemonset-hh7mb\" (UID: \"061044d3-820c-497a-b6a7-70343bc0b6f2\") " pod="openshift-must-gather-62hp5/perf-node-gather-daemonset-hh7mb" Apr 17 10:59:55.599775 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:55.599686 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/061044d3-820c-497a-b6a7-70343bc0b6f2-podres\") pod \"perf-node-gather-daemonset-hh7mb\" (UID: \"061044d3-820c-497a-b6a7-70343bc0b6f2\") " pod="openshift-must-gather-62hp5/perf-node-gather-daemonset-hh7mb" Apr 17 10:59:55.599898 ip-10-0-136-48 kubenswrapper[2569]: I0417 
10:59:55.599799 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/061044d3-820c-497a-b6a7-70343bc0b6f2-podres\") pod \"perf-node-gather-daemonset-hh7mb\" (UID: \"061044d3-820c-497a-b6a7-70343bc0b6f2\") " pod="openshift-must-gather-62hp5/perf-node-gather-daemonset-hh7mb" Apr 17 10:59:55.605549 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:55.605527 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-mm9sd_4d2301e1-52d7-4d39-9acf-d767734726a3/kube-rbac-proxy/0.log" Apr 17 10:59:55.607244 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:55.607227 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-67zjk\" (UniqueName: \"kubernetes.io/projected/061044d3-820c-497a-b6a7-70343bc0b6f2-kube-api-access-67zjk\") pod \"perf-node-gather-daemonset-hh7mb\" (UID: \"061044d3-820c-497a-b6a7-70343bc0b6f2\") " pod="openshift-must-gather-62hp5/perf-node-gather-daemonset-hh7mb" Apr 17 10:59:55.707099 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:55.707024 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-ksqkq_90ac1d6e-66e2-4de9-8433-b5d2a8895e80/dns-node-resolver/0.log" Apr 17 10:59:55.751617 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:55.751589 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-62hp5/perf-node-gather-daemonset-hh7mb" Apr 17 10:59:55.879306 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:55.879274 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-62hp5/perf-node-gather-daemonset-hh7mb"] Apr 17 10:59:55.883332 ip-10-0-136-48 kubenswrapper[2569]: W0417 10:59:55.883283 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod061044d3_820c_497a_b6a7_70343bc0b6f2.slice/crio-e1817c3fce7f6bcf568905fdf4fe92f1867315dbe8bbf2ee06895cc568c01268 WatchSource:0}: Error finding container e1817c3fce7f6bcf568905fdf4fe92f1867315dbe8bbf2ee06895cc568c01268: Status 404 returned error can't find the container with id e1817c3fce7f6bcf568905fdf4fe92f1867315dbe8bbf2ee06895cc568c01268 Apr 17 10:59:56.106019 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:56.105990 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-22g8b_083d1f1c-be08-410d-a728-2affe73763a9/node-ca/0.log" Apr 17 10:59:56.818420 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:56.818389 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-62hp5/perf-node-gather-daemonset-hh7mb" event={"ID":"061044d3-820c-497a-b6a7-70343bc0b6f2","Type":"ContainerStarted","Data":"cd8051a5f831505f1ce4d0aaeacc5ef4102f0d981b15b725234f05aa2a8a76de"} Apr 17 10:59:56.818874 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:56.818427 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-62hp5/perf-node-gather-daemonset-hh7mb" event={"ID":"061044d3-820c-497a-b6a7-70343bc0b6f2","Type":"ContainerStarted","Data":"e1817c3fce7f6bcf568905fdf4fe92f1867315dbe8bbf2ee06895cc568c01268"} Apr 17 10:59:56.818874 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:56.818512 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-must-gather-62hp5/perf-node-gather-daemonset-hh7mb" Apr 17 10:59:56.833881 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:56.833829 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-must-gather-62hp5/perf-node-gather-daemonset-hh7mb" podStartSLOduration=1.833817404 podStartE2EDuration="1.833817404s" podCreationTimestamp="2026-04-17 10:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 10:59:56.832758157 +0000 UTC m=+2513.491512219" watchObservedRunningTime="2026-04-17 10:59:56.833817404 +0000 UTC m=+2513.492571466" Apr 17 10:59:57.035905 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:57.035878 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-cnksf_42a98ea5-8626-48d5-bb6b-80eb251d2e33/serve-healthcheck-canary/0.log" Apr 17 10:59:57.515912 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:57.515881 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-mj7qr_2e326b9d-2472-46f0-9332-42095d7aac7f/kube-rbac-proxy/0.log" Apr 17 10:59:57.534619 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:57.534595 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-mj7qr_2e326b9d-2472-46f0-9332-42095d7aac7f/exporter/0.log" Apr 17 10:59:57.553342 ip-10-0-136-48 kubenswrapper[2569]: I0417 10:59:57.553319 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-mj7qr_2e326b9d-2472-46f0-9332-42095d7aac7f/extractor/0.log" Apr 17 11:00:02.831266 ip-10-0-136-48 kubenswrapper[2569]: I0417 11:00:02.831240 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-62hp5/perf-node-gather-daemonset-hh7mb" Apr 17 11:00:02.879249 ip-10-0-136-48 kubenswrapper[2569]: I0417 11:00:02.879223 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2t59m_4cac3107-7535-4daf-bf6b-d5bf95844303/kube-multus-additional-cni-plugins/0.log" Apr 17 11:00:02.897263 ip-10-0-136-48 kubenswrapper[2569]: I0417 11:00:02.897225 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2t59m_4cac3107-7535-4daf-bf6b-d5bf95844303/egress-router-binary-copy/0.log" Apr 17 11:00:02.913667 ip-10-0-136-48 kubenswrapper[2569]: I0417 11:00:02.913647 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2t59m_4cac3107-7535-4daf-bf6b-d5bf95844303/cni-plugins/0.log" Apr 17 11:00:02.930758 ip-10-0-136-48 kubenswrapper[2569]: I0417 11:00:02.930735 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2t59m_4cac3107-7535-4daf-bf6b-d5bf95844303/bond-cni-plugin/0.log" Apr 17 11:00:02.947557 ip-10-0-136-48 kubenswrapper[2569]: I0417 11:00:02.947536 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2t59m_4cac3107-7535-4daf-bf6b-d5bf95844303/routeoverride-cni/0.log" Apr 17 11:00:02.965843 ip-10-0-136-48 kubenswrapper[2569]: I0417 11:00:02.965819 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2t59m_4cac3107-7535-4daf-bf6b-d5bf95844303/whereabouts-cni-bincopy/0.log" Apr 17 11:00:02.990416 ip-10-0-136-48 kubenswrapper[2569]: I0417 11:00:02.990393 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2t59m_4cac3107-7535-4daf-bf6b-d5bf95844303/whereabouts-cni/0.log" Apr 17 
11:00:03.307584 ip-10-0-136-48 kubenswrapper[2569]: I0417 11:00:03.307556 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kwrj6_db889039-4b7b-4564-b656-afd928d6bcbd/kube-multus/0.log" Apr 17 11:00:03.429470 ip-10-0-136-48 kubenswrapper[2569]: I0417 11:00:03.429443 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-z6grc_56255f22-7072-487b-8723-978c296878fb/network-metrics-daemon/0.log" Apr 17 11:00:03.446441 ip-10-0-136-48 kubenswrapper[2569]: I0417 11:00:03.446398 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-z6grc_56255f22-7072-487b-8723-978c296878fb/kube-rbac-proxy/0.log" Apr 17 11:00:04.711577 ip-10-0-136-48 kubenswrapper[2569]: I0417 11:00:04.711541 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qshzm_2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5/ovn-controller/0.log" Apr 17 11:00:04.728447 ip-10-0-136-48 kubenswrapper[2569]: I0417 11:00:04.728379 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qshzm_2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5/ovn-acl-logging/0.log" Apr 17 11:00:04.754401 ip-10-0-136-48 kubenswrapper[2569]: I0417 11:00:04.754338 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qshzm_2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5/ovn-acl-logging/1.log" Apr 17 11:00:04.773149 ip-10-0-136-48 kubenswrapper[2569]: I0417 11:00:04.773067 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qshzm_2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5/kube-rbac-proxy-node/0.log" Apr 17 11:00:04.793747 ip-10-0-136-48 kubenswrapper[2569]: I0417 11:00:04.793721 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qshzm_2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5/kube-rbac-proxy-ovn-metrics/0.log" Apr 17 11:00:04.816241 ip-10-0-136-48 kubenswrapper[2569]: I0417 11:00:04.816210 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qshzm_2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5/northd/0.log" Apr 17 11:00:04.836720 ip-10-0-136-48 kubenswrapper[2569]: I0417 11:00:04.836695 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qshzm_2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5/nbdb/0.log" Apr 17 11:00:04.858800 ip-10-0-136-48 kubenswrapper[2569]: I0417 11:00:04.858776 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qshzm_2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5/sbdb/0.log" Apr 17 11:00:05.034856 ip-10-0-136-48 kubenswrapper[2569]: I0417 11:00:05.034816 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qshzm_2b7b4d04-9c9a-4878-81fa-d9c6f965d3a5/ovnkube-controller/0.log" Apr 17 11:00:05.871348 ip-10-0-136-48 kubenswrapper[2569]: I0417 11:00:05.871317 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-target-gftlr_1abeaef1-047c-4fff-a659-456e05294f94/network-check-target-container/0.log" Apr 17 11:00:06.795753 ip-10-0-136-48 kubenswrapper[2569]: I0417 11:00:06.795727 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_iptables-alerter-vtt5p_aaa97102-f10d-49b4-83af-c47d0b2cd496/iptables-alerter/0.log" Apr 17 11:00:07.328822 ip-10-0-136-48 kubenswrapper[2569]: 
I0417 11:00:07.328778 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-hlkpd_83c3ab0b-7fc8-4489-9ad8-7ac887fbde2e/tuned/0.log" Apr 17 11:00:10.364503 ip-10-0-136-48 kubenswrapper[2569]: I0417 11:00:10.364471 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-csi-drivers_aws-ebs-csi-driver-node-8tvl6_17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f/csi-driver/0.log" Apr 17 11:00:10.380950 ip-10-0-136-48 kubenswrapper[2569]: I0417 11:00:10.380922 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-csi-drivers_aws-ebs-csi-driver-node-8tvl6_17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f/csi-node-driver-registrar/0.log" Apr 17 11:00:10.397404 ip-10-0-136-48 kubenswrapper[2569]: I0417 11:00:10.397379 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-csi-drivers_aws-ebs-csi-driver-node-8tvl6_17dd506b-8cdc-44d7-9c7d-6ae2a2084b0f/csi-liveness-probe/0.log"
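
A minimal sketch for summarizing the kubelet records captured above: it pulls out the "Observed pod startup duration" entries (pod_startup_latency_tracker.go) and the "SyncLoop (PLEG)" container lifecycle events, nothing more. The filename "kubelet-journal.txt" and the regexes are assumptions keyed to the record shapes visible in this capture, not a general-purpose tool.

#!/usr/bin/env python3
# Sketch: summarize pod startup latency and PLEG container lifecycle events
# from a kubelet journal capture saved as plain text (filename is assumed).
import re
import sys

path = sys.argv[1] if len(sys.argv) > 1 else "kubelet-journal.txt"
with open(path, encoding="utf-8") as f:
    text = f.read()

# pod_startup_latency_tracker.go records, e.g.
#   "Observed pod startup duration" pod="ns/name" podStartSLOduration=1.57 podStartE2EDuration="8.6s" ...
startup_re = re.compile(
    r'"Observed pod startup duration"\s+pod="(?P<pod>[^"]+)"'
    r'\s+podStartSLOduration=(?P<slo>\S+)'
    r'\s+podStartE2EDuration="(?P<e2e>[^"]+)"'
)

# SyncLoop (PLEG) records, e.g.
#   "SyncLoop (PLEG): event for pod" pod="ns/name" event={"ID":"...","Type":"ContainerStarted","Data":"..."}
pleg_re = re.compile(
    r'"SyncLoop \(PLEG\): event for pod"\s+pod="(?P<pod>[^"]+)"\s+'
    r'event=\{"ID":"[^"]+","Type":"(?P<type>[^"]+)","Data":"(?P<cid>[^"]+)"\}'
)

print("== pod startup durations ==")
for m in startup_re.finditer(text):
    print(f"{m['pod']}: podStartSLOduration={m['slo']} podStartE2EDuration={m['e2e']}")

print()
print("== PLEG container events ==")
for m in pleg_re.finditer(text):
    print(f"{m['pod']}: {m['type']} {m['cid'][:12]}")

Run against this section, the first report surfaces the two training pods whose images pulled slowly (test-trainjob-m6npq-node-0-0-ngvlg at podStartE2EDuration=4m31.671251549s and test-trainjob-jhvjt-node-0-0-9c47g at 6m43.713965164s) next to the sub-10-second cert-manager, cluster-proxy, and must-gather pods; the second lists each pod's ContainerStarted/ContainerDied events in order, which is how the short-lived trainjob containers exiting with code 0 show up here.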