Mar 12 13:35:58.338743 ip-10-0-142-111 systemd[1]: kubelet.service: Failed to load environment files: No such file or directory
Mar 12 13:35:58.338756 ip-10-0-142-111 systemd[1]: kubelet.service: Failed to run 'start-pre' task: No such file or directory
Mar 12 13:35:58.338764 ip-10-0-142-111 systemd[1]: kubelet.service: Failed with result 'resources'.
Mar 12 13:35:58.339028 ip-10-0-142-111 systemd[1]: Failed to start Kubernetes Kubelet.
Mar 12 13:36:08.398886 ip-10-0-142-111 systemd[1]: kubelet.service: Failed to schedule restart job: Unit crio.service not found.
Mar 12 13:36:08.398900 ip-10-0-142-111 systemd[1]: kubelet.service: Failed with result 'resources'.
-- Boot c0b08f0805ee417ca51fa61d00281d8c --
Mar 12 13:38:39.089999 ip-10-0-142-111 systemd[1]: Starting Kubernetes Kubelet...
Mar 12 13:38:39.584757 ip-10-0-142-111 kubenswrapper[2570]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 13:38:39.584757 ip-10-0-142-111 kubenswrapper[2570]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Mar 12 13:38:39.584757 ip-10-0-142-111 kubenswrapper[2570]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 13:38:39.584757 ip-10-0-142-111 kubenswrapper[2570]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 12 13:38:39.584757 ip-10-0-142-111 kubenswrapper[2570]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 13:38:39.586510 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.586410    2570 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 12 13:38:39.593861 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593831    2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Mar 12 13:38:39.593861 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593857    2570 feature_gate.go:328] unrecognized feature gate: DualReplica
Mar 12 13:38:39.593861 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593862    2570 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Mar 12 13:38:39.593861 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593866    2570 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Mar 12 13:38:39.593861 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593869    2570 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Mar 12 13:38:39.594080 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593872    2570 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Mar 12 13:38:39.594080 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593876    2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Mar 12 13:38:39.594080 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593879    2570 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Mar 12 13:38:39.594080 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593882    2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Mar 12 13:38:39.594080 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593884    2570 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Mar 12 13:38:39.594080 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593887    2570 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Mar 12 13:38:39.594080 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593890    2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Mar 12 13:38:39.594080 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593892    2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Mar 12 13:38:39.594080 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593895    2570 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Mar 12 13:38:39.594080 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593898    2570 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Mar 12 13:38:39.594080 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593907    2570 feature_gate.go:328] unrecognized feature gate: SignatureStores
Mar 12 13:38:39.594080 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593911    2570 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Mar 12 13:38:39.594080 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593913    2570 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 12 13:38:39.594080 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593916    2570 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Mar 12 13:38:39.594080 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593918    2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 12 13:38:39.594080 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593921    2570 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Mar 12 13:38:39.594080 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593923    2570 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Mar 12 13:38:39.594080 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593926    2570 feature_gate.go:328] unrecognized feature gate: Example
Mar 12 13:38:39.594080 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593929    2570 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Mar 12 13:38:39.594080 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593931    2570 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 12 13:38:39.594605 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593934    2570 feature_gate.go:328] unrecognized feature gate: NewOLM
Mar 12 13:38:39.594605 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593936    2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 12 13:38:39.594605 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593939    2570 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 12 13:38:39.594605 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593941    2570 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Mar 12 13:38:39.594605 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593944    2570 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 13:38:39.594605 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593946    2570 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Mar 12 13:38:39.594605 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593949    2570 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 12 13:38:39.594605 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593951    2570 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Mar 12 13:38:39.594605 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593954    2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Mar 12 13:38:39.594605 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593956    2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Mar 12 13:38:39.594605 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593959    2570 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Mar 12 13:38:39.594605 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593961    2570 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Mar 12 13:38:39.594605 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593964    2570 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Mar 12 13:38:39.594605 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593967    2570 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Mar 12 13:38:39.594605 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593970    2570 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 12 13:38:39.594605 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593974    2570 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 12 13:38:39.594605 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593976    2570 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Mar 12 13:38:39.594605 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593978    2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Mar 12 13:38:39.594605 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593981    2570 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Mar 12 13:38:39.595146 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593985    2570 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Mar 12 13:38:39.595146 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593987    2570 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Mar 12 13:38:39.595146 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593990    2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Mar 12 13:38:39.595146 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593993    2570 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Mar 12 13:38:39.595146 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.593996    2570 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 12 13:38:39.595146 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594000    2570 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Mar 12 13:38:39.595146 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594003    2570 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Mar 12 13:38:39.595146 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594005    2570 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Mar 12 13:38:39.595146 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594008    2570 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Mar 12 13:38:39.595146 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594010    2570 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 12 13:38:39.595146 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594013    2570 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Mar 12 13:38:39.595146 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594016    2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Mar 12 13:38:39.595146 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594018    2570 feature_gate.go:328] unrecognized feature gate: Example2
Mar 12 13:38:39.595146 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594023    2570 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Mar 12 13:38:39.595146 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594026    2570 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Mar 12 13:38:39.595146 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594029    2570 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Mar 12 13:38:39.595146 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594032    2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Mar 12 13:38:39.595146 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594035    2570 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Mar 12 13:38:39.595146 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594038    2570 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Mar 12 13:38:39.595657 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594041    2570 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Mar 12 13:38:39.595657 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594044    2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Mar 12 13:38:39.595657 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594046    2570 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Mar 12 13:38:39.595657 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594049    2570 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Mar 12 13:38:39.595657 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594051    2570 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Mar 12 13:38:39.595657 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594054    2570 feature_gate.go:328] unrecognized feature gate: OVNObservability
Mar 12 13:38:39.595657 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594057    2570 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Mar 12 13:38:39.595657 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594070    2570 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Mar 12 13:38:39.595657 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594075    2570 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Mar 12 13:38:39.595657 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594079    2570 feature_gate.go:328] unrecognized feature gate: PinnedImages
Mar 12 13:38:39.595657 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594081    2570 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Mar 12 13:38:39.595657 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594087    2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Mar 12 13:38:39.595657 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594089    2570 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Mar 12 13:38:39.595657 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594093    2570 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Mar 12 13:38:39.595657 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594096    2570 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Mar 12 13:38:39.595657 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594099    2570 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Mar 12 13:38:39.595657 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594102    2570 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Mar 12 13:38:39.595657 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594104    2570 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Mar 12 13:38:39.595657 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594107    2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Mar 12 13:38:39.595657 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594110    2570 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Mar 12 13:38:39.596184 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594112    2570 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Mar 12 13:38:39.596184 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594115    2570 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Mar 12 13:38:39.596184 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594117    2570 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Mar 12 13:38:39.596184 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594580    2570 feature_gate.go:328] unrecognized feature gate: SignatureStores
Mar 12 13:38:39.596184 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594588    2570 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 12 13:38:39.596184 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594592    2570 feature_gate.go:328] unrecognized feature gate: OVNObservability
Mar 12 13:38:39.596184 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594596    2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Mar 12 13:38:39.596184 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594598    2570 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Mar 12 13:38:39.596184 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594601    2570 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Mar 12 13:38:39.596184 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594604    2570 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Mar 12 13:38:39.596184 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594606    2570 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Mar 12 13:38:39.596184 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594609    2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 12 13:38:39.596184 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594612    2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Mar 12 13:38:39.596184 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594615    2570 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Mar 12 13:38:39.596184 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594636    2570 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Mar 12 13:38:39.596184 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594642    2570 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Mar 12 13:38:39.596184 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594646    2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Mar 12 13:38:39.596184 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594650    2570 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Mar 12 13:38:39.596184 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594655    2570 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 12 13:38:39.596184 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594661    2570 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Mar 12 13:38:39.596713 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594666    2570 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Mar 12 13:38:39.596713 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594670    2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 12 13:38:39.596713 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594674    2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Mar 12 13:38:39.596713 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594677    2570 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Mar 12 13:38:39.596713 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594680    2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Mar 12 13:38:39.596713 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594683    2570 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Mar 12 13:38:39.596713 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594686    2570 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Mar 12 13:38:39.596713 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594689    2570 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Mar 12 13:38:39.596713 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594691    2570 feature_gate.go:328] unrecognized feature gate: NewOLM
Mar 12 13:38:39.596713 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594694    2570 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Mar 12 13:38:39.596713 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594697    2570 feature_gate.go:328] unrecognized feature gate: Example2
Mar 12 13:38:39.596713 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594699    2570 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 12 13:38:39.596713 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594702    2570 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Mar 12 13:38:39.596713 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594705    2570 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Mar 12 13:38:39.596713 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594707    2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Mar 12 13:38:39.596713 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594710    2570 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Mar 12 13:38:39.596713 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594713    2570 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Mar 12 13:38:39.596713 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594715    2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Mar 12 13:38:39.596713 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594718    2570 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Mar 12 13:38:39.596713 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594720    2570 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Mar 12 13:38:39.597294 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594723    2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Mar 12 13:38:39.597294 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594725    2570 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Mar 12 13:38:39.597294 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594729    2570 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Mar 12 13:38:39.597294 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594731    2570 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Mar 12 13:38:39.597294 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594737    2570 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Mar 12 13:38:39.597294 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594740    2570 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Mar 12 13:38:39.597294 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594743    2570 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Mar 12 13:38:39.597294 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594746    2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Mar 12 13:38:39.597294 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594749    2570 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Mar 12 13:38:39.597294 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594752    2570 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Mar 12 13:38:39.597294 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594755    2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Mar 12 13:38:39.597294 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594758    2570 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Mar 12 13:38:39.597294 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594761    2570 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Mar 12 13:38:39.597294 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594764    2570 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Mar 12 13:38:39.597294 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594767    2570 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Mar 12 13:38:39.597294 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594769    2570 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 13:38:39.597294 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594772    2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Mar 12 13:38:39.597294 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594774    2570 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Mar 12 13:38:39.597294 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594778    2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Mar 12 13:38:39.597833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594780    2570 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Mar 12 13:38:39.597833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594783    2570 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Mar 12 13:38:39.597833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594785    2570 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Mar 12 13:38:39.597833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594788    2570 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 12 13:38:39.597833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594790    2570 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 12 13:38:39.597833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594793    2570 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Mar 12 13:38:39.597833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594796    2570 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Mar 12 13:38:39.597833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594799    2570 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Mar 12 13:38:39.597833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594801    2570 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 12 13:38:39.597833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594803    2570 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Mar 12 13:38:39.597833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594806    2570 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Mar 12 13:38:39.597833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594809    2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Mar 12 13:38:39.597833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594812    2570 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Mar 12 13:38:39.597833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594815    2570 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Mar 12 13:38:39.597833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594818    2570 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Mar 12 13:38:39.597833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594821    2570 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Mar 12 13:38:39.597833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594824    2570 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Mar 12 13:38:39.597833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594827    2570 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Mar 12 13:38:39.597833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594829    2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Mar 12 13:38:39.597833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594832    2570 feature_gate.go:328] unrecognized feature gate: Example
Mar 12 13:38:39.598399 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594835    2570 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Mar 12 13:38:39.598399 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594838    2570 feature_gate.go:328] unrecognized feature gate: PinnedImages
Mar 12 13:38:39.598399 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594841    2570 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Mar 12 13:38:39.598399 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594843    2570 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Mar 12 13:38:39.598399 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594846    2570 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 12 13:38:39.598399 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594848    2570 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Mar 12 13:38:39.598399 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594851    2570 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 12 13:38:39.598399 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594854    2570 feature_gate.go:328] unrecognized feature gate: DualReplica
Mar 12 13:38:39.598399 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594856    2570 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Mar 12 13:38:39.598399 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.594859    2570 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Mar 12 13:38:39.598399 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.594950    2570 flags.go:64] FLAG: --address="0.0.0.0"
Mar 12 13:38:39.598399 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.594966    2570 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 12 13:38:39.598399 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.594978    2570 flags.go:64] FLAG: --anonymous-auth="true"
Mar 12 13:38:39.598399 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.594982    2570 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 12 13:38:39.598399 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.594987    2570 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 12 13:38:39.598399 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.594991    2570 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 12 13:38:39.598399 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.594996    2570 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 12 13:38:39.598399 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595001    2570 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 12 13:38:39.598399 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595005    2570 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 12 13:38:39.598399 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595008    2570 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 12 13:38:39.598399 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595012    2570 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 12 13:38:39.598971 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595016    2570 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 12 13:38:39.598971 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595019    2570 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 12 13:38:39.598971 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595022    2570 flags.go:64] FLAG: --cgroup-root=""
Mar 12 13:38:39.598971 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595025    2570 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 12 13:38:39.598971 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595029    2570 flags.go:64] FLAG: --client-ca-file=""
Mar 12 13:38:39.598971 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595031    2570 flags.go:64] FLAG: --cloud-config=""
Mar 12 13:38:39.598971 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595035    2570 flags.go:64] FLAG: --cloud-provider="external"
Mar 12 13:38:39.598971 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595038    2570 flags.go:64] FLAG: --cluster-dns="[]"
Mar 12 13:38:39.598971 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595042    2570 flags.go:64] FLAG: --cluster-domain=""
Mar 12 13:38:39.598971 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595045    2570 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 12 13:38:39.598971 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595049    2570 flags.go:64] FLAG: --config-dir=""
Mar 12 13:38:39.598971 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595052    2570 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 12 13:38:39.598971 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595056    2570 flags.go:64] FLAG: --container-log-max-files="5"
Mar 12 13:38:39.598971 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595060    2570 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 12 13:38:39.598971 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595063    2570 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 12 13:38:39.598971 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595067    2570 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 12 13:38:39.598971 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595070    2570 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 12 13:38:39.598971 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595073    2570 flags.go:64] FLAG: --contention-profiling="false"
Mar 12 13:38:39.598971 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595077    2570 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 12 13:38:39.598971 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595080    2570 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 12 13:38:39.598971 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595083    2570 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 12 13:38:39.598971 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595086    2570 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 12 13:38:39.598971 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595091    2570 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 12 13:38:39.598971 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595094    2570 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 12 13:38:39.598971 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595097    2570 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 12 13:38:39.599656 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595100    2570 flags.go:64] FLAG: --enable-load-reader="false"
Mar 12 13:38:39.599656 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595103    2570 flags.go:64] FLAG: --enable-server="true"
Mar 12 13:38:39.599656 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595106    2570 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 12 13:38:39.599656 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595111    2570 flags.go:64] FLAG: --event-burst="100"
Mar 12 13:38:39.599656 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595114    2570 flags.go:64] FLAG: --event-qps="50"
Mar 12 13:38:39.599656 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595118    2570 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 12 13:38:39.599656 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595121    2570 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 12 13:38:39.599656 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595125    2570 flags.go:64] FLAG: --eviction-hard=""
Mar 12 13:38:39.599656 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595129    2570 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 12 13:38:39.599656 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595132    2570 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 12 13:38:39.599656 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595135    2570 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 12 13:38:39.599656 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595138    2570 flags.go:64] FLAG: --eviction-soft=""
Mar 12 13:38:39.599656 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595141    2570 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 12 13:38:39.599656 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595144    2570 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 12 13:38:39.599656 ip-10-0-142-111 kubenswrapper[2570]:
I0312 13:38:39.595147 2570 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Mar 12 13:38:39.599656 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595151 2570 flags.go:64] FLAG: --experimental-mounter-path="" Mar 12 13:38:39.599656 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595154 2570 flags.go:64] FLAG: --fail-cgroupv1="false" Mar 12 13:38:39.599656 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595157 2570 flags.go:64] FLAG: --fail-swap-on="true" Mar 12 13:38:39.599656 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595160 2570 flags.go:64] FLAG: --feature-gates="" Mar 12 13:38:39.599656 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595164 2570 flags.go:64] FLAG: --file-check-frequency="20s" Mar 12 13:38:39.599656 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595167 2570 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Mar 12 13:38:39.599656 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595171 2570 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Mar 12 13:38:39.599656 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595174 2570 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Mar 12 13:38:39.599656 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595177 2570 flags.go:64] FLAG: --healthz-port="10248" Mar 12 13:38:39.599656 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595180 2570 flags.go:64] FLAG: --help="false" Mar 12 13:38:39.600322 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595183 2570 flags.go:64] FLAG: --hostname-override="ip-10-0-142-111.ec2.internal" Mar 12 13:38:39.600322 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595187 2570 flags.go:64] FLAG: --housekeeping-interval="10s" Mar 12 13:38:39.600322 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595190 2570 flags.go:64] FLAG: --http-check-frequency="20s" Mar 12 13:38:39.600322 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595193 2570 flags.go:64] FLAG: 
--image-credential-provider-bin-dir="/usr/libexec/kubelet-image-credential-provider-plugins" Mar 12 13:38:39.600322 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595196 2570 flags.go:64] FLAG: --image-credential-provider-config="/etc/kubernetes/credential-providers/ecr-credential-provider.yaml" Mar 12 13:38:39.600322 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595199 2570 flags.go:64] FLAG: --image-gc-high-threshold="85" Mar 12 13:38:39.600322 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595202 2570 flags.go:64] FLAG: --image-gc-low-threshold="80" Mar 12 13:38:39.600322 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595205 2570 flags.go:64] FLAG: --image-service-endpoint="" Mar 12 13:38:39.600322 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595208 2570 flags.go:64] FLAG: --kernel-memcg-notification="false" Mar 12 13:38:39.600322 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595211 2570 flags.go:64] FLAG: --kube-api-burst="100" Mar 12 13:38:39.600322 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595214 2570 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Mar 12 13:38:39.600322 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595217 2570 flags.go:64] FLAG: --kube-api-qps="50" Mar 12 13:38:39.600322 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595220 2570 flags.go:64] FLAG: --kube-reserved="" Mar 12 13:38:39.600322 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595223 2570 flags.go:64] FLAG: --kube-reserved-cgroup="" Mar 12 13:38:39.600322 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595226 2570 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Mar 12 13:38:39.600322 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595229 2570 flags.go:64] FLAG: --kubelet-cgroups="" Mar 12 13:38:39.600322 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595232 2570 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Mar 12 13:38:39.600322 ip-10-0-142-111 
kubenswrapper[2570]: I0312 13:38:39.595235 2570 flags.go:64] FLAG: --lock-file="" Mar 12 13:38:39.600322 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595238 2570 flags.go:64] FLAG: --log-cadvisor-usage="false" Mar 12 13:38:39.600322 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595241 2570 flags.go:64] FLAG: --log-flush-frequency="5s" Mar 12 13:38:39.600322 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595245 2570 flags.go:64] FLAG: --log-json-info-buffer-size="0" Mar 12 13:38:39.600322 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595251 2570 flags.go:64] FLAG: --log-json-split-stream="false" Mar 12 13:38:39.600322 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595254 2570 flags.go:64] FLAG: --log-text-info-buffer-size="0" Mar 12 13:38:39.600965 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595256 2570 flags.go:64] FLAG: --log-text-split-stream="false" Mar 12 13:38:39.600965 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595259 2570 flags.go:64] FLAG: --logging-format="text" Mar 12 13:38:39.600965 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595262 2570 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Mar 12 13:38:39.600965 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595266 2570 flags.go:64] FLAG: --make-iptables-util-chains="true" Mar 12 13:38:39.600965 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595268 2570 flags.go:64] FLAG: --manifest-url="" Mar 12 13:38:39.600965 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595271 2570 flags.go:64] FLAG: --manifest-url-header="" Mar 12 13:38:39.600965 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595276 2570 flags.go:64] FLAG: --max-housekeeping-interval="15s" Mar 12 13:38:39.600965 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595279 2570 flags.go:64] FLAG: --max-open-files="1000000" Mar 12 13:38:39.600965 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595283 2570 flags.go:64] FLAG: --max-pods="110" Mar 12 
13:38:39.600965 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595286 2570 flags.go:64] FLAG: --maximum-dead-containers="-1" Mar 12 13:38:39.600965 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595289 2570 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Mar 12 13:38:39.600965 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595292 2570 flags.go:64] FLAG: --memory-manager-policy="None" Mar 12 13:38:39.600965 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595295 2570 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Mar 12 13:38:39.600965 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595305 2570 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Mar 12 13:38:39.600965 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595308 2570 flags.go:64] FLAG: --node-ip="0.0.0.0" Mar 12 13:38:39.600965 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595311 2570 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhel" Mar 12 13:38:39.600965 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595320 2570 flags.go:64] FLAG: --node-status-max-images="50" Mar 12 13:38:39.600965 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595324 2570 flags.go:64] FLAG: --node-status-update-frequency="10s" Mar 12 13:38:39.600965 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595327 2570 flags.go:64] FLAG: --oom-score-adj="-999" Mar 12 13:38:39.600965 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595330 2570 flags.go:64] FLAG: --pod-cidr="" Mar 12 13:38:39.600965 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595333 2570 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3115b2610585407ab0742648cfbe39c72f57482889f0e778f5ac6fdc482217b" Mar 12 13:38:39.600965 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595340 2570 flags.go:64] FLAG: --pod-manifest-path="" Mar 12 13:38:39.600965 ip-10-0-142-111 kubenswrapper[2570]: I0312 
13:38:39.595343 2570 flags.go:64] FLAG: --pod-max-pids="-1" Mar 12 13:38:39.600965 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595346 2570 flags.go:64] FLAG: --pods-per-core="0" Mar 12 13:38:39.601588 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595349 2570 flags.go:64] FLAG: --port="10250" Mar 12 13:38:39.601588 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595352 2570 flags.go:64] FLAG: --protect-kernel-defaults="false" Mar 12 13:38:39.601588 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595355 2570 flags.go:64] FLAG: --provider-id="aws:///us-east-1a/i-0cde1406a8c443831" Mar 12 13:38:39.601588 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595358 2570 flags.go:64] FLAG: --qos-reserved="" Mar 12 13:38:39.601588 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595361 2570 flags.go:64] FLAG: --read-only-port="10255" Mar 12 13:38:39.601588 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595364 2570 flags.go:64] FLAG: --register-node="true" Mar 12 13:38:39.601588 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595367 2570 flags.go:64] FLAG: --register-schedulable="true" Mar 12 13:38:39.601588 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595370 2570 flags.go:64] FLAG: --register-with-taints="" Mar 12 13:38:39.601588 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595374 2570 flags.go:64] FLAG: --registry-burst="10" Mar 12 13:38:39.601588 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595377 2570 flags.go:64] FLAG: --registry-qps="5" Mar 12 13:38:39.601588 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595380 2570 flags.go:64] FLAG: --reserved-cpus="" Mar 12 13:38:39.601588 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595383 2570 flags.go:64] FLAG: --reserved-memory="" Mar 12 13:38:39.601588 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595386 2570 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Mar 12 13:38:39.601588 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595389 2570 
flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Mar 12 13:38:39.601588 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595392 2570 flags.go:64] FLAG: --rotate-certificates="false" Mar 12 13:38:39.601588 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595395 2570 flags.go:64] FLAG: --rotate-server-certificates="false" Mar 12 13:38:39.601588 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595398 2570 flags.go:64] FLAG: --runonce="false" Mar 12 13:38:39.601588 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595400 2570 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Mar 12 13:38:39.601588 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595403 2570 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Mar 12 13:38:39.601588 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595406 2570 flags.go:64] FLAG: --seccomp-default="false" Mar 12 13:38:39.601588 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595410 2570 flags.go:64] FLAG: --serialize-image-pulls="true" Mar 12 13:38:39.601588 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595415 2570 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Mar 12 13:38:39.601588 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595417 2570 flags.go:64] FLAG: --storage-driver-db="cadvisor" Mar 12 13:38:39.601588 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595420 2570 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Mar 12 13:38:39.601588 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595427 2570 flags.go:64] FLAG: --storage-driver-password="root" Mar 12 13:38:39.601588 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595430 2570 flags.go:64] FLAG: --storage-driver-secure="false" Mar 12 13:38:39.602297 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595433 2570 flags.go:64] FLAG: --storage-driver-table="stats" Mar 12 13:38:39.602297 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595436 2570 flags.go:64] FLAG: --storage-driver-user="root" Mar 12 
13:38:39.602297 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595439 2570 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Mar 12 13:38:39.602297 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595442 2570 flags.go:64] FLAG: --sync-frequency="1m0s" Mar 12 13:38:39.602297 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595445 2570 flags.go:64] FLAG: --system-cgroups="" Mar 12 13:38:39.602297 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595448 2570 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Mar 12 13:38:39.602297 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595454 2570 flags.go:64] FLAG: --system-reserved-cgroup="" Mar 12 13:38:39.602297 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595457 2570 flags.go:64] FLAG: --tls-cert-file="" Mar 12 13:38:39.602297 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595460 2570 flags.go:64] FLAG: --tls-cipher-suites="[]" Mar 12 13:38:39.602297 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595464 2570 flags.go:64] FLAG: --tls-min-version="" Mar 12 13:38:39.602297 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595467 2570 flags.go:64] FLAG: --tls-private-key-file="" Mar 12 13:38:39.602297 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595470 2570 flags.go:64] FLAG: --topology-manager-policy="none" Mar 12 13:38:39.602297 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595473 2570 flags.go:64] FLAG: --topology-manager-policy-options="" Mar 12 13:38:39.602297 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595476 2570 flags.go:64] FLAG: --topology-manager-scope="container" Mar 12 13:38:39.602297 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595479 2570 flags.go:64] FLAG: --v="2" Mar 12 13:38:39.602297 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595483 2570 flags.go:64] FLAG: --version="false" Mar 12 13:38:39.602297 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595487 2570 flags.go:64] FLAG: --vmodule="" 
Mar 12 13:38:39.602297 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595492 2570 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Mar 12 13:38:39.602297 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.595495 2570 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Mar 12 13:38:39.602297 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595593 2570 feature_gate.go:328] unrecognized feature gate: Example2 Mar 12 13:38:39.602297 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595597 2570 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Mar 12 13:38:39.602297 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595600 2570 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 12 13:38:39.602297 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595605 2570 feature_gate.go:328] unrecognized feature gate: Example Mar 12 13:38:39.602297 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595608 2570 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Mar 12 13:38:39.603207 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595610 2570 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Mar 12 13:38:39.603207 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595614 2570 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Mar 12 13:38:39.603207 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595633 2570 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Mar 12 13:38:39.603207 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595639 2570 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Mar 12 13:38:39.603207 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595642 2570 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Mar 12 13:38:39.603207 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595645 2570 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Mar 12 13:38:39.603207 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595649 2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Mar 12 13:38:39.603207 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595652 2570 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Mar 12 13:38:39.603207 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595655 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Mar 12 13:38:39.603207 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595657 2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Mar 12 13:38:39.603207 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595660 2570 feature_gate.go:328] unrecognized feature gate: PinnedImages Mar 12 13:38:39.603207 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595663 2570 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Mar 12 13:38:39.603207 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595666 2570 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Mar 12 13:38:39.603207 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595669 2570 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Mar 12 13:38:39.603207 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595671 2570 
feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Mar 12 13:38:39.603207 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595674 2570 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Mar 12 13:38:39.603207 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595677 2570 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Mar 12 13:38:39.603207 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595679 2570 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Mar 12 13:38:39.603207 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595682 2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Mar 12 13:38:39.604082 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595684 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Mar 12 13:38:39.604082 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595687 2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 12 13:38:39.604082 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595689 2570 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Mar 12 13:38:39.604082 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595692 2570 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Mar 12 13:38:39.604082 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595695 2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Mar 12 13:38:39.604082 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595697 2570 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Mar 12 13:38:39.604082 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595700 2570 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Mar 12 13:38:39.604082 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595702 2570 feature_gate.go:328] unrecognized feature gate: 
VSphereHostVMGroupZonal Mar 12 13:38:39.604082 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595705 2570 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 12 13:38:39.604082 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595707 2570 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Mar 12 13:38:39.604082 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595710 2570 feature_gate.go:328] unrecognized feature gate: OVNObservability Mar 12 13:38:39.604082 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595712 2570 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Mar 12 13:38:39.604082 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595715 2570 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Mar 12 13:38:39.604082 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595718 2570 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Mar 12 13:38:39.604082 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595721 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Mar 12 13:38:39.604082 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595723 2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Mar 12 13:38:39.604082 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595727 2570 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Mar 12 13:38:39.604082 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595741 2570 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Mar 12 13:38:39.604082 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595744 2570 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Mar 12 13:38:39.604082 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595748 2570 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Mar 12 13:38:39.605012 ip-10-0-142-111 kubenswrapper[2570]: W0312 
13:38:39.595750 2570 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Mar 12 13:38:39.605012 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595754 2570 feature_gate.go:328] unrecognized feature gate: GatewayAPI Mar 12 13:38:39.605012 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595756 2570 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Mar 12 13:38:39.605012 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595759 2570 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Mar 12 13:38:39.605012 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595761 2570 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Mar 12 13:38:39.605012 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595764 2570 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Mar 12 13:38:39.605012 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595766 2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Mar 12 13:38:39.605012 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595769 2570 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Mar 12 13:38:39.605012 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595771 2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Mar 12 13:38:39.605012 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595774 2570 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Mar 12 13:38:39.605012 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595777 2570 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Mar 12 13:38:39.605012 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595779 2570 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Mar 12 13:38:39.605012 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595782 2570 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Mar 12 13:38:39.605012 
ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595784 2570 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Mar 12 13:38:39.605012 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595787 2570 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 12 13:38:39.605012 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595790 2570 feature_gate.go:328] unrecognized feature gate: InsightsConfig Mar 12 13:38:39.605012 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595793 2570 feature_gate.go:328] unrecognized feature gate: SignatureStores Mar 12 13:38:39.605012 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595795 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Mar 12 13:38:39.605012 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595797 2570 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 12 13:38:39.605012 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595800 2570 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 12 13:38:39.606053 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595802 2570 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Mar 12 13:38:39.606053 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595805 2570 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Mar 12 13:38:39.606053 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595808 2570 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Mar 12 13:38:39.606053 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595810 2570 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Mar 12 13:38:39.606053 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595813 2570 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Mar 12 13:38:39.606053 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595816 
2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 12 13:38:39.606053 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595818 2570 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Mar 12 13:38:39.606053 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595821 2570 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Mar 12 13:38:39.606053 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595825 2570 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 12 13:38:39.606053 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595827 2570 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Mar 12 13:38:39.606053 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595831 2570 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Mar 12 13:38:39.606053 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595835 2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Mar 12 13:38:39.606053 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595837 2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Mar 12 13:38:39.606053 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595840 2570 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Mar 12 13:38:39.606053 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595842 2570 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Mar 12 13:38:39.606053 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595845 2570 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Mar 12 13:38:39.606053 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595847 2570 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Mar 12 13:38:39.606053 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595850 2570 feature_gate.go:328] unrecognized feature gate: NewOLM Mar 12 13:38:39.606053 
ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595853 2570 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Mar 12 13:38:39.606053 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595855 2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Mar 12 13:38:39.606590 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595857 2570 feature_gate.go:328] unrecognized feature gate: DualReplica Mar 12 13:38:39.606590 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.595860 2570 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Mar 12 13:38:39.606590 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.596741 2570 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Mar 12 13:38:39.606590 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.604725 2570 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 12 13:38:39.606590 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.604752 2570 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 12 13:38:39.606590 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.604829 2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Mar 12 13:38:39.606590 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.604839 2570 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Mar 12 13:38:39.606590 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.604847 2570 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 12 13:38:39.606590 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.604855 2570 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Mar 12 13:38:39.606590 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.604860 2570 feature_gate.go:328] unrecognized feature gate: NewOLM Mar 12 13:38:39.606590 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.604865 2570 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Mar 12 13:38:39.606590 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.604869 2570 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Mar 12 13:38:39.606590 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.604875 2570 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Mar 12 13:38:39.606590 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.604881 2570 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Mar 12 13:38:39.606590 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.604886 2570 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Mar 12 13:38:39.607108 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.604892 2570 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Mar 12 13:38:39.607108 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.604896 2570 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Mar 12 13:38:39.607108 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.604902 2570 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Mar 12 13:38:39.607108 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.604907 2570 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Mar 12 13:38:39.607108 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.604912 2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Mar 12 13:38:39.607108 ip-10-0-142-111 
kubenswrapper[2570]: W0312 13:38:39.604916 2570 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Mar 12 13:38:39.607108 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.604921 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Mar 12 13:38:39.607108 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.604928 2570 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Mar 12 13:38:39.607108 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.604932 2570 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Mar 12 13:38:39.607108 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.604937 2570 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Mar 12 13:38:39.607108 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.604942 2570 feature_gate.go:328] unrecognized feature gate: OVNObservability Mar 12 13:38:39.607108 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.604947 2570 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Mar 12 13:38:39.607108 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.604952 2570 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Mar 12 13:38:39.607108 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.604957 2570 feature_gate.go:328] unrecognized feature gate: PinnedImages Mar 12 13:38:39.607108 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.604961 2570 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Mar 12 13:38:39.607108 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.604965 2570 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Mar 12 13:38:39.607108 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.604970 2570 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Mar 12 13:38:39.607108 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.604974 2570 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Mar 12 
13:38:39.607108 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.604979 2570 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Mar 12 13:38:39.607108 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.604983 2570 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Mar 12 13:38:39.607833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.604988 2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Mar 12 13:38:39.607833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.604993 2570 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Mar 12 13:38:39.607833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.604998 2570 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Mar 12 13:38:39.607833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605002 2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Mar 12 13:38:39.607833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605006 2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 12 13:38:39.607833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605010 2570 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Mar 12 13:38:39.607833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605014 2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Mar 12 13:38:39.607833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605018 2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Mar 12 13:38:39.607833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605022 2570 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Mar 12 13:38:39.607833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605026 2570 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 12 13:38:39.607833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605030 2570 
feature_gate.go:328] unrecognized feature gate: InsightsConfig Mar 12 13:38:39.607833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605034 2570 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Mar 12 13:38:39.607833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605040 2570 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Mar 12 13:38:39.607833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605044 2570 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Mar 12 13:38:39.607833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605048 2570 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Mar 12 13:38:39.607833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605052 2570 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 12 13:38:39.607833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605056 2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Mar 12 13:38:39.607833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605061 2570 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Mar 12 13:38:39.607833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605065 2570 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Mar 12 13:38:39.607833 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605069 2570 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Mar 12 13:38:39.608777 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605074 2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 12 13:38:39.608777 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605079 2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Mar 12 13:38:39.608777 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605084 2570 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Mar 12 
13:38:39.608777 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605088 2570 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Mar 12 13:38:39.608777 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605092 2570 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Mar 12 13:38:39.608777 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605096 2570 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 12 13:38:39.608777 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605100 2570 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Mar 12 13:38:39.608777 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605104 2570 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Mar 12 13:38:39.608777 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605108 2570 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 12 13:38:39.608777 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605112 2570 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Mar 12 13:38:39.608777 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605116 2570 feature_gate.go:328] unrecognized feature gate: DualReplica Mar 12 13:38:39.608777 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605120 2570 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Mar 12 13:38:39.608777 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605124 2570 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 12 13:38:39.608777 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605129 2570 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Mar 12 13:38:39.608777 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605134 2570 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Mar 12 13:38:39.608777 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605138 2570 feature_gate.go:328] 
unrecognized feature gate: ManagedBootImagesAzure Mar 12 13:38:39.608777 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605142 2570 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Mar 12 13:38:39.608777 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605146 2570 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Mar 12 13:38:39.608777 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605150 2570 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Mar 12 13:38:39.608777 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605154 2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Mar 12 13:38:39.609419 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605158 2570 feature_gate.go:328] unrecognized feature gate: SignatureStores Mar 12 13:38:39.609419 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605162 2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Mar 12 13:38:39.609419 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605166 2570 feature_gate.go:328] unrecognized feature gate: GatewayAPI Mar 12 13:38:39.609419 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605170 2570 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Mar 12 13:38:39.609419 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605174 2570 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Mar 12 13:38:39.609419 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605178 2570 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Mar 12 13:38:39.609419 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605182 2570 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Mar 12 13:38:39.609419 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605186 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Mar 12 13:38:39.609419 ip-10-0-142-111 kubenswrapper[2570]: W0312 
13:38:39.605190 2570 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Mar 12 13:38:39.609419 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605194 2570 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Mar 12 13:38:39.609419 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605198 2570 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Mar 12 13:38:39.609419 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605202 2570 feature_gate.go:328] unrecognized feature gate: Example2 Mar 12 13:38:39.609419 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605207 2570 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Mar 12 13:38:39.609419 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605213 2570 feature_gate.go:328] unrecognized feature gate: Example Mar 12 13:38:39.609419 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605217 2570 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Mar 12 13:38:39.609419 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605221 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Mar 12 13:38:39.610108 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.605230 2570 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Mar 12 13:38:39.610108 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605395 2570 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Mar 12 13:38:39.610108 ip-10-0-142-111 
kubenswrapper[2570]: W0312 13:38:39.605403 2570 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Mar 12 13:38:39.610108 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605408 2570 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Mar 12 13:38:39.610108 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605413 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Mar 12 13:38:39.610108 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605417 2570 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Mar 12 13:38:39.610108 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605422 2570 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Mar 12 13:38:39.610108 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605426 2570 feature_gate.go:328] unrecognized feature gate: SignatureStores Mar 12 13:38:39.610108 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605431 2570 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Mar 12 13:38:39.610108 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605435 2570 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Mar 12 13:38:39.610108 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605440 2570 feature_gate.go:328] unrecognized feature gate: Example2 Mar 12 13:38:39.610108 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605444 2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Mar 12 13:38:39.610108 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605448 2570 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Mar 12 13:38:39.610108 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605452 2570 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Mar 12 13:38:39.610108 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605457 2570 feature_gate.go:328] unrecognized feature gate: 
NewOLMPreflightPermissionChecks Mar 12 13:38:39.610509 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605461 2570 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 12 13:38:39.610509 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605465 2570 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Mar 12 13:38:39.610509 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605469 2570 feature_gate.go:328] unrecognized feature gate: OVNObservability Mar 12 13:38:39.610509 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605473 2570 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Mar 12 13:38:39.610509 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605478 2570 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Mar 12 13:38:39.610509 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605482 2570 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Mar 12 13:38:39.610509 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605486 2570 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Mar 12 13:38:39.610509 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605490 2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Mar 12 13:38:39.610509 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605495 2570 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Mar 12 13:38:39.610509 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605499 2570 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Mar 12 13:38:39.610509 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605503 2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Mar 12 13:38:39.610509 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605507 2570 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Mar 12 13:38:39.610509 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605511 
2570 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Mar 12 13:38:39.610509 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605516 2570 feature_gate.go:328] unrecognized feature gate: Example Mar 12 13:38:39.610509 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605520 2570 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Mar 12 13:38:39.610509 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605525 2570 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Mar 12 13:38:39.610509 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605530 2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Mar 12 13:38:39.610509 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605534 2570 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Mar 12 13:38:39.610509 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605538 2570 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Mar 12 13:38:39.610509 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605542 2570 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Mar 12 13:38:39.611116 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605548 2570 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Mar 12 13:38:39.611116 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605554 2570 feature_gate.go:328] unrecognized feature gate: NewOLM Mar 12 13:38:39.611116 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605560 2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Mar 12 13:38:39.611116 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605565 2570 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 12 13:38:39.611116 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605569 2570 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Mar 12 13:38:39.611116 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605573 2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Mar 12 13:38:39.611116 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605578 2570 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Mar 12 13:38:39.611116 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605584 2570 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 12 13:38:39.611116 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605590 2570 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Mar 12 13:38:39.611116 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605595 2570 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Mar 12 13:38:39.611116 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605600 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Mar 12 13:38:39.611116 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605604 2570 feature_gate.go:328] unrecognized feature gate: PinnedImages Mar 12 13:38:39.611116 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605608 2570 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Mar 12 13:38:39.611116 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605613 2570 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Mar 12 13:38:39.611116 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605638 2570 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Mar 12 13:38:39.611116 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605643 2570 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Mar 12 13:38:39.611116 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605647 2570 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Mar 12 13:38:39.611116 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605650 2570 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 12 13:38:39.611116 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605654 2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Mar 12 13:38:39.611647 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605658 2570 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Mar 12 13:38:39.611647 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605661 2570 
feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Mar 12 13:38:39.611647 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605665 2570 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Mar 12 13:38:39.611647 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605669 2570 feature_gate.go:328] unrecognized feature gate: GatewayAPI Mar 12 13:38:39.611647 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605673 2570 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Mar 12 13:38:39.611647 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605677 2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 12 13:38:39.611647 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605681 2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 12 13:38:39.611647 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605684 2570 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Mar 12 13:38:39.611647 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605688 2570 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Mar 12 13:38:39.611647 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605694 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Mar 12 13:38:39.611647 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605698 2570 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Mar 12 13:38:39.611647 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605702 2570 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Mar 12 13:38:39.611647 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605707 2570 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Mar 12 13:38:39.611647 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605711 2570 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Mar 12 
13:38:39.611647 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605715 2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Mar 12 13:38:39.611647 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605719 2570 feature_gate.go:328] unrecognized feature gate: InsightsConfig Mar 12 13:38:39.611647 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605723 2570 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Mar 12 13:38:39.611647 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605727 2570 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Mar 12 13:38:39.611647 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605732 2570 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Mar 12 13:38:39.611647 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605736 2570 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 12 13:38:39.612210 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605740 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Mar 12 13:38:39.612210 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605743 2570 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Mar 12 13:38:39.612210 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605747 2570 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Mar 12 13:38:39.612210 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605751 2570 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Mar 12 13:38:39.612210 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605755 2570 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 12 13:38:39.612210 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605759 2570 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Mar 12 13:38:39.612210 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605763 2570 feature_gate.go:328] 
unrecognized feature gate: GCPCustomAPIEndpointsInstall Mar 12 13:38:39.612210 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605768 2570 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Mar 12 13:38:39.612210 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605772 2570 feature_gate.go:328] unrecognized feature gate: DualReplica Mar 12 13:38:39.612210 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605776 2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Mar 12 13:38:39.612210 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605780 2570 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Mar 12 13:38:39.612210 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605783 2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Mar 12 13:38:39.612210 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:39.605788 2570 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Mar 12 13:38:39.612210 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.605796 2570 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Mar 12 13:38:39.612210 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.606718 2570 server.go:962] "Client rotation is on, will bootstrap in background" Mar 12 13:38:39.612608 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.609787 2570 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Mar 12 13:38:39.612608 ip-10-0-142-111 kubenswrapper[2570]: 
I0312 13:38:39.610966 2570 server.go:1019] "Starting client certificate rotation"
Mar 12 13:38:39.612608 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.611064 2570 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet"
Mar 12 13:38:39.612608 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.611109 2570 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Mar 12 13:38:39.640855 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.640825 2570 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 12 13:38:39.643723 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.643699 2570 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 12 13:38:39.664040 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.664005 2570 log.go:25] "Validated CRI v1 runtime API"
Mar 12 13:38:39.668497 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.668467 2570 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Mar 12 13:38:39.669872 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.669852 2570 log.go:25] "Validated CRI v1 image API"
Mar 12 13:38:39.671216 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.671194 2570 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 12 13:38:39.675586 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.675561 2570 fs.go:135] Filesystem UUIDs: map[21ff95f7-0d9e-4a71-b8fe-8ca92efddc5e:/dev/nvme0n1p3 7B77-95E7:/dev/nvme0n1p2 b6f5cd5b-ab92-40ec-adfa-542e31a3725b:/dev/nvme0n1p4]
Mar 12 13:38:39.675586 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.675584 2570 fs.go:136] Filesystem partitions: map[/dev/nvme0n1p3:{mountpoint:/boot major:259 minor:3 fsType:ext4 blockSize:0} /dev/nvme0n1p4:{mountpoint:/var major:259 minor:4 fsType:xfs blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}]
Mar 12 13:38:39.681541 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.681408 2570 manager.go:217] Machine: {Timestamp:2026-03-12 13:38:39.679392674 +0000 UTC m=+0.454856308 CPUVendorID:GenuineIntel NumCores:8 NumPhysicalCores:4 NumSockets:1 CpuFrequency:3101090 MemoryCapacity:32812163072 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ec2ee2e9989dcd2e8d2288aad76da37f SystemUUID:ec2ee2e9-989d-cd2e-8d22-88aad76da37f BootID:c0b08f08-05ee-417c-a51f-a61d00281d8c Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16406081536 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/nvme0n1p3 DeviceMajor:259 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16406081536 Type:vfs Inodes:4005391 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6562435072 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/nvme0n1p4 DeviceMajor:259 DeviceMinor:4 Capacity:128243970048 Type:vfs Inodes:62651840 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6094848 Type:vfs Inodes:18446744073709551615 HasInodes:true}] DiskMap:map[259:0:{Name:nvme0n1 Major:259 Minor:0 Size:128849018880 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:02:50:be:a7:fd:27 Speed:0 Mtu:9001} {Name:ens5 MacAddress:02:50:be:a7:fd:27 Speed:0 Mtu:9001} {Name:ovs-system MacAddress:8a:2f:96:b2:80:cd Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:32812163072 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 4] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:1 Threads:[1 5] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:2 Threads:[2 6] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:3 Threads:[3 7] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:}] Caches:[{Id:0 Size:34603008 Type:Unified Level:3}] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Mar 12 13:38:39.681541 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.681529 2570 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Mar 12 13:38:39.681688 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.681646 2570 manager.go:233] Version: {KernelVersion:5.14.0-570.96.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20260303-1 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Mar 12 13:38:39.682850 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.682818 2570 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 12 13:38:39.682996 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.682853 2570 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-10-0-142-111.ec2.internal","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 12 13:38:39.683045 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.683006 2570 topology_manager.go:138] "Creating topology manager with none policy"
Mar 12 13:38:39.683045 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.683016 2570 container_manager_linux.go:306] "Creating device plugin manager"
Mar 12 13:38:39.683045 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.683029 2570 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 12 13:38:39.683133 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.683049 2570 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 12 13:38:39.684054 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.684042 2570 state_mem.go:36] "Initialized new in-memory state store"
Mar 12 13:38:39.684172 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.684163 2570 server.go:1267] "Using root directory" path="/var/lib/kubelet"
Mar 12 13:38:39.686481 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.686467 2570 kubelet.go:491] "Attempting to sync node with API server"
Mar 12 13:38:39.686539 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.686487 2570 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 12 13:38:39.686539 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.686501 2570 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Mar 12 13:38:39.686539 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.686512 2570 kubelet.go:397] "Adding apiserver pod source"
Mar 12 13:38:39.686539 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.686533 2570 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 12 13:38:39.687796 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.687779 2570 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Mar 12 13:38:39.687855 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.687808 2570 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Mar 12 13:38:39.691117 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.691100 2570 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.9-3.rhaos4.20.gitb9ac835.el9" apiVersion="v1"
Mar 12 13:38:39.693443 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.693426 2570 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 12 13:38:39.695700 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.695679 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Mar 12 13:38:39.695764 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.695715 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Mar 12 13:38:39.695764 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.695729 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Mar 12 13:38:39.695764 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.695741 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Mar 12 13:38:39.695764 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.695753 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Mar 12 13:38:39.695895 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.695765 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Mar 12 13:38:39.695895 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.695778 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Mar 12 13:38:39.695895 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.695791 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Mar 12 13:38:39.695895 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.695805 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Mar 12 13:38:39.695895 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.695818 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Mar 12 13:38:39.695895 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.695833 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Mar 12 13:38:39.695895 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.695851 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Mar 12 13:38:39.696721 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.696710 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Mar 12 13:38:39.696774 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.696725 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Mar 12 13:38:39.696937 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.696909 2570 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-r4k88"
Mar 12 13:38:39.699944 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:39.699907 2570 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-142-111.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 12 13:38:39.700057 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.699943 2570 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-111.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 13:38:39.700057 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:39.699953 2570 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 12 13:38:39.700671 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.700659 2570 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 12 13:38:39.700722 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.700698 2570 server.go:1295] "Started kubelet"
Mar 12 13:38:39.700864 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.700829 2570 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 12 13:38:39.700900 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.700855 2570 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 12 13:38:39.700943 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.700930 2570 server_v1.go:47] "podresources" method="list" useActivePods=true
Mar 12 13:38:39.701575 ip-10-0-142-111 systemd[1]: Started Kubernetes Kubelet.
Mar 12 13:38:39.702347 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.702281 2570 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 12 13:38:39.708558 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.708529 2570 server.go:317] "Adding debug handlers to kubelet server"
Mar 12 13:38:39.713809 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.713785 2570 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving"
Mar 12 13:38:39.714305 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.714289 2570 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 12 13:38:39.715138 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.715115 2570 volume_manager.go:295] "The desired_state_of_world populator starts"
Mar 12 13:38:39.715138 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.715140 2570 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 12 13:38:39.715261 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.715114 2570 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 12 13:38:39.715316 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.715257 2570 reconstruct.go:97] "Volume reconstruction finished"
Mar 12 13:38:39.715316 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.715270 2570 reconciler.go:26] "Reconciler: start to sync state"
Mar 12 13:38:39.715382 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:39.715343 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-111.ec2.internal\" not found"
Mar 12 13:38:39.715880 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:39.715849 2570 kubelet.go:1618] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Mar 12 13:38:39.715950 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.715920 2570 factory.go:55] Registering systemd factory
Mar 12 13:38:39.715950 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.715935 2570 factory.go:223] Registration of the systemd container factory successfully
Mar 12 13:38:39.716187 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.716171 2570 factory.go:153] Registering CRI-O factory
Mar 12 13:38:39.716274 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.716192 2570 factory.go:223] Registration of the crio container factory successfully
Mar 12 13:38:39.716274 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.716257 2570 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Mar 12 13:38:39.716352 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.716285 2570 factory.go:103] Registering Raw factory
Mar 12 13:38:39.716352 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.716302 2570 manager.go:1196] Started watching for new ooms in manager
Mar 12 13:38:39.717645 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.717612 2570 manager.go:319] Starting recovery of all containers
Mar 12 13:38:39.723164 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:39.722968 2570 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-142-111.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Mar 12 13:38:39.723164 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:39.722961 2570 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 12 13:38:39.724144 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:39.723102 2570 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-111.ec2.internal.189c1b9e3be35adf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-111.ec2.internal,UID:ip-10-0-142-111.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:39.700671199 +0000 UTC m=+0.476134832,LastTimestamp:2026-03-12 13:38:39.700671199 +0000 UTC m=+0.476134832,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}"
Mar 12 13:38:39.728703 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.728682 2570 manager.go:324] Recovery completed
Mar 12 13:38:39.733137 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.733121 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 12 13:38:39.735775 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.735754 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientMemory"
Mar 12 13:38:39.735880 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.735793 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasNoDiskPressure"
Mar 12 13:38:39.735880 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.735807 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientPID"
Mar 12 13:38:39.736327 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.736301 2570 cpu_manager.go:222] "Starting CPU manager" policy="none"
Mar 12 13:38:39.736327 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.736316 2570 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s"
Mar 12 13:38:39.736469 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.736336 2570 state_mem.go:36] "Initialized new in-memory state store"
Mar 12 13:38:39.737760 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.737746 2570 policy_none.go:49] "None policy: Start"
Mar 12 13:38:39.737836 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.737766 2570 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 12 13:38:39.737836 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.737779 2570 state_mem.go:35] "Initializing new in-memory state store"
Mar 12 13:38:39.738675 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:39.738577 2570 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-111.ec2.internal.189c1b9e3dfaff3d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-111.ec2.internal,UID:ip-10-0-142-111.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-142-111.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:39.735775037 +0000 UTC m=+0.511238678,LastTimestamp:2026-03-12 13:38:39.735775037 +0000 UTC m=+0.511238678,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}"
Mar 12 13:38:39.749577 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:39.749495 2570 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-111.ec2.internal.189c1b9e3dfb5f83 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-111.ec2.internal,UID:ip-10-0-142-111.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-142-111.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:39.735799683 +0000 UTC m=+0.511263319,LastTimestamp:2026-03-12 13:38:39.735799683 +0000 UTC m=+0.511263319,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}"
Mar 12 13:38:39.757254 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:39.757168 2570 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-111.ec2.internal.189c1b9e3dfb9907 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-111.ec2.internal,UID:ip-10-0-142-111.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-142-111.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:39.735814407 +0000 UTC m=+0.511278041,LastTimestamp:2026-03-12 13:38:39.735814407 +0000 UTC m=+0.511278041,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}"
Mar 12 13:38:39.776402 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.776383 2570 manager.go:341] "Starting Device Plugin manager"
Mar 12 13:38:39.792209 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:39.776420 2570 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 12 13:38:39.792209 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.776430 2570 server.go:85] "Starting device plugin registration server"
Mar 12 13:38:39.792209 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.776742 2570 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 12 13:38:39.792209 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.776755 2570 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 12 13:38:39.792209 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.776838 2570 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Mar 12 13:38:39.792209 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.777023 2570 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Mar 12 13:38:39.792209 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.777034 2570 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 12 13:38:39.792209 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:39.777593 2570 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\""
Mar 12 13:38:39.792209 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:39.777658 2570 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-142-111.ec2.internal\" not found"
Mar 12 13:38:39.792209 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:39.789277 2570 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-111.ec2.internal.189c1b9e408f99dc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-111.ec2.internal,UID:ip-10-0-142-111.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:39.77906838 +0000 UTC m=+0.554532002,LastTimestamp:2026-03-12 13:38:39.77906838 +0000 UTC m=+0.554532002,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}"
Mar 12 13:38:39.859356 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.859259 2570 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 12 13:38:39.860637 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.860604 2570 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 12 13:38:39.860711 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.860648 2570 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 12 13:38:39.860711 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.860675 2570 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 12 13:38:39.860711 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.860682 2570 kubelet.go:2451] "Starting kubelet main sync loop"
Mar 12 13:38:39.860844 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:39.860718 2570 kubelet.go:2475] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Mar 12 13:38:39.871796 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:39.871763 2570 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 12 13:38:39.877028 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.877008 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 12 13:38:39.877917 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.877900 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientMemory"
Mar 12 13:38:39.878008 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.877939 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasNoDiskPressure"
Mar 12 13:38:39.878008 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.877954 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientPID"
Mar 12 13:38:39.878008 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.877986 2570 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-142-111.ec2.internal"
Mar 12 13:38:39.886339 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:39.886253 2570 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-111.ec2.internal.189c1b9e3dfaff3d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-111.ec2.internal.189c1b9e3dfaff3d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-111.ec2.internal,UID:ip-10-0-142-111.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-142-111.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:39.735775037 +0000 UTC m=+0.511238678,LastTimestamp:2026-03-12 13:38:39.877920442 +0000 UTC m=+0.653384090,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}"
Mar 12 13:38:39.897752 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:39.897718 2570 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-142-111.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-142-111.ec2.internal"
Mar 12 13:38:39.897884 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:39.897692 2570 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-111.ec2.internal.189c1b9e3dfb5f83\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-111.ec2.internal.189c1b9e3dfb5f83 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-111.ec2.internal,UID:ip-10-0-142-111.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-142-111.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:39.735799683 +0000 UTC m=+0.511263319,LastTimestamp:2026-03-12 13:38:39.877945651 +0000 UTC m=+0.653409289,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}"
Mar 12 13:38:39.911228 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:39.911138 2570 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-111.ec2.internal.189c1b9e3dfb9907\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-111.ec2.internal.189c1b9e3dfb9907 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-111.ec2.internal,UID:ip-10-0-142-111.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-142-111.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:39.735814407 +0000 UTC m=+0.511278041,LastTimestamp:2026-03-12 13:38:39.877959632 +0000 UTC m=+0.653423267,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}"
Mar 12 13:38:39.932905 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:39.932868 2570 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-142-111.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms"
Mar 12 13:38:39.961171 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.961135 2570 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal","kube-system/kube-apiserver-proxy-ip-10-0-142-111.ec2.internal"]
Mar 12 13:38:39.961234 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.961224 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 12 13:38:39.963153 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.963134 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientMemory"
Mar 12 13:38:39.963290 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.963165 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasNoDiskPressure"
Mar 12 13:38:39.963290 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.963175 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientPID"
Mar 12 13:38:39.964631 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.964604 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 12 13:38:39.965292 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.965275 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientMemory"
Mar 12 13:38:39.965388 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.965306 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasNoDiskPressure"
Mar 12 13:38:39.965388 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.965315 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientPID"
Mar 12 13:38:39.965499 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.965482 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal"
Mar 12 13:38:39.965547 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.965538 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 12 13:38:39.966245 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.966232 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientMemory"
Mar 12 13:38:39.966314 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.966266 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasNoDiskPressure"
Mar 12 13:38:39.966314 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.966277 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientPID"
Mar 12 13:38:39.966684 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.966670 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-142-111.ec2.internal"
Mar 12 13:38:39.966765 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.966701 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 12 13:38:39.967301 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.967284 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientMemory"
Mar 12 13:38:39.967393 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.967312 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasNoDiskPressure"
Mar 12 13:38:39.967393 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:39.967325 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientPID"
Mar 12 13:38:39.972847 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:39.972739 2570 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-111.ec2.internal.189c1b9e3dfaff3d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-111.ec2.internal.189c1b9e3dfaff3d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-111.ec2.internal,UID:ip-10-0-142-111.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-142-111.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:39.735775037 +0000 UTC m=+0.511238678,LastTimestamp:2026-03-12 13:38:39.963151247 +0000 UTC m=+0.738614880,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}"
Mar 12 13:38:39.983998 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:39.983908 2570 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-111.ec2.internal.189c1b9e3dfb5f83\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-111.ec2.internal.189c1b9e3dfb5f83 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-111.ec2.internal,UID:ip-10-0-142-111.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-142-111.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:39.735799683 +0000 UTC m=+0.511263319,LastTimestamp:2026-03-12 13:38:39.963169446 +0000 UTC m=+0.738633080,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}"
Mar 12 13:38:39.988416 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:39.988379 2570 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-142-111.ec2.internal\" not found" node="ip-10-0-142-111.ec2.internal"
Mar 12 13:38:39.993419 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:39.993398 2570 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-142-111.ec2.internal\" not found" node="ip-10-0-142-111.ec2.internal"
Mar 12 13:38:39.996375 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:39.996296 2570 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-111.ec2.internal.189c1b9e3dfb9907\" is forbidden: User
\"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-111.ec2.internal.189c1b9e3dfb9907 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-111.ec2.internal,UID:ip-10-0-142-111.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-142-111.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:39.735814407 +0000 UTC m=+0.511278041,LastTimestamp:2026-03-12 13:38:39.963179407 +0000 UTC m=+0.738643040,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}" Mar 12 13:38:40.012385 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:40.012295 2570 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-111.ec2.internal.189c1b9e3dfaff3d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-111.ec2.internal.189c1b9e3dfaff3d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-111.ec2.internal,UID:ip-10-0-142-111.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-142-111.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:39.735775037 +0000 UTC m=+0.511238678,LastTimestamp:2026-03-12 13:38:39.965293193 +0000 UTC m=+0.740756836,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}" 
Mar 12 13:38:40.016986 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:40.016962 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/56ea2cd715dd568d3d7e0aab566769bf-config\") pod \"kube-apiserver-proxy-ip-10-0-142-111.ec2.internal\" (UID: \"56ea2cd715dd568d3d7e0aab566769bf\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-142-111.ec2.internal"
Mar 12 13:38:40.017101 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:40.016994 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/0e4e8f3d30bf75c22161da0d94e78eb7-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal\" (UID: \"0e4e8f3d30bf75c22161da0d94e78eb7\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal"
Mar 12 13:38:40.017101 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:40.017037 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0e4e8f3d30bf75c22161da0d94e78eb7-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal\" (UID: \"0e4e8f3d30bf75c22161da0d94e78eb7\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal"
Mar 12 13:38:40.034367 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:40.034267 2570 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-111.ec2.internal.189c1b9e3dfb5f83\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-111.ec2.internal.189c1b9e3dfb5f83 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-111.ec2.internal,UID:ip-10-0-142-111.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-142-111.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:39.735799683 +0000 UTC m=+0.511263319,LastTimestamp:2026-03-12 13:38:39.965311139 +0000 UTC m=+0.740774773,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}"
Mar 12 13:38:40.044442 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:40.044359 2570 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-111.ec2.internal.189c1b9e3dfb9907\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-111.ec2.internal.189c1b9e3dfb9907 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-111.ec2.internal,UID:ip-10-0-142-111.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-142-111.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:39.735814407 +0000 UTC m=+0.511278041,LastTimestamp:2026-03-12 13:38:39.965319363 +0000 UTC m=+0.740782997,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}"
Mar 12 13:38:40.056348 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:40.056266 2570 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-111.ec2.internal.189c1b9e3dfaff3d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-111.ec2.internal.189c1b9e3dfaff3d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-111.ec2.internal,UID:ip-10-0-142-111.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-142-111.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:39.735775037 +0000 UTC m=+0.511238678,LastTimestamp:2026-03-12 13:38:39.966245694 +0000 UTC m=+0.741709328,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}"
Mar 12 13:38:40.070351 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:40.070267 2570 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-111.ec2.internal.189c1b9e3dfb5f83\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-111.ec2.internal.189c1b9e3dfb5f83 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-111.ec2.internal,UID:ip-10-0-142-111.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-142-111.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:39.735799683 +0000 UTC m=+0.511263319,LastTimestamp:2026-03-12 13:38:39.966270868 +0000 UTC m=+0.741734502,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}"
Mar 12 13:38:40.078143 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:40.078052 2570 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-111.ec2.internal.189c1b9e3dfb9907\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-111.ec2.internal.189c1b9e3dfb9907 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-111.ec2.internal,UID:ip-10-0-142-111.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-142-111.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:39.735814407 +0000 UTC m=+0.511278041,LastTimestamp:2026-03-12 13:38:39.966281169 +0000 UTC m=+0.741744803,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}"
Mar 12 13:38:40.091942 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:40.091836 2570 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-111.ec2.internal.189c1b9e3dfaff3d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-111.ec2.internal.189c1b9e3dfaff3d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-111.ec2.internal,UID:ip-10-0-142-111.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-142-111.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:39.735775037 +0000 UTC m=+0.511238678,LastTimestamp:2026-03-12 13:38:39.967297006 +0000 UTC m=+0.742760645,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}"
Mar 12 13:38:40.097960 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:40.097934 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 12 13:38:40.098841 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:40.098823 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientMemory"
Mar 12 13:38:40.098966 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:40.098861 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasNoDiskPressure"
Mar 12 13:38:40.098966 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:40.098878 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientPID"
Mar 12 13:38:40.098966 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:40.098920 2570 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-142-111.ec2.internal"
Mar 12 13:38:40.109495 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:40.109413 2570 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-142-111.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-142-111.ec2.internal"
Mar 12 13:38:40.109613 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:40.109433 2570 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-111.ec2.internal.189c1b9e3dfb5f83\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-111.ec2.internal.189c1b9e3dfb5f83 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-111.ec2.internal,UID:ip-10-0-142-111.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-142-111.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:39.735799683 +0000 UTC m=+0.511263319,LastTimestamp:2026-03-12 13:38:39.967317878 +0000 UTC m=+0.742781515,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}"
Mar 12 13:38:40.113337 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:40.113259 2570 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-111.ec2.internal.189c1b9e3dfb9907\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-111.ec2.internal.189c1b9e3dfb9907 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-111.ec2.internal,UID:ip-10-0-142-111.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-142-111.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:39.735814407 +0000 UTC m=+0.511278041,LastTimestamp:2026-03-12 13:38:39.967330728 +0000 UTC m=+0.742794367,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}"
Mar 12 13:38:40.117958 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:40.117931 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/0e4e8f3d30bf75c22161da0d94e78eb7-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal\" (UID: \"0e4e8f3d30bf75c22161da0d94e78eb7\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal"
Mar 12 13:38:40.118082 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:40.117965 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0e4e8f3d30bf75c22161da0d94e78eb7-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal\" (UID: \"0e4e8f3d30bf75c22161da0d94e78eb7\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal"
Mar 12 13:38:40.118082 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:40.117997 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/56ea2cd715dd568d3d7e0aab566769bf-config\") pod \"kube-apiserver-proxy-ip-10-0-142-111.ec2.internal\" (UID: \"56ea2cd715dd568d3d7e0aab566769bf\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-142-111.ec2.internal"
Mar 12 13:38:40.118082 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:40.118039 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0e4e8f3d30bf75c22161da0d94e78eb7-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal\" (UID: \"0e4e8f3d30bf75c22161da0d94e78eb7\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal"
Mar 12 13:38:40.118082 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:40.118038 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/0e4e8f3d30bf75c22161da0d94e78eb7-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal\" (UID: \"0e4e8f3d30bf75c22161da0d94e78eb7\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal"
Mar 12 13:38:40.118286 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:40.118038 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/56ea2cd715dd568d3d7e0aab566769bf-config\") pod \"kube-apiserver-proxy-ip-10-0-142-111.ec2.internal\" (UID: \"56ea2cd715dd568d3d7e0aab566769bf\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-142-111.ec2.internal"
Mar 12 13:38:40.120291 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:40.120220 2570 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-111.ec2.internal.189c1b9e3dfaff3d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-111.ec2.internal.189c1b9e3dfaff3d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-111.ec2.internal,UID:ip-10-0-142-111.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-142-111.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:39.735775037 +0000 UTC m=+0.511238678,LastTimestamp:2026-03-12 13:38:40.098842161 +0000 UTC m=+0.874305798,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}"
Mar 12 13:38:40.124848 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:40.124771 2570 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-111.ec2.internal.189c1b9e3dfb5f83\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-111.ec2.internal.189c1b9e3dfb5f83 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-111.ec2.internal,UID:ip-10-0-142-111.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-142-111.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:39.735799683 +0000 UTC m=+0.511263319,LastTimestamp:2026-03-12 13:38:40.098867724 +0000 UTC m=+0.874331358,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}"
Mar 12 13:38:40.131394 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:40.131314 2570 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-111.ec2.internal.189c1b9e3dfb9907\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-111.ec2.internal.189c1b9e3dfb9907 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-111.ec2.internal,UID:ip-10-0-142-111.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-142-111.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:39.735814407 +0000 UTC m=+0.511278041,LastTimestamp:2026-03-12 13:38:40.098883258 +0000 UTC m=+0.874346893,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}"
Mar 12 13:38:40.292070 ip-10-0-142-111
kubenswrapper[2570]: I0312 13:38:40.292034 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal"
Mar 12 13:38:40.295847 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:40.295822 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-142-111.ec2.internal"
Mar 12 13:38:40.335679 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:40.335647 2570 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-142-111.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms"
Mar 12 13:38:40.510288 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:40.510199 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 12 13:38:40.511272 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:40.511251 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientMemory"
Mar 12 13:38:40.511403 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:40.511285 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasNoDiskPressure"
Mar 12 13:38:40.511403 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:40.511295 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientPID"
Mar 12 13:38:40.511403 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:40.511325 2570 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-142-111.ec2.internal"
Mar 12 13:38:40.519012 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:40.518920 2570 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-111.ec2.internal.189c1b9e3dfaff3d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-111.ec2.internal.189c1b9e3dfaff3d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-111.ec2.internal,UID:ip-10-0-142-111.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-142-111.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:39.735775037 +0000 UTC m=+0.511238678,LastTimestamp:2026-03-12 13:38:40.511271085 +0000 UTC m=+1.286734718,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}"
Mar 12 13:38:40.529280 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:40.529243 2570 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-142-111.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-142-111.ec2.internal"
Mar 12 13:38:40.529280 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:40.529189 2570 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-111.ec2.internal.189c1b9e3dfb5f83\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-111.ec2.internal.189c1b9e3dfb5f83 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-111.ec2.internal,UID:ip-10-0-142-111.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-142-111.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:39.735799683 +0000 UTC m=+0.511263319,LastTimestamp:2026-03-12 13:38:40.511289481 +0000 UTC m=+1.286753115,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}"
Mar 12 13:38:40.555172 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:40.555131 2570 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 12 13:38:40.708456 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:40.708418 2570 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-111.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 13:38:40.869304 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:40.869262 2570 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 12 13:38:40.952440 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:40.952389 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56ea2cd715dd568d3d7e0aab566769bf.slice/crio-8f7e9db0b1548454ddbe6700759f05e6ccebda57c06ae44a3ccf74ccd2006c83 WatchSource:0}: Error finding container 8f7e9db0b1548454ddbe6700759f05e6ccebda57c06ae44a3ccf74ccd2006c83: Status 404 returned error can't find the container with id 8f7e9db0b1548454ddbe6700759f05e6ccebda57c06ae44a3ccf74ccd2006c83
Mar 12 13:38:40.953013 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:38:40.952985 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e4e8f3d30bf75c22161da0d94e78eb7.slice/crio-cff3c1f42339c58f3165aa6d33f5acbb57fcca63909d59ab511eb0c3f8f20e7b WatchSource:0}: Error finding container cff3c1f42339c58f3165aa6d33f5acbb57fcca63909d59ab511eb0c3f8f20e7b: Status 404 returned error can't find the container with id cff3c1f42339c58f3165aa6d33f5acbb57fcca63909d59ab511eb0c3f8f20e7b
Mar 12 13:38:40.960577 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:40.960552 2570 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 12 13:38:40.973273 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:40.973186 2570 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal.189c1b9e86ffffdf openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal,UID:0e4e8f3d30bf75c22161da0d94e78eb7,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f7953a8e77c5bb0efaa670ba2188050740eba0e1f6979248832c05cb886fa5d0\",Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:40.960839647 +0000 UTC m=+1.736303271,LastTimestamp:2026-03-12 13:38:40.960839647 +0000 UTC m=+1.736303271,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}"
Mar 12 13:38:40.981075 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:40.980964 2570 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{kube-apiserver-proxy-ip-10-0-142-111.ec2.internal.189c1b9e870118e1 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-proxy-ip-10-0-142-111.ec2.internal,UID:56ea2cd715dd568d3d7e0aab566769bf,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{haproxy},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce5935b86fb6e92713f31e23166028e56f491ef14d756d8deead5c1455e73537\",Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:40.960911585 +0000 UTC m=+1.736375222,LastTimestamp:2026-03-12 13:38:40.960911585 +0000 UTC m=+1.736375222,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}"
Mar 12 13:38:41.047107 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:41.047072 2570 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-142-111.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 12 13:38:41.094654 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:41.094561 2570 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 12 13:38:41.144010 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:41.143973 2570 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-142-111.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="1.6s"
Mar 12 13:38:41.330247 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:41.330214 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 12 13:38:41.331741 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:41.331723 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientMemory"
Mar 12 13:38:41.331805 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:41.331759 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasNoDiskPressure"
Mar 12 13:38:41.331805 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:41.331770 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientPID"
Mar 12 13:38:41.331805 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:41.331799 2570 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-142-111.ec2.internal"
Mar 12 13:38:41.352548 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:41.352474 2570 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-142-111.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-142-111.ec2.internal"
Mar 12 13:38:41.710335 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:41.710239 2570 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-111.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 13:38:41.866976 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:41.866911 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-142-111.ec2.internal" event={"ID":"56ea2cd715dd568d3d7e0aab566769bf","Type":"ContainerStarted","Data":"8f7e9db0b1548454ddbe6700759f05e6ccebda57c06ae44a3ccf74ccd2006c83"}
Mar 12 13:38:41.868399 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:41.868371 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal" event={"ID":"0e4e8f3d30bf75c22161da0d94e78eb7","Type":"ContainerStarted","Data":"cff3c1f42339c58f3165aa6d33f5acbb57fcca63909d59ab511eb0c3f8f20e7b"}
Mar 12 13:38:42.504831 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:42.504784 2570 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 12 13:38:42.714017 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:42.713983 2570 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-111.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 13:38:42.755742 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:42.755609 2570 controller.go:145] "Failed to ensure lease exists, will retry"
err="leases.coordination.k8s.io \"ip-10-0-142-111.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="3.2s" Mar 12 13:38:42.806213 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:42.806169 2570 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 12 13:38:42.924863 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:42.924751 2570 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{kube-apiserver-proxy-ip-10-0-142-111.ec2.internal.189c1b9efb6eb334 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-proxy-ip-10-0-142-111.ec2.internal,UID:56ea2cd715dd568d3d7e0aab566769bf,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{haproxy},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce5935b86fb6e92713f31e23166028e56f491ef14d756d8deead5c1455e73537\" in 1.953s (1.953s including waiting). 
Image size: 488265290 bytes.,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:42.914251572 +0000 UTC m=+3.689715193,LastTimestamp:2026-03-12 13:38:42.914251572 +0000 UTC m=+3.689715193,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}" Mar 12 13:38:42.932150 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:42.932063 2570 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal.189c1b9efb806e87 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal,UID:0e4e8f3d30bf75c22161da0d94e78eb7,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f7953a8e77c5bb0efaa670ba2188050740eba0e1f6979248832c05cb886fa5d0\" in 1.954s (1.954s including waiting). 
Image size: 468358966 bytes.,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:42.915413639 +0000 UTC m=+3.690877284,LastTimestamp:2026-03-12 13:38:42.915413639 +0000 UTC m=+3.690877284,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}" Mar 12 13:38:42.953314 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:42.953285 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:38:42.956237 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:42.955835 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientMemory" Mar 12 13:38:42.956237 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:42.955878 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasNoDiskPressure" Mar 12 13:38:42.956237 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:42.955895 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientPID" Mar 12 13:38:42.956237 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:42.955935 2570 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-142-111.ec2.internal" Mar 12 13:38:42.975664 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:42.975640 2570 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-142-111.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-142-111.ec2.internal" Mar 12 13:38:42.999211 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:42.999126 2570 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{kube-apiserver-proxy-ip-10-0-142-111.ec2.internal.189c1b9effe8afc4 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-proxy-ip-10-0-142-111.ec2.internal,UID:56ea2cd715dd568d3d7e0aab566769bf,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{haproxy},},Reason:Created,Message:Created container: haproxy,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:42.989354948 +0000 UTC m=+3.764818582,LastTimestamp:2026-03-12 13:38:42.989354948 +0000 UTC m=+3.764818582,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}" Mar 12 13:38:43.010653 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:43.010489 2570 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{kube-apiserver-proxy-ip-10-0-142-111.ec2.internal.189c1b9f005342bd kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-proxy-ip-10-0-142-111.ec2.internal,UID:56ea2cd715dd568d3d7e0aab566769bf,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{haproxy},},Reason:Started,Message:Started container haproxy,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:42.996339389 +0000 UTC m=+3.771803024,LastTimestamp:2026-03-12 13:38:42.996339389 +0000 UTC m=+3.771803024,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}" Mar 12 
13:38:43.295908 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:43.295873 2570 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 12 13:38:43.565896 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:43.565799 2570 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal.189c1b9f215e77e1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal,UID:0e4e8f3d30bf75c22161da0d94e78eb7,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:43.550722017 +0000 UTC m=+4.326185657,LastTimestamp:2026-03-12 13:38:43.550722017 +0000 UTC m=+4.326185657,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}" Mar 12 13:38:43.575987 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:43.575902 2570 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal.189c1b9f21dfd35c openshift-machine-config-operator 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal,UID:0e4e8f3d30bf75c22161da0d94e78eb7,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:43.55919958 +0000 UTC m=+4.334663215,LastTimestamp:2026-03-12 13:38:43.55919958 +0000 UTC m=+4.334663215,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}" Mar 12 13:38:43.710808 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:43.710775 2570 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-111.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 13:38:43.872474 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:43.872390 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-142-111.ec2.internal" event={"ID":"56ea2cd715dd568d3d7e0aab566769bf","Type":"ContainerStarted","Data":"61261db34bdb907b1c45b65fc852ec9700ef6c1b0049259fb68ba2ee9b3bacc3"} Mar 12 13:38:43.872474 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:43.872430 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:38:43.873296 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:43.873279 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientMemory" Mar 12 13:38:43.873351 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:43.873306 2570 kubelet_node_status.go:736] "Recording event message for node" 
node="ip-10-0-142-111.ec2.internal" event="NodeHasNoDiskPressure" Mar 12 13:38:43.873351 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:43.873319 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientPID" Mar 12 13:38:43.873506 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:43.873491 2570 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-142-111.ec2.internal\" not found" node="ip-10-0-142-111.ec2.internal" Mar 12 13:38:43.873818 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:43.873797 2570 generic.go:358] "Generic (PLEG): container finished" podID="0e4e8f3d30bf75c22161da0d94e78eb7" containerID="4d3adf48172502ca79b8ec4cedaa5740871d1e0f9272bca78a7bd33f137e4c5f" exitCode=0 Mar 12 13:38:43.873867 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:43.873829 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal" event={"ID":"0e4e8f3d30bf75c22161da0d94e78eb7","Type":"ContainerDied","Data":"4d3adf48172502ca79b8ec4cedaa5740871d1e0f9272bca78a7bd33f137e4c5f"} Mar 12 13:38:43.873902 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:43.873878 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:38:43.874559 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:43.874541 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientMemory" Mar 12 13:38:43.874653 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:43.874570 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasNoDiskPressure" Mar 12 13:38:43.874653 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:43.874582 2570 kubelet_node_status.go:736] "Recording event message for node" 
node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientPID" Mar 12 13:38:43.874767 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:43.874756 2570 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-142-111.ec2.internal\" not found" node="ip-10-0-142-111.ec2.internal" Mar 12 13:38:43.890038 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:43.889954 2570 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal.189c1b9f34d09298 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal,UID:0e4e8f3d30bf75c22161da0d94e78eb7,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f7953a8e77c5bb0efaa670ba2188050740eba0e1f6979248832c05cb886fa5d0\" already present on machine,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:43.876967064 +0000 UTC m=+4.652430706,LastTimestamp:2026-03-12 13:38:43.876967064 +0000 UTC m=+4.652430706,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}" Mar 12 13:38:44.009467 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:44.009374 2570 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" 
event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal.189c1b9f3c18c40f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal,UID:0e4e8f3d30bf75c22161da0d94e78eb7,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:43.999138831 +0000 UTC m=+4.774602465,LastTimestamp:2026-03-12 13:38:43.999138831 +0000 UTC m=+4.774602465,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}" Mar 12 13:38:44.009662 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:44.009509 2570 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-142-111.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 12 13:38:44.021055 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:44.020694 2570 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal.189c1b9f3c97b6ac openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal,UID:0e4e8f3d30bf75c22161da0d94e78eb7,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:44.007458476 +0000 UTC m=+4.782922112,LastTimestamp:2026-03-12 13:38:44.007458476 +0000 UTC m=+4.782922112,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}" Mar 12 13:38:44.709735 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:44.709702 2570 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-111.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 13:38:44.876960 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:44.876930 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal_0e4e8f3d30bf75c22161da0d94e78eb7/kube-rbac-proxy-crio/0.log" Mar 12 13:38:44.877376 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:44.877281 2570 generic.go:358] "Generic (PLEG): container finished" podID="0e4e8f3d30bf75c22161da0d94e78eb7" containerID="0e87bed21d710bc3e356211f90cf7ce77f85efdfe8ca8d9c051bb564f3b2579d" exitCode=1 Mar 12 13:38:44.877376 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:44.877363 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:38:44.877376 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:44.877364 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal" event={"ID":"0e4e8f3d30bf75c22161da0d94e78eb7","Type":"ContainerDied","Data":"0e87bed21d710bc3e356211f90cf7ce77f85efdfe8ca8d9c051bb564f3b2579d"} Mar 12 13:38:44.877491 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:44.877363 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:38:44.879245 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:44.879225 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientMemory" Mar 12 13:38:44.879340 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:44.879262 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientMemory" Mar 12 13:38:44.879340 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:44.879280 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasNoDiskPressure" Mar 12 13:38:44.879340 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:44.879292 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientPID" Mar 12 13:38:44.879340 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:44.879265 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasNoDiskPressure" Mar 12 13:38:44.879481 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:44.879355 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientPID" Mar 12 13:38:44.879534 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:44.879521 2570 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-142-111.ec2.internal\" not found" node="ip-10-0-142-111.ec2.internal" Mar 12 
13:38:44.879579 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:44.879559 2570 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-142-111.ec2.internal\" not found" node="ip-10-0-142-111.ec2.internal" Mar 12 13:38:44.879611 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:44.879600 2570 scope.go:117] "RemoveContainer" containerID="0e87bed21d710bc3e356211f90cf7ce77f85efdfe8ca8d9c051bb564f3b2579d" Mar 12 13:38:44.889997 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:44.889883 2570 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal.189c1b9f34d09298\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal.189c1b9f34d09298 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal,UID:0e4e8f3d30bf75c22161da0d94e78eb7,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f7953a8e77c5bb0efaa670ba2188050740eba0e1f6979248832c05cb886fa5d0\" already present on machine,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:43.876967064 +0000 UTC m=+4.652430706,LastTimestamp:2026-03-12 13:38:44.881488988 +0000 UTC m=+5.656952625,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}" Mar 12 13:38:44.998440 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:44.998356 2570 event.go:359] "Server rejected event (will not retry!)" 
err="events \"kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal.189c1b9f3c18c40f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal.189c1b9f3c18c40f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal,UID:0e4e8f3d30bf75c22161da0d94e78eb7,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:43.999138831 +0000 UTC m=+4.774602465,LastTimestamp:2026-03-12 13:38:44.987839974 +0000 UTC m=+5.763303619,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}" Mar 12 13:38:45.011669 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:45.011559 2570 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal.189c1b9f3c97b6ac\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal.189c1b9f3c97b6ac openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal,UID:0e4e8f3d30bf75c22161da0d94e78eb7,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container 
kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:44.007458476 +0000 UTC m=+4.782922112,LastTimestamp:2026-03-12 13:38:44.996784043 +0000 UTC m=+5.772247664,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}" Mar 12 13:38:45.710981 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:45.710953 2570 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-111.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 13:38:45.879637 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:45.879593 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal_0e4e8f3d30bf75c22161da0d94e78eb7/kube-rbac-proxy-crio/1.log" Mar 12 13:38:45.880049 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:45.880004 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal_0e4e8f3d30bf75c22161da0d94e78eb7/kube-rbac-proxy-crio/0.log" Mar 12 13:38:45.880334 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:45.880312 2570 generic.go:358] "Generic (PLEG): container finished" podID="0e4e8f3d30bf75c22161da0d94e78eb7" containerID="5ccc5f9aa6ffb0b4b771e850d5999ff9c414ed51f808e8bc4656b9499a739299" exitCode=1 Mar 12 13:38:45.880400 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:45.880349 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal" event={"ID":"0e4e8f3d30bf75c22161da0d94e78eb7","Type":"ContainerDied","Data":"5ccc5f9aa6ffb0b4b771e850d5999ff9c414ed51f808e8bc4656b9499a739299"} Mar 12 13:38:45.880400 
ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:45.880382 2570 scope.go:117] "RemoveContainer" containerID="0e87bed21d710bc3e356211f90cf7ce77f85efdfe8ca8d9c051bb564f3b2579d" Mar 12 13:38:45.880643 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:45.880444 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:38:45.881253 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:45.881237 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientMemory" Mar 12 13:38:45.881331 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:45.881269 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasNoDiskPressure" Mar 12 13:38:45.881331 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:45.881282 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientPID" Mar 12 13:38:45.883395 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:45.881863 2570 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-142-111.ec2.internal\" not found" node="ip-10-0-142-111.ec2.internal" Mar 12 13:38:45.883395 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:45.881938 2570 scope.go:117] "RemoveContainer" containerID="5ccc5f9aa6ffb0b4b771e850d5999ff9c414ed51f808e8bc4656b9499a739299" Mar 12 13:38:45.883832 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:45.883800 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal_openshift-machine-config-operator(0e4e8f3d30bf75c22161da0d94e78eb7)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal" 
podUID="0e4e8f3d30bf75c22161da0d94e78eb7" Mar 12 13:38:45.898581 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:45.898490 2570 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal.189c1b9fac6d9294 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal,UID:0e4e8f3d30bf75c22161da0d94e78eb7,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal_openshift-machine-config-operator(0e4e8f3d30bf75c22161da0d94e78eb7),Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:45.883744916 +0000 UTC m=+6.659208554,LastTimestamp:2026-03-12 13:38:45.883744916 +0000 UTC m=+6.659208554,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}" Mar 12 13:38:45.966386 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:45.966298 2570 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-142-111.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="6.4s" Mar 12 13:38:46.176524 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:46.176480 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:38:46.177545 ip-10-0-142-111 kubenswrapper[2570]: I0312 
13:38:46.177530 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientMemory" Mar 12 13:38:46.177612 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:46.177563 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasNoDiskPressure" Mar 12 13:38:46.177612 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:46.177579 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientPID" Mar 12 13:38:46.177721 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:46.177635 2570 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-142-111.ec2.internal" Mar 12 13:38:46.196864 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:46.196833 2570 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-142-111.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-142-111.ec2.internal" Mar 12 13:38:46.710972 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:46.710936 2570 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-111.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 13:38:46.883447 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:46.883427 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal_0e4e8f3d30bf75c22161da0d94e78eb7/kube-rbac-proxy-crio/1.log" Mar 12 13:38:46.883871 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:46.883857 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:38:46.884703 ip-10-0-142-111 
kubenswrapper[2570]: I0312 13:38:46.884685 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientMemory" Mar 12 13:38:46.884789 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:46.884719 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasNoDiskPressure" Mar 12 13:38:46.884789 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:46.884731 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientPID" Mar 12 13:38:46.884981 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:46.884968 2570 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-142-111.ec2.internal\" not found" node="ip-10-0-142-111.ec2.internal" Mar 12 13:38:46.885029 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:46.885020 2570 scope.go:117] "RemoveContainer" containerID="5ccc5f9aa6ffb0b4b771e850d5999ff9c414ed51f808e8bc4656b9499a739299" Mar 12 13:38:46.885161 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:46.885145 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal_openshift-machine-config-operator(0e4e8f3d30bf75c22161da0d94e78eb7)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal" podUID="0e4e8f3d30bf75c22161da0d94e78eb7" Mar 12 13:38:46.894980 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:46.894898 2570 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal.189c1b9fac6d9294\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal.189c1b9fac6d9294 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal,UID:0e4e8f3d30bf75c22161da0d94e78eb7,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal_openshift-machine-config-operator(0e4e8f3d30bf75c22161da0d94e78eb7),Source:EventSource{Component:kubelet,Host:ip-10-0-142-111.ec2.internal,},FirstTimestamp:2026-03-12 13:38:45.883744916 +0000 UTC m=+6.659208554,LastTimestamp:2026-03-12 13:38:46.885117055 +0000 UTC m=+7.660580695,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-111.ec2.internal,}" Mar 12 13:38:47.332568 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:47.332528 2570 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 12 13:38:47.711812 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:47.711730 2570 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-111.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 13:38:47.738453 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:47.738419 2570 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services 
is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 12 13:38:47.826943 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:47.826905 2570 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-142-111.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 12 13:38:48.714364 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:48.714325 2570 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-111.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 13:38:48.721782 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:48.721756 2570 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 12 13:38:49.711454 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:49.711418 2570 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-111.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 13:38:49.778148 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:49.778114 2570 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-142-111.ec2.internal\" not found" Mar 12 13:38:50.712560 ip-10-0-142-111 
kubenswrapper[2570]: I0312 13:38:50.712524 2570 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-111.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 13:38:51.709079 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:51.709042 2570 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-111.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 13:38:52.377404 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:52.377360 2570 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-142-111.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 12 13:38:52.597653 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:52.597601 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:38:52.598586 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:52.598567 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientMemory" Mar 12 13:38:52.598708 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:52.598597 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasNoDiskPressure" Mar 12 13:38:52.598708 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:52.598607 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientPID" Mar 12 13:38:52.598708 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:52.598652 2570 kubelet_node_status.go:78] "Attempting to register 
node" node="ip-10-0-142-111.ec2.internal" Mar 12 13:38:52.618676 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:52.618642 2570 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-142-111.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-142-111.ec2.internal" Mar 12 13:38:52.711582 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:52.711499 2570 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-111.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 13:38:53.711594 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:53.711563 2570 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-111.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 13:38:54.708784 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:54.708746 2570 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-111.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 13:38:55.712800 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:55.712762 2570 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-111.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 13:38:55.848844 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:55.848816 2570 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" 
csr="csr-r4k88" Mar 12 13:38:56.374599 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:56.374563 2570 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Mar 12 13:38:56.456237 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:56.456201 2570 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Mar 12 13:38:56.611213 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:56.611182 2570 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 12 13:38:56.611373 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:56.611357 2570 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Mar 12 13:38:56.611431 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:56.611383 2570 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Mar 12 13:38:56.714595 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:56.714511 2570 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-142-111.ec2.internal" not found Mar 12 13:38:56.730294 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:56.730265 2570 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-142-111.ec2.internal" not found Mar 12 13:38:56.787670 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:56.787646 2570 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-142-111.ec2.internal" not found Mar 12 13:38:56.849784 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:56.849748 2570 
certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2028-03-11 13:33:55 +0000 UTC" deadline="2027-10-12 05:32:02.23949573 +0000 UTC" Mar 12 13:38:56.849784 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:56.849781 2570 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="13887h53m5.389717976s" Mar 12 13:38:57.062800 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:57.062774 2570 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-142-111.ec2.internal" not found Mar 12 13:38:57.062800 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:57.062803 2570 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-10-0-142-111.ec2.internal" not found Mar 12 13:38:57.098157 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:57.098132 2570 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-142-111.ec2.internal" not found Mar 12 13:38:57.113245 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:57.113220 2570 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-142-111.ec2.internal" not found Mar 12 13:38:57.169848 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:57.169818 2570 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-142-111.ec2.internal" not found Mar 12 13:38:57.443522 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:57.443437 2570 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-142-111.ec2.internal" not found Mar 12 13:38:57.443522 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:57.443463 2570 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-10-0-142-111.ec2.internal" not found Mar 12 13:38:57.712194 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:57.712114 2570 
nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-142-111.ec2.internal" not found Mar 12 13:38:57.727338 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:57.727308 2570 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-142-111.ec2.internal" not found Mar 12 13:38:57.787098 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:57.787071 2570 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-142-111.ec2.internal" not found Mar 12 13:38:57.861782 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:57.861747 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:38:57.862728 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:57.862704 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientMemory" Mar 12 13:38:57.862796 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:57.862740 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasNoDiskPressure" Mar 12 13:38:57.862796 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:57.862750 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientPID" Mar 12 13:38:57.862980 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:57.862968 2570 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-142-111.ec2.internal\" not found" node="ip-10-0-142-111.ec2.internal" Mar 12 13:38:57.863030 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:57.863021 2570 scope.go:117] "RemoveContainer" containerID="5ccc5f9aa6ffb0b4b771e850d5999ff9c414ed51f808e8bc4656b9499a739299" Mar 12 13:38:58.056199 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:58.056163 2570 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-142-111.ec2.internal" not found Mar 12 13:38:58.056199 ip-10-0-142-111 
kubenswrapper[2570]: E0312 13:38:58.056193 2570 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-10-0-142-111.ec2.internal" not found Mar 12 13:38:58.903134 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:58.903104 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal_0e4e8f3d30bf75c22161da0d94e78eb7/kube-rbac-proxy-crio/2.log" Mar 12 13:38:58.903545 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:58.903493 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal_0e4e8f3d30bf75c22161da0d94e78eb7/kube-rbac-proxy-crio/1.log" Mar 12 13:38:58.903821 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:58.903792 2570 generic.go:358] "Generic (PLEG): container finished" podID="0e4e8f3d30bf75c22161da0d94e78eb7" containerID="e77f11cc724be8a876826c4f490d73ef5288a80560406784962d9bd0814a9dd4" exitCode=1 Mar 12 13:38:58.903955 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:58.903828 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal" event={"ID":"0e4e8f3d30bf75c22161da0d94e78eb7","Type":"ContainerDied","Data":"e77f11cc724be8a876826c4f490d73ef5288a80560406784962d9bd0814a9dd4"} Mar 12 13:38:58.903955 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:58.903855 2570 scope.go:117] "RemoveContainer" containerID="5ccc5f9aa6ffb0b4b771e850d5999ff9c414ed51f808e8bc4656b9499a739299" Mar 12 13:38:58.904060 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:58.903995 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:38:58.905339 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:58.905027 2570 kubelet_node_status.go:736] "Recording event message for node" 
node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientMemory" Mar 12 13:38:58.905339 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:58.905058 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasNoDiskPressure" Mar 12 13:38:58.905339 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:58.905068 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientPID" Mar 12 13:38:58.905339 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:58.905302 2570 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-142-111.ec2.internal\" not found" node="ip-10-0-142-111.ec2.internal" Mar 12 13:38:58.905523 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:58.905346 2570 scope.go:117] "RemoveContainer" containerID="e77f11cc724be8a876826c4f490d73ef5288a80560406784962d9bd0814a9dd4" Mar 12 13:38:58.905523 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:58.905466 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal_openshift-machine-config-operator(0e4e8f3d30bf75c22161da0d94e78eb7)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal" podUID="0e4e8f3d30bf75c22161da0d94e78eb7" Mar 12 13:38:59.307531 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:59.307501 2570 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Mar 12 13:38:59.384898 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:59.384866 2570 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-10-0-142-111.ec2.internal\" not found" node="ip-10-0-142-111.ec2.internal" Mar 12 
13:38:59.619464 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:59.619394 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:38:59.620409 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:59.620392 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientMemory" Mar 12 13:38:59.620506 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:59.620431 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasNoDiskPressure" Mar 12 13:38:59.620506 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:59.620442 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeHasSufficientPID" Mar 12 13:38:59.620506 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:59.620471 2570 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-142-111.ec2.internal" Mar 12 13:38:59.629328 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:59.629299 2570 kubelet_node_status.go:81] "Successfully registered node" node="ip-10-0-142-111.ec2.internal" Mar 12 13:38:59.629328 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:59.629332 2570 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-142-111.ec2.internal\": node \"ip-10-0-142-111.ec2.internal\" not found" Mar 12 13:38:59.650036 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:59.650002 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-111.ec2.internal\" not found" Mar 12 13:38:59.727004 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:59.726977 2570 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Mar 12 13:38:59.739565 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:59.739539 2570 reflector.go:430] "Caches populated" 
logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Mar 12 13:38:59.750149 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:59.750123 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-111.ec2.internal\" not found" Mar 12 13:38:59.775307 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:59.775284 2570 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-vmdpp" Mar 12 13:38:59.778418 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:59.778402 2570 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-142-111.ec2.internal\" not found" Mar 12 13:38:59.787336 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:59.787309 2570 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-vmdpp" Mar 12 13:38:59.850697 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:59.850664 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-111.ec2.internal\" not found" Mar 12 13:38:59.906143 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:38:59.906070 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal_0e4e8f3d30bf75c22161da0d94e78eb7/kube-rbac-proxy-crio/2.log" Mar 12 13:38:59.950946 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:38:59.950909 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-111.ec2.internal\" not found" Mar 12 13:39:00.051340 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:00.051301 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-111.ec2.internal\" not found" Mar 12 13:39:00.119761 ip-10-0-142-111 kubenswrapper[2570]: 
I0312 13:39:00.119736 2570 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Mar 12 13:39:00.214693 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.214584 2570 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal" Mar 12 13:39:00.245100 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.245075 2570 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 12 13:39:00.245233 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.245221 2570 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-142-111.ec2.internal" Mar 12 13:39:00.268662 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.268636 2570 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 12 13:39:00.699896 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.699860 2570 apiserver.go:52] "Watching apiserver" Mar 12 13:39:00.706030 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.706003 2570 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Mar 12 13:39:00.707377 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.707353 2570 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-image-registry/node-ca-rcndk","openshift-multus/multus-additional-cni-plugins-qbtlm","openshift-network-diagnostics/network-check-target-6hlfq","openshift-ovn-kubernetes/ovnkube-node-h9fnd","kube-system/konnectivity-agent-m8bsl","openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-mz9s4","openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal","openshift-multus/multus-27rkj","openshift-multus/network-metrics-daemon-md2rq","openshift-network-operator/iptables-alerter-w2hbj","kube-system/kube-apiserver-proxy-ip-10-0-142-111.ec2.internal","openshift-cluster-node-tuning-operator/tuned-qztxh","openshift-dns/node-resolver-z2zhm"]
Mar 12 13:39:00.709779 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.709756 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-w2hbj"
Mar 12 13:39:00.711028 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.711006 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-qbtlm"
Mar 12 13:39:00.712200 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.712180 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-6hlfq"
Mar 12 13:39:00.712303 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:00.712257 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-6hlfq" podUID="6ae56213-c71d-4f84-b4f2-b7874b87ad3d"
Mar 12 13:39:00.713481 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.713463 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.713912 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.713894 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\""
Mar 12 13:39:00.714020 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.713894 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Mar 12 13:39:00.714399 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.714379 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\""
Mar 12 13:39:00.714498 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.714424 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-dockercfg-zxwpr\""
Mar 12 13:39:00.714750 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.714716 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-m8bsl"
Mar 12 13:39:00.714887 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.714851 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Mar 12 13:39:00.714977 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.714884 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\""
Mar 12 13:39:00.716201 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.716183 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-mz9s4"
Mar 12 13:39:00.717973 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.717053 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-s9hkx\""
Mar 12 13:39:00.717973 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.717155 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\""
Mar 12 13:39:00.717973 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.717288 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\""
Mar 12 13:39:00.717973 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.717326 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\""
Mar 12 13:39:00.717973 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.717384 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kube-system\"/\"konnectivity-ca-bundle\""
Mar 12 13:39:00.717973 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.717509 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"default-dockercfg-z88dg\""
Mar 12 13:39:00.717973 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.717544 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\""
Mar 12 13:39:00.717973 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.717568 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\""
Mar 12 13:39:00.717973 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.717653 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-8w8rg\""
Mar 12 13:39:00.717973 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.717729 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\""
Mar 12 13:39:00.717973 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.717791 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.717973 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.717980 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\""
Mar 12 13:39:00.718526 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.718105 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\""
Mar 12 13:39:00.718526 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.718203 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"konnectivity-agent\""
Mar 12 13:39:00.718526 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.718292 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\""
Mar 12 13:39:00.719724 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.719706 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-md2rq"
Mar 12 13:39:00.719811 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:00.719763 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-md2rq" podUID="e4b2741d-b458-4ac7-8509-5475bd034c73"
Mar 12 13:39:00.721055 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.721033 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"openshift-service-ca.crt\""
Mar 12 13:39:00.721181 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.721163 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\""
Mar 12 13:39:00.721771 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.721586 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-sa-dockercfg-2jq4m\""
Mar 12 13:39:00.721771 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.721770 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-metrics-serving-cert\""
Mar 12 13:39:00.721951 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.721932 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-rcndk"
Mar 12 13:39:00.725075 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.722280 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-4s45x\""
Mar 12 13:39:00.725075 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.722762 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"kube-root-ca.crt\""
Mar 12 13:39:00.725075 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.723610 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-qztxh"
Mar 12 13:39:00.725302 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.725130 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\""
Mar 12 13:39:00.725302 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.725158 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\""
Mar 12 13:39:00.725407 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.725327 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0f517242-13ab-4998-9d96-faab59766b3b-registration-dir\") pod \"aws-ebs-csi-driver-node-mz9s4\" (UID: \"0f517242-13ab-4998-9d96-faab59766b3b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-mz9s4"
Mar 12 13:39:00.725407 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.725366 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkswq\" (UniqueName: \"kubernetes.io/projected/a778e2cf-6292-41a8-a8e6-44ba43631c82-kube-api-access-rkswq\") pod \"iptables-alerter-w2hbj\" (UID: \"a778e2cf-6292-41a8-a8e6-44ba43631c82\") " pod="openshift-network-operator/iptables-alerter-w2hbj"
Mar 12 13:39:00.725407 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.725398 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7ac5590a-ef07-4cda-8357-78aae27ac5e8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qbtlm\" (UID: \"7ac5590a-ef07-4cda-8357-78aae27ac5e8\") " pod="openshift-multus/multus-additional-cni-plugins-qbtlm"
Mar 12 13:39:00.725531 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.725427 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-node-log\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.725531 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.725452 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\""
Mar 12 13:39:00.725531 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.725491 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-host-run-ovn-kubernetes\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.725697 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.725579 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.725697 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.725613 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb6jj\" (UniqueName: \"kubernetes.io/projected/5acc1851-6633-49b2-88c3-177e3bea26af-kube-api-access-lb6jj\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.725697 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.725653 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"kube-root-ca.crt\""
Mar 12 13:39:00.725697 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.725691 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7ac5590a-ef07-4cda-8357-78aae27ac5e8-system-cni-dir\") pod \"multus-additional-cni-plugins-qbtlm\" (UID: \"7ac5590a-ef07-4cda-8357-78aae27ac5e8\") " pod="openshift-multus/multus-additional-cni-plugins-qbtlm"
Mar 12 13:39:00.725878 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.725695 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"openshift-service-ca.crt\""
Mar 12 13:39:00.725878 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.725655 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-2xpcw\""
Mar 12 13:39:00.725878 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.725764 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0f517242-13ab-4998-9d96-faab59766b3b-socket-dir\") pod \"aws-ebs-csi-driver-node-mz9s4\" (UID: \"0f517242-13ab-4998-9d96-faab59766b3b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-mz9s4"
Mar 12 13:39:00.725878 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.725790 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"tuned-dockercfg-wq45j\""
Mar 12 13:39:00.725878 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.725809 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/a778e2cf-6292-41a8-a8e6-44ba43631c82-iptables-alerter-script\") pod \"iptables-alerter-w2hbj\" (UID: \"a778e2cf-6292-41a8-a8e6-44ba43631c82\") " pod="openshift-network-operator/iptables-alerter-w2hbj"
Mar 12 13:39:00.725878 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.725854 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7ac5590a-ef07-4cda-8357-78aae27ac5e8-cni-binary-copy\") pod \"multus-additional-cni-plugins-qbtlm\" (UID: \"7ac5590a-ef07-4cda-8357-78aae27ac5e8\") " pod="openshift-multus/multus-additional-cni-plugins-qbtlm"
Mar 12 13:39:00.726118 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.725890 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/7ac5590a-ef07-4cda-8357-78aae27ac5e8-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-qbtlm\" (UID: \"7ac5590a-ef07-4cda-8357-78aae27ac5e8\") " pod="openshift-multus/multus-additional-cni-plugins-qbtlm"
Mar 12 13:39:00.726118 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.725918 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf7sx\" (UniqueName: \"kubernetes.io/projected/7ac5590a-ef07-4cda-8357-78aae27ac5e8-kube-api-access-cf7sx\") pod \"multus-additional-cni-plugins-qbtlm\" (UID: \"7ac5590a-ef07-4cda-8357-78aae27ac5e8\") " pod="openshift-multus/multus-additional-cni-plugins-qbtlm"
Mar 12 13:39:00.726118 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.725939 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-run-openvswitch\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.726118 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.725970 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-host-slash\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.726118 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.726010 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-log-socket\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.726118 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.726038 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5acc1851-6633-49b2-88c3-177e3bea26af-ovn-node-metrics-cert\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.726118 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.726068 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a778e2cf-6292-41a8-a8e6-44ba43631c82-host-slash\") pod \"iptables-alerter-w2hbj\" (UID: \"a778e2cf-6292-41a8-a8e6-44ba43631c82\") " pod="openshift-network-operator/iptables-alerter-w2hbj"
Mar 12 13:39:00.726118 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.726091 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0f517242-13ab-4998-9d96-faab59766b3b-kubelet-dir\") pod \"aws-ebs-csi-driver-node-mz9s4\" (UID: \"0f517242-13ab-4998-9d96-faab59766b3b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-mz9s4"
Mar 12 13:39:00.726436 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.726138 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7ac5590a-ef07-4cda-8357-78aae27ac5e8-cnibin\") pod \"multus-additional-cni-plugins-qbtlm\" (UID: \"7ac5590a-ef07-4cda-8357-78aae27ac5e8\") " pod="openshift-multus/multus-additional-cni-plugins-qbtlm"
Mar 12 13:39:00.726436 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.726201 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-host-kubelet\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.726436 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.726239 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-host-cni-bin\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.726436 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.726295 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/b2434af6-7e97-4039-9604-9310288bca08-agent-certs\") pod \"konnectivity-agent-m8bsl\" (UID: \"b2434af6-7e97-4039-9604-9310288bca08\") " pod="kube-system/konnectivity-agent-m8bsl"
Mar 12 13:39:00.726436 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.726336 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-var-lib-openvswitch\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.726436 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.726366 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-z2zhm"
Mar 12 13:39:00.726436 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.726381 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-run-ovn\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.726436 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.726421 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-host-cni-netd\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.726801 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.726445 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5acc1851-6633-49b2-88c3-177e3bea26af-ovnkube-config\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.726801 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.726472 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/b2434af6-7e97-4039-9604-9310288bca08-konnectivity-ca\") pod \"konnectivity-agent-m8bsl\" (UID: \"b2434af6-7e97-4039-9604-9310288bca08\") " pod="kube-system/konnectivity-agent-m8bsl"
Mar 12 13:39:00.726801 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.726525 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/0f517242-13ab-4998-9d96-faab59766b3b-device-dir\") pod \"aws-ebs-csi-driver-node-mz9s4\" (UID: \"0f517242-13ab-4998-9d96-faab59766b3b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-mz9s4"
Mar 12 13:39:00.726801 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.726578 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-etc-openvswitch\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.726801 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.726608 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5acc1851-6633-49b2-88c3-177e3bea26af-ovnkube-script-lib\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.726801 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.726667 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/0f517242-13ab-4998-9d96-faab59766b3b-etc-selinux\") pod \"aws-ebs-csi-driver-node-mz9s4\" (UID: \"0f517242-13ab-4998-9d96-faab59766b3b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-mz9s4"
Mar 12 13:39:00.726801 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.726693 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7ac5590a-ef07-4cda-8357-78aae27ac5e8-os-release\") pod \"multus-additional-cni-plugins-qbtlm\" (UID: \"7ac5590a-ef07-4cda-8357-78aae27ac5e8\") " pod="openshift-multus/multus-additional-cni-plugins-qbtlm"
Mar 12 13:39:00.726801 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.726714 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7ac5590a-ef07-4cda-8357-78aae27ac5e8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qbtlm\" (UID: \"7ac5590a-ef07-4cda-8357-78aae27ac5e8\") " pod="openshift-multus/multus-additional-cni-plugins-qbtlm"
Mar 12 13:39:00.727666 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.726743 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d74kk\" (UniqueName: \"kubernetes.io/projected/6ae56213-c71d-4f84-b4f2-b7874b87ad3d-kube-api-access-d74kk\") pod \"network-check-target-6hlfq\" (UID: \"6ae56213-c71d-4f84-b4f2-b7874b87ad3d\") " pod="openshift-network-diagnostics/network-check-target-6hlfq"
Mar 12 13:39:00.727768 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.727682 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-systemd-units\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.727768 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.727701 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-host-run-netns\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.727768 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.727715 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-run-systemd\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.727916 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.727770 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/0f517242-13ab-4998-9d96-faab59766b3b-sys-fs\") pod \"aws-ebs-csi-driver-node-mz9s4\" (UID: \"0f517242-13ab-4998-9d96-faab59766b3b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-mz9s4"
Mar 12 13:39:00.727916 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.727809 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74hvc\" (UniqueName: \"kubernetes.io/projected/0f517242-13ab-4998-9d96-faab59766b3b-kube-api-access-74hvc\") pod \"aws-ebs-csi-driver-node-mz9s4\" (UID: \"0f517242-13ab-4998-9d96-faab59766b3b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-mz9s4"
Mar 12 13:39:00.727916 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.727833 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5acc1851-6633-49b2-88c3-177e3bea26af-env-overrides\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.728099 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.727948 2570 scope.go:117] "RemoveContainer" containerID="e77f11cc724be8a876826c4f490d73ef5288a80560406784962d9bd0814a9dd4"
Mar 12 13:39:00.728156 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:00.728126 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal_openshift-machine-config-operator(0e4e8f3d30bf75c22161da0d94e78eb7)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal" podUID="0e4e8f3d30bf75c22161da0d94e78eb7"
Mar 12 13:39:00.730610 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.730592 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\""
Mar 12 13:39:00.730721 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.730634 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-lmr5h\""
Mar 12 13:39:00.730775 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.730752 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\""
Mar 12 13:39:00.789375 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.789337 2570 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-03-11 13:33:59 +0000 UTC" deadline="2027-12-07 08:17:14.051639966 +0000 UTC"
Mar 12 13:39:00.789375 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.789369 2570 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="15234h38m13.262273653s"
Mar 12 13:39:00.810602 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.810555 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-proxy-ip-10-0-142-111.ec2.internal" podStartSLOduration=0.810541695 podStartE2EDuration="810.541695ms" podCreationTimestamp="2026-03-12 13:39:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:39:00.810154584 +0000 UTC m=+21.585618226" watchObservedRunningTime="2026-03-12 13:39:00.810541695 +0000 UTC m=+21.586005337"
Mar 12 13:39:00.815849 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.815833 2570 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 12 13:39:00.828473 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.828444 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bcj7\" (UniqueName: \"kubernetes.io/projected/60c99a96-5455-4303-ab66-b21a59d9c105-kube-api-access-5bcj7\") pod \"node-ca-rcndk\" (UID: \"60c99a96-5455-4303-ab66-b21a59d9c105\") " pod="openshift-image-registry/node-ca-rcndk"
Mar 12 13:39:00.828654 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.828479 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-multus-socket-dir-parent\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.828654 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.828499 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-host-var-lib-cni-multus\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.828654 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.828516 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0c00809b-e1dd-43f1-a58f-fc0a53b67729-var-lib-kubelet\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh"
Mar 12 13:39:00.828654 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.828536 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/0f517242-13ab-4998-9d96-faab59766b3b-etc-selinux\") pod \"aws-ebs-csi-driver-node-mz9s4\" (UID: \"0f517242-13ab-4998-9d96-faab59766b3b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-mz9s4"
Mar 12 13:39:00.828654 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.828555 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7ac5590a-ef07-4cda-8357-78aae27ac5e8-os-release\") pod \"multus-additional-cni-plugins-qbtlm\" (UID: \"7ac5590a-ef07-4cda-8357-78aae27ac5e8\") " pod="openshift-multus/multus-additional-cni-plugins-qbtlm"
Mar 12 13:39:00.828654 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.828572 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7ac5590a-ef07-4cda-8357-78aae27ac5e8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qbtlm\" (UID: \"7ac5590a-ef07-4cda-8357-78aae27ac5e8\") " pod="openshift-multus/multus-additional-cni-plugins-qbtlm"
Mar 12 13:39:00.828654 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.828615 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-run-systemd\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.829017 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.828672 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7ac5590a-ef07-4cda-8357-78aae27ac5e8-os-release\") pod \"multus-additional-cni-plugins-qbtlm\" (UID: \"7ac5590a-ef07-4cda-8357-78aae27ac5e8\") " pod="openshift-multus/multus-additional-cni-plugins-qbtlm"
Mar 12 13:39:00.829017 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.828687 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-run-systemd\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.829017 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.828676 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c00809b-e1dd-43f1-a58f-fc0a53b67729-etc-kubernetes\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh"
Mar 12 13:39:00.829017 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.828712 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/0f517242-13ab-4998-9d96-faab59766b3b-etc-selinux\") pod \"aws-ebs-csi-driver-node-mz9s4\" (UID: \"0f517242-13ab-4998-9d96-faab59766b3b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-mz9s4"
Mar 12 13:39:00.829017 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.828722 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-os-release\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.829017 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.828768 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-cni-binary-copy\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.829017 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.828794 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/0c00809b-e1dd-43f1-a58f-fc0a53b67729-etc-sysctl-d\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh"
Mar 12 13:39:00.829017 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.828819 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-cnibin\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.829017 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.828843 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-host-var-lib-cni-bin\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.829017 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.828877 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-host-var-lib-kubelet\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.829017 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.828901 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-etc-kubernetes\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.829017 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.828931 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0f517242-13ab-4998-9d96-faab59766b3b-registration-dir\") pod \"aws-ebs-csi-driver-node-mz9s4\" (UID: \"0f517242-13ab-4998-9d96-faab59766b3b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-mz9s4"
Mar 12 13:39:00.829017 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.828959 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7ac5590a-ef07-4cda-8357-78aae27ac5e8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qbtlm\" (UID: \"7ac5590a-ef07-4cda-8357-78aae27ac5e8\") " pod="openshift-multus/multus-additional-cni-plugins-qbtlm"
Mar 12 13:39:00.829017 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.828986 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-node-log\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.829635 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829032 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-host-run-ovn-kubernetes\") pod
\"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" Mar 12 13:39:00.829635 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829059 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" Mar 12 13:39:00.829635 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829062 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-node-log\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" Mar 12 13:39:00.829635 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829085 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c00809b-e1dd-43f1-a58f-fc0a53b67729-host\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh" Mar 12 13:39:00.829635 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829098 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7ac5590a-ef07-4cda-8357-78aae27ac5e8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qbtlm\" (UID: \"7ac5590a-ef07-4cda-8357-78aae27ac5e8\") " pod="openshift-multus/multus-additional-cni-plugins-qbtlm" Mar 12 13:39:00.829635 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829115 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" Mar 12 13:39:00.829635 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829119 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-host-run-ovn-kubernetes\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" Mar 12 13:39:00.829635 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829137 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkjq2\" (UniqueName: \"kubernetes.io/projected/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-kube-api-access-wkjq2\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj" Mar 12 13:39:00.829635 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829126 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7ac5590a-ef07-4cda-8357-78aae27ac5e8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qbtlm\" (UID: \"7ac5590a-ef07-4cda-8357-78aae27ac5e8\") " pod="openshift-multus/multus-additional-cni-plugins-qbtlm" Mar 12 13:39:00.829635 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829135 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0f517242-13ab-4998-9d96-faab59766b3b-registration-dir\") pod \"aws-ebs-csi-driver-node-mz9s4\" (UID: \"0f517242-13ab-4998-9d96-faab59766b3b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-mz9s4" Mar 12 13:39:00.829635 ip-10-0-142-111 kubenswrapper[2570]: I0312 
13:39:00.829167 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7ac5590a-ef07-4cda-8357-78aae27ac5e8-cni-binary-copy\") pod \"multus-additional-cni-plugins-qbtlm\" (UID: \"7ac5590a-ef07-4cda-8357-78aae27ac5e8\") " pod="openshift-multus/multus-additional-cni-plugins-qbtlm" Mar 12 13:39:00.829635 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829196 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/7ac5590a-ef07-4cda-8357-78aae27ac5e8-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-qbtlm\" (UID: \"7ac5590a-ef07-4cda-8357-78aae27ac5e8\") " pod="openshift-multus/multus-additional-cni-plugins-qbtlm" Mar 12 13:39:00.829635 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829216 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-host-slash\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" Mar 12 13:39:00.829635 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829240 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/0c00809b-e1dd-43f1-a58f-fc0a53b67729-etc-sysconfig\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh" Mar 12 13:39:00.829635 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829269 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c00809b-e1dd-43f1-a58f-fc0a53b67729-lib-modules\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " 
pod="openshift-cluster-node-tuning-operator/tuned-qztxh" Mar 12 13:39:00.829635 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829297 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-host-slash\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" Mar 12 13:39:00.830471 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829321 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-host-run-multus-certs\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj" Mar 12 13:39:00.830471 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829349 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a778e2cf-6292-41a8-a8e6-44ba43631c82-host-slash\") pod \"iptables-alerter-w2hbj\" (UID: \"a778e2cf-6292-41a8-a8e6-44ba43631c82\") " pod="openshift-network-operator/iptables-alerter-w2hbj" Mar 12 13:39:00.830471 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829367 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/b2434af6-7e97-4039-9604-9310288bca08-agent-certs\") pod \"konnectivity-agent-m8bsl\" (UID: \"b2434af6-7e97-4039-9604-9310288bca08\") " pod="kube-system/konnectivity-agent-m8bsl" Mar 12 13:39:00.830471 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829384 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/0c00809b-e1dd-43f1-a58f-fc0a53b67729-etc-systemd\") pod \"tuned-qztxh\" (UID: 
\"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh" Mar 12 13:39:00.830471 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829409 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a778e2cf-6292-41a8-a8e6-44ba43631c82-host-slash\") pod \"iptables-alerter-w2hbj\" (UID: \"a778e2cf-6292-41a8-a8e6-44ba43631c82\") " pod="openshift-network-operator/iptables-alerter-w2hbj" Mar 12 13:39:00.830471 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829449 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0c00809b-e1dd-43f1-a58f-fc0a53b67729-sys\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh" Mar 12 13:39:00.830471 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829548 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-system-cni-dir\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj" Mar 12 13:39:00.830471 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829565 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-host-cni-netd\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" Mar 12 13:39:00.830471 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829591 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e2915633-eccd-4769-8960-a86012fad6da-tmp-dir\") pod 
\"node-resolver-z2zhm\" (UID: \"e2915633-eccd-4769-8960-a86012fad6da\") " pod="openshift-dns/node-resolver-z2zhm" Mar 12 13:39:00.830471 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829613 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-host-cni-netd\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" Mar 12 13:39:00.830471 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829657 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/7ac5590a-ef07-4cda-8357-78aae27ac5e8-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-qbtlm\" (UID: \"7ac5590a-ef07-4cda-8357-78aae27ac5e8\") " pod="openshift-multus/multus-additional-cni-plugins-qbtlm" Mar 12 13:39:00.830471 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829667 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/0c00809b-e1dd-43f1-a58f-fc0a53b67729-etc-modprobe-d\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh" Mar 12 13:39:00.830471 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829693 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0c00809b-e1dd-43f1-a58f-fc0a53b67729-tmp\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh" Mar 12 13:39:00.830471 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829724 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: 
\"kubernetes.io/host-path/0f517242-13ab-4998-9d96-faab59766b3b-device-dir\") pod \"aws-ebs-csi-driver-node-mz9s4\" (UID: \"0f517242-13ab-4998-9d96-faab59766b3b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-mz9s4" Mar 12 13:39:00.830471 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829753 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-etc-openvswitch\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" Mar 12 13:39:00.830471 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829774 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/0f517242-13ab-4998-9d96-faab59766b3b-device-dir\") pod \"aws-ebs-csi-driver-node-mz9s4\" (UID: \"0f517242-13ab-4998-9d96-faab59766b3b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-mz9s4" Mar 12 13:39:00.830471 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829779 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5acc1851-6633-49b2-88c3-177e3bea26af-ovnkube-script-lib\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" Mar 12 13:39:00.831220 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829790 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-etc-openvswitch\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" Mar 12 13:39:00.831220 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829793 2570 swap_util.go:74] "error creating dir to test if tmpfs 
noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 12 13:39:00.831220 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829814 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0c00809b-e1dd-43f1-a58f-fc0a53b67729-run\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh" Mar 12 13:39:00.831220 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829896 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d74kk\" (UniqueName: \"kubernetes.io/projected/6ae56213-c71d-4f84-b4f2-b7874b87ad3d-kube-api-access-d74kk\") pod \"network-check-target-6hlfq\" (UID: \"6ae56213-c71d-4f84-b4f2-b7874b87ad3d\") " pod="openshift-network-diagnostics/network-check-target-6hlfq" Mar 12 13:39:00.831220 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829915 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-systemd-units\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" Mar 12 13:39:00.831220 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829948 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-host-run-netns\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" Mar 12 13:39:00.831220 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829973 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys-fs\" (UniqueName: 
\"kubernetes.io/host-path/0f517242-13ab-4998-9d96-faab59766b3b-sys-fs\") pod \"aws-ebs-csi-driver-node-mz9s4\" (UID: \"0f517242-13ab-4998-9d96-faab59766b3b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-mz9s4" Mar 12 13:39:00.831220 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.829997 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-systemd-units\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" Mar 12 13:39:00.831220 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830007 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-host-run-netns\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" Mar 12 13:39:00.831220 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830022 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-74hvc\" (UniqueName: \"kubernetes.io/projected/0f517242-13ab-4998-9d96-faab59766b3b-kube-api-access-74hvc\") pod \"aws-ebs-csi-driver-node-mz9s4\" (UID: \"0f517242-13ab-4998-9d96-faab59766b3b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-mz9s4" Mar 12 13:39:00.831220 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830048 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5acc1851-6633-49b2-88c3-177e3bea26af-env-overrides\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" Mar 12 13:39:00.831220 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830075 2570 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/0f517242-13ab-4998-9d96-faab59766b3b-sys-fs\") pod \"aws-ebs-csi-driver-node-mz9s4\" (UID: \"0f517242-13ab-4998-9d96-faab59766b3b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-mz9s4" Mar 12 13:39:00.831220 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830082 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-hostroot\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj" Mar 12 13:39:00.831220 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830113 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-multus-daemon-config\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj" Mar 12 13:39:00.831220 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830136 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rkswq\" (UniqueName: \"kubernetes.io/projected/a778e2cf-6292-41a8-a8e6-44ba43631c82-kube-api-access-rkswq\") pod \"iptables-alerter-w2hbj\" (UID: \"a778e2cf-6292-41a8-a8e6-44ba43631c82\") " pod="openshift-network-operator/iptables-alerter-w2hbj" Mar 12 13:39:00.831220 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830162 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lb6jj\" (UniqueName: \"kubernetes.io/projected/5acc1851-6633-49b2-88c3-177e3bea26af-kube-api-access-lb6jj\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" Mar 12 13:39:00.831220 ip-10-0-142-111 kubenswrapper[2570]: I0312 
13:39:00.830188 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr4th\" (UniqueName: \"kubernetes.io/projected/e2915633-eccd-4769-8960-a86012fad6da-kube-api-access-lr4th\") pod \"node-resolver-z2zhm\" (UID: \"e2915633-eccd-4769-8960-a86012fad6da\") " pod="openshift-dns/node-resolver-z2zhm" Mar 12 13:39:00.831775 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830212 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e2915633-eccd-4769-8960-a86012fad6da-hosts-file\") pod \"node-resolver-z2zhm\" (UID: \"e2915633-eccd-4769-8960-a86012fad6da\") " pod="openshift-dns/node-resolver-z2zhm" Mar 12 13:39:00.831775 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830238 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72sbp\" (UniqueName: \"kubernetes.io/projected/0c00809b-e1dd-43f1-a58f-fc0a53b67729-kube-api-access-72sbp\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh" Mar 12 13:39:00.831775 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830263 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-multus-conf-dir\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj" Mar 12 13:39:00.831775 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830288 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7ac5590a-ef07-4cda-8357-78aae27ac5e8-system-cni-dir\") pod \"multus-additional-cni-plugins-qbtlm\" (UID: \"7ac5590a-ef07-4cda-8357-78aae27ac5e8\") " 
pod="openshift-multus/multus-additional-cni-plugins-qbtlm" Mar 12 13:39:00.831775 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830317 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0f517242-13ab-4998-9d96-faab59766b3b-socket-dir\") pod \"aws-ebs-csi-driver-node-mz9s4\" (UID: \"0f517242-13ab-4998-9d96-faab59766b3b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-mz9s4" Mar 12 13:39:00.831775 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830308 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7ac5590a-ef07-4cda-8357-78aae27ac5e8-cni-binary-copy\") pod \"multus-additional-cni-plugins-qbtlm\" (UID: \"7ac5590a-ef07-4cda-8357-78aae27ac5e8\") " pod="openshift-multus/multus-additional-cni-plugins-qbtlm" Mar 12 13:39:00.831775 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830344 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/a778e2cf-6292-41a8-a8e6-44ba43631c82-iptables-alerter-script\") pod \"iptables-alerter-w2hbj\" (UID: \"a778e2cf-6292-41a8-a8e6-44ba43631c82\") " pod="openshift-network-operator/iptables-alerter-w2hbj" Mar 12 13:39:00.831775 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830355 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5acc1851-6633-49b2-88c3-177e3bea26af-ovnkube-script-lib\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" Mar 12 13:39:00.831775 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830371 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cf7sx\" (UniqueName: 
\"kubernetes.io/projected/7ac5590a-ef07-4cda-8357-78aae27ac5e8-kube-api-access-cf7sx\") pod \"multus-additional-cni-plugins-qbtlm\" (UID: \"7ac5590a-ef07-4cda-8357-78aae27ac5e8\") " pod="openshift-multus/multus-additional-cni-plugins-qbtlm" Mar 12 13:39:00.831775 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830442 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-run-openvswitch\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" Mar 12 13:39:00.831775 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830447 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5acc1851-6633-49b2-88c3-177e3bea26af-env-overrides\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" Mar 12 13:39:00.831775 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830488 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-host-run-netns\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj" Mar 12 13:39:00.831775 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830494 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-run-openvswitch\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" Mar 12 13:39:00.831775 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830507 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7ac5590a-ef07-4cda-8357-78aae27ac5e8-system-cni-dir\") pod \"multus-additional-cni-plugins-qbtlm\" (UID: \"7ac5590a-ef07-4cda-8357-78aae27ac5e8\") " pod="openshift-multus/multus-additional-cni-plugins-qbtlm"
Mar 12 13:39:00.831775 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830513 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0f517242-13ab-4998-9d96-faab59766b3b-socket-dir\") pod \"aws-ebs-csi-driver-node-mz9s4\" (UID: \"0f517242-13ab-4998-9d96-faab59766b3b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-mz9s4"
Mar 12 13:39:00.831775 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830559 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-log-socket\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.831775 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830590 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5acc1851-6633-49b2-88c3-177e3bea26af-ovn-node-metrics-cert\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.832229 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830592 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-log-socket\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.832229 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830634 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/0c00809b-e1dd-43f1-a58f-fc0a53b67729-etc-sysctl-conf\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh"
Mar 12 13:39:00.832229 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830661 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/0c00809b-e1dd-43f1-a58f-fc0a53b67729-etc-tuned\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh"
Mar 12 13:39:00.832229 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830677 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-multus-cni-dir\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.832229 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830695 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0f517242-13ab-4998-9d96-faab59766b3b-kubelet-dir\") pod \"aws-ebs-csi-driver-node-mz9s4\" (UID: \"0f517242-13ab-4998-9d96-faab59766b3b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-mz9s4"
Mar 12 13:39:00.832229 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830737 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0f517242-13ab-4998-9d96-faab59766b3b-kubelet-dir\") pod \"aws-ebs-csi-driver-node-mz9s4\" (UID: \"0f517242-13ab-4998-9d96-faab59766b3b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-mz9s4"
Mar 12 13:39:00.832229 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830753 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7ac5590a-ef07-4cda-8357-78aae27ac5e8-cnibin\") pod \"multus-additional-cni-plugins-qbtlm\" (UID: \"7ac5590a-ef07-4cda-8357-78aae27ac5e8\") " pod="openshift-multus/multus-additional-cni-plugins-qbtlm"
Mar 12 13:39:00.832229 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830785 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-host-kubelet\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.832229 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830816 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7ac5590a-ef07-4cda-8357-78aae27ac5e8-cnibin\") pod \"multus-additional-cni-plugins-qbtlm\" (UID: \"7ac5590a-ef07-4cda-8357-78aae27ac5e8\") " pod="openshift-multus/multus-additional-cni-plugins-qbtlm"
Mar 12 13:39:00.832229 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830859 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-host-kubelet\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.832229 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830858 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-host-cni-bin\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.832229 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830896 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e4b2741d-b458-4ac7-8509-5475bd034c73-metrics-certs\") pod \"network-metrics-daemon-md2rq\" (UID: \"e4b2741d-b458-4ac7-8509-5475bd034c73\") " pod="openshift-multus/network-metrics-daemon-md2rq"
Mar 12 13:39:00.832229 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830900 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-host-cni-bin\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.832229 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830916 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/60c99a96-5455-4303-ab66-b21a59d9c105-host\") pod \"node-ca-rcndk\" (UID: \"60c99a96-5455-4303-ab66-b21a59d9c105\") " pod="openshift-image-registry/node-ca-rcndk"
Mar 12 13:39:00.832229 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830904 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/a778e2cf-6292-41a8-a8e6-44ba43631c82-iptables-alerter-script\") pod \"iptables-alerter-w2hbj\" (UID: \"a778e2cf-6292-41a8-a8e6-44ba43631c82\") " pod="openshift-network-operator/iptables-alerter-w2hbj"
Mar 12 13:39:00.832229 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830940 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/60c99a96-5455-4303-ab66-b21a59d9c105-serviceca\") pod \"node-ca-rcndk\" (UID: \"60c99a96-5455-4303-ab66-b21a59d9c105\") " pod="openshift-image-registry/node-ca-rcndk"
Mar 12 13:39:00.832229 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.830988 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-host-run-k8s-cni-cncf-io\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.832710 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.831018 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-var-lib-openvswitch\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.832710 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.831043 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-run-ovn\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.832710 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.831065 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5acc1851-6633-49b2-88c3-177e3bea26af-ovnkube-config\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.832710 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.831076 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-var-lib-openvswitch\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.832710 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.831089 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/b2434af6-7e97-4039-9604-9310288bca08-konnectivity-ca\") pod \"konnectivity-agent-m8bsl\" (UID: \"b2434af6-7e97-4039-9604-9310288bca08\") " pod="kube-system/konnectivity-agent-m8bsl"
Mar 12 13:39:00.832710 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.831120 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqcb2\" (UniqueName: \"kubernetes.io/projected/e4b2741d-b458-4ac7-8509-5475bd034c73-kube-api-access-jqcb2\") pod \"network-metrics-daemon-md2rq\" (UID: \"e4b2741d-b458-4ac7-8509-5475bd034c73\") " pod="openshift-multus/network-metrics-daemon-md2rq"
Mar 12 13:39:00.832710 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.831122 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5acc1851-6633-49b2-88c3-177e3bea26af-run-ovn\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.832710 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.831545 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/b2434af6-7e97-4039-9604-9310288bca08-konnectivity-ca\") pod \"konnectivity-agent-m8bsl\" (UID: \"b2434af6-7e97-4039-9604-9310288bca08\") " pod="kube-system/konnectivity-agent-m8bsl"
Mar 12 13:39:00.832710 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.831759 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5acc1851-6633-49b2-88c3-177e3bea26af-ovnkube-config\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.834130 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.834107 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5acc1851-6633-49b2-88c3-177e3bea26af-ovn-node-metrics-cert\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.834217 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.834181 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/b2434af6-7e97-4039-9604-9310288bca08-agent-certs\") pod \"konnectivity-agent-m8bsl\" (UID: \"b2434af6-7e97-4039-9604-9310288bca08\") " pod="kube-system/konnectivity-agent-m8bsl"
Mar 12 13:39:00.839812 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:00.839789 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 12 13:39:00.839812 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:00.839812 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 12 13:39:00.839993 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:00.839822 2570 projected.go:194] Error preparing data for projected volume kube-api-access-d74kk for pod openshift-network-diagnostics/network-check-target-6hlfq: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 13:39:00.839993 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:00.839900 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ae56213-c71d-4f84-b4f2-b7874b87ad3d-kube-api-access-d74kk podName:6ae56213-c71d-4f84-b4f2-b7874b87ad3d nodeName:}" failed. No retries permitted until 2026-03-12 13:39:01.339865288 +0000 UTC m=+22.115328909 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-d74kk" (UniqueName: "kubernetes.io/projected/6ae56213-c71d-4f84-b4f2-b7874b87ad3d-kube-api-access-d74kk") pod "network-check-target-6hlfq" (UID: "6ae56213-c71d-4f84-b4f2-b7874b87ad3d") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 13:39:00.842736 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.842607 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-74hvc\" (UniqueName: \"kubernetes.io/projected/0f517242-13ab-4998-9d96-faab59766b3b-kube-api-access-74hvc\") pod \"aws-ebs-csi-driver-node-mz9s4\" (UID: \"0f517242-13ab-4998-9d96-faab59766b3b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-mz9s4"
Mar 12 13:39:00.843387 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.843357 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkswq\" (UniqueName: \"kubernetes.io/projected/a778e2cf-6292-41a8-a8e6-44ba43631c82-kube-api-access-rkswq\") pod \"iptables-alerter-w2hbj\" (UID: \"a778e2cf-6292-41a8-a8e6-44ba43631c82\") " pod="openshift-network-operator/iptables-alerter-w2hbj"
Mar 12 13:39:00.843480 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.843450 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf7sx\" (UniqueName: \"kubernetes.io/projected/7ac5590a-ef07-4cda-8357-78aae27ac5e8-kube-api-access-cf7sx\") pod \"multus-additional-cni-plugins-qbtlm\" (UID: \"7ac5590a-ef07-4cda-8357-78aae27ac5e8\") " pod="openshift-multus/multus-additional-cni-plugins-qbtlm"
Mar 12 13:39:00.843698 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.843680 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb6jj\" (UniqueName: \"kubernetes.io/projected/5acc1851-6633-49b2-88c3-177e3bea26af-kube-api-access-lb6jj\") pod \"ovnkube-node-h9fnd\" (UID: \"5acc1851-6633-49b2-88c3-177e3bea26af\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:39:00.931867 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.931824 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/0c00809b-e1dd-43f1-a58f-fc0a53b67729-etc-sysconfig\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh"
Mar 12 13:39:00.931867 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.931868 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c00809b-e1dd-43f1-a58f-fc0a53b67729-lib-modules\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh"
Mar 12 13:39:00.932435 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.931895 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-host-run-multus-certs\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.932435 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.931919 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/0c00809b-e1dd-43f1-a58f-fc0a53b67729-etc-systemd\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh"
Mar 12 13:39:00.932435 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.931939 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0c00809b-e1dd-43f1-a58f-fc0a53b67729-sys\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh"
Mar 12 13:39:00.932435 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.931939 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/0c00809b-e1dd-43f1-a58f-fc0a53b67729-etc-sysconfig\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh"
Mar 12 13:39:00.932435 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.931961 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-system-cni-dir\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.932435 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.931988 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e2915633-eccd-4769-8960-a86012fad6da-tmp-dir\") pod \"node-resolver-z2zhm\" (UID: \"e2915633-eccd-4769-8960-a86012fad6da\") " pod="openshift-dns/node-resolver-z2zhm"
Mar 12 13:39:00.932435 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932014 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/0c00809b-e1dd-43f1-a58f-fc0a53b67729-etc-modprobe-d\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh"
Mar 12 13:39:00.932435 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932040 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0c00809b-e1dd-43f1-a58f-fc0a53b67729-tmp\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh"
Mar 12 13:39:00.932435 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932043 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/0c00809b-e1dd-43f1-a58f-fc0a53b67729-etc-systemd\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh"
Mar 12 13:39:00.932435 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932013 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-host-run-multus-certs\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.932435 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932064 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0c00809b-e1dd-43f1-a58f-fc0a53b67729-run\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh"
Mar 12 13:39:00.932435 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932101 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c00809b-e1dd-43f1-a58f-fc0a53b67729-lib-modules\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh"
Mar 12 13:39:00.932435 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932053 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0c00809b-e1dd-43f1-a58f-fc0a53b67729-sys\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh"
Mar 12 13:39:00.932435 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932116 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0c00809b-e1dd-43f1-a58f-fc0a53b67729-run\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh"
Mar 12 13:39:00.932435 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932056 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-system-cni-dir\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.932435 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932142 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/0c00809b-e1dd-43f1-a58f-fc0a53b67729-etc-modprobe-d\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh"
Mar 12 13:39:00.932435 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932106 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-hostroot\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.932435 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932194 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-hostroot\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.933245 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932199 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-multus-daemon-config\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.933245 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932236 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lr4th\" (UniqueName: \"kubernetes.io/projected/e2915633-eccd-4769-8960-a86012fad6da-kube-api-access-lr4th\") pod \"node-resolver-z2zhm\" (UID: \"e2915633-eccd-4769-8960-a86012fad6da\") " pod="openshift-dns/node-resolver-z2zhm"
Mar 12 13:39:00.933245 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932260 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e2915633-eccd-4769-8960-a86012fad6da-hosts-file\") pod \"node-resolver-z2zhm\" (UID: \"e2915633-eccd-4769-8960-a86012fad6da\") " pod="openshift-dns/node-resolver-z2zhm"
Mar 12 13:39:00.933245 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932299 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-72sbp\" (UniqueName: \"kubernetes.io/projected/0c00809b-e1dd-43f1-a58f-fc0a53b67729-kube-api-access-72sbp\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh"
Mar 12 13:39:00.933245 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932322 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-multus-conf-dir\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.933245 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932361 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-host-run-netns\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.933245 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932382 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e2915633-eccd-4769-8960-a86012fad6da-hosts-file\") pod \"node-resolver-z2zhm\" (UID: \"e2915633-eccd-4769-8960-a86012fad6da\") " pod="openshift-dns/node-resolver-z2zhm"
Mar 12 13:39:00.933245 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932410 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-multus-conf-dir\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.933245 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932433 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-host-run-netns\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.933245 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932417 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/0c00809b-e1dd-43f1-a58f-fc0a53b67729-etc-sysctl-conf\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh"
Mar 12 13:39:00.933245 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932477 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/0c00809b-e1dd-43f1-a58f-fc0a53b67729-etc-tuned\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh"
Mar 12 13:39:00.933245 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932502 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-multus-cni-dir\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.933245 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932524 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e4b2741d-b458-4ac7-8509-5475bd034c73-metrics-certs\") pod \"network-metrics-daemon-md2rq\" (UID: \"e4b2741d-b458-4ac7-8509-5475bd034c73\") " pod="openshift-multus/network-metrics-daemon-md2rq"
Mar 12 13:39:00.933245 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932541 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/60c99a96-5455-4303-ab66-b21a59d9c105-host\") pod \"node-ca-rcndk\" (UID: \"60c99a96-5455-4303-ab66-b21a59d9c105\") " pod="openshift-image-registry/node-ca-rcndk"
Mar 12 13:39:00.933245 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932542 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/0c00809b-e1dd-43f1-a58f-fc0a53b67729-etc-sysctl-conf\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh"
Mar 12 13:39:00.933245 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932565 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/60c99a96-5455-4303-ab66-b21a59d9c105-serviceca\") pod \"node-ca-rcndk\" (UID: \"60c99a96-5455-4303-ab66-b21a59d9c105\") " pod="openshift-image-registry/node-ca-rcndk"
Mar 12 13:39:00.933245 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932594 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/60c99a96-5455-4303-ab66-b21a59d9c105-host\") pod \"node-ca-rcndk\" (UID: \"60c99a96-5455-4303-ab66-b21a59d9c105\") " pod="openshift-image-registry/node-ca-rcndk"
Mar 12 13:39:00.933245 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:00.932614 2570 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 12 13:39:00.933245 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932671 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-multus-cni-dir\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.934004 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:00.932724 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4b2741d-b458-4ac7-8509-5475bd034c73-metrics-certs podName:e4b2741d-b458-4ac7-8509-5475bd034c73 nodeName:}" failed. No retries permitted until 2026-03-12 13:39:01.432704864 +0000 UTC m=+22.208168504 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e4b2741d-b458-4ac7-8509-5475bd034c73-metrics-certs") pod "network-metrics-daemon-md2rq" (UID: "e4b2741d-b458-4ac7-8509-5475bd034c73") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 12 13:39:00.934004 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932748 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-host-run-k8s-cni-cncf-io\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.934004 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932787 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jqcb2\" (UniqueName: \"kubernetes.io/projected/e4b2741d-b458-4ac7-8509-5475bd034c73-kube-api-access-jqcb2\") pod \"network-metrics-daemon-md2rq\" (UID: \"e4b2741d-b458-4ac7-8509-5475bd034c73\") " pod="openshift-multus/network-metrics-daemon-md2rq"
Mar 12 13:39:00.934004 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932814 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5bcj7\" (UniqueName: \"kubernetes.io/projected/60c99a96-5455-4303-ab66-b21a59d9c105-kube-api-access-5bcj7\") pod \"node-ca-rcndk\" (UID: \"60c99a96-5455-4303-ab66-b21a59d9c105\") " pod="openshift-image-registry/node-ca-rcndk"
Mar 12 13:39:00.934004 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932836 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-multus-socket-dir-parent\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.934004 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932908 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-host-var-lib-cni-multus\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.934004 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.932979 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0c00809b-e1dd-43f1-a58f-fc0a53b67729-var-lib-kubelet\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh"
Mar 12 13:39:00.934004 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.933007 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e2915633-eccd-4769-8960-a86012fad6da-tmp-dir\") pod \"node-resolver-z2zhm\" (UID: \"e2915633-eccd-4769-8960-a86012fad6da\") " pod="openshift-dns/node-resolver-z2zhm"
Mar 12 13:39:00.934004 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.933019 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c00809b-e1dd-43f1-a58f-fc0a53b67729-etc-kubernetes\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh"
Mar 12 13:39:00.934004 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.933048 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-os-release\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.934004 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.933091 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-cni-binary-copy\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.934004 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.933112 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-os-release\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.934004 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.933125 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/0c00809b-e1dd-43f1-a58f-fc0a53b67729-etc-sysctl-d\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh"
Mar 12 13:39:00.934004 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.933234 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/0c00809b-e1dd-43f1-a58f-fc0a53b67729-etc-sysctl-d\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh"
Mar 12 13:39:00.934004 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.933253 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-host-var-lib-cni-multus\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.934004 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.933293 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0c00809b-e1dd-43f1-a58f-fc0a53b67729-var-lib-kubelet\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh"
Mar 12 13:39:00.934004 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.933301 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-multus-socket-dir-parent\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.934511 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.933162 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-host-run-k8s-cni-cncf-io\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.934511 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.933337 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-cnibin\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.934511 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.933347 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c00809b-e1dd-43f1-a58f-fc0a53b67729-etc-kubernetes\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh"
Mar 12 13:39:00.934511 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.933365 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-host-var-lib-cni-bin\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.934511 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.933381 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-cnibin\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.934511 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.933400 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-host-var-lib-kubelet\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.934511 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.933432 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-etc-kubernetes\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.934511 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.933432 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-host-var-lib-cni-bin\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj"
Mar 12 13:39:00.934511 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.933472 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c00809b-e1dd-43f1-a58f-fc0a53b67729-host\") pod
\"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh" Mar 12 13:39:00.934511 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.933489 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-host-var-lib-kubelet\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj" Mar 12 13:39:00.934511 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.933499 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wkjq2\" (UniqueName: \"kubernetes.io/projected/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-kube-api-access-wkjq2\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj" Mar 12 13:39:00.934511 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.933531 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-etc-kubernetes\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj" Mar 12 13:39:00.934511 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.933546 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c00809b-e1dd-43f1-a58f-fc0a53b67729-host\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh" Mar 12 13:39:00.934511 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.933798 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/60c99a96-5455-4303-ab66-b21a59d9c105-serviceca\") pod \"node-ca-rcndk\" (UID: \"60c99a96-5455-4303-ab66-b21a59d9c105\") " 
pod="openshift-image-registry/node-ca-rcndk" Mar 12 13:39:00.934511 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.934011 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-multus-daemon-config\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj" Mar 12 13:39:00.934511 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.934334 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-cni-binary-copy\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj" Mar 12 13:39:00.935083 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.934868 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/0c00809b-e1dd-43f1-a58f-fc0a53b67729-etc-tuned\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh" Mar 12 13:39:00.935083 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.934928 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0c00809b-e1dd-43f1-a58f-fc0a53b67729-tmp\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh" Mar 12 13:39:00.943173 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.943147 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkjq2\" (UniqueName: \"kubernetes.io/projected/9f2e052b-174e-48b3-b2f3-0ccb4fde2d95-kube-api-access-wkjq2\") pod \"multus-27rkj\" (UID: \"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95\") " pod="openshift-multus/multus-27rkj" Mar 12 13:39:00.948360 ip-10-0-142-111 
kubenswrapper[2570]: I0312 13:39:00.948321 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bcj7\" (UniqueName: \"kubernetes.io/projected/60c99a96-5455-4303-ab66-b21a59d9c105-kube-api-access-5bcj7\") pod \"node-ca-rcndk\" (UID: \"60c99a96-5455-4303-ab66-b21a59d9c105\") " pod="openshift-image-registry/node-ca-rcndk" Mar 12 13:39:00.949292 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.949271 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr4th\" (UniqueName: \"kubernetes.io/projected/e2915633-eccd-4769-8960-a86012fad6da-kube-api-access-lr4th\") pod \"node-resolver-z2zhm\" (UID: \"e2915633-eccd-4769-8960-a86012fad6da\") " pod="openshift-dns/node-resolver-z2zhm" Mar 12 13:39:00.949417 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.949399 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-72sbp\" (UniqueName: \"kubernetes.io/projected/0c00809b-e1dd-43f1-a58f-fc0a53b67729-kube-api-access-72sbp\") pod \"tuned-qztxh\" (UID: \"0c00809b-e1dd-43f1-a58f-fc0a53b67729\") " pod="openshift-cluster-node-tuning-operator/tuned-qztxh" Mar 12 13:39:00.953962 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:00.953919 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqcb2\" (UniqueName: \"kubernetes.io/projected/e4b2741d-b458-4ac7-8509-5475bd034c73-kube-api-access-jqcb2\") pod \"network-metrics-daemon-md2rq\" (UID: \"e4b2741d-b458-4ac7-8509-5475bd034c73\") " pod="openshift-multus/network-metrics-daemon-md2rq" Mar 12 13:39:01.020929 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:01.020886 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-w2hbj" Mar 12 13:39:01.027919 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:39:01.027893 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda778e2cf_6292_41a8_a8e6_44ba43631c82.slice/crio-844679b8e098c5505bc64d44bdacbaa2c9fe7545cbe3dea22a64fbd15e14207b WatchSource:0}: Error finding container 844679b8e098c5505bc64d44bdacbaa2c9fe7545cbe3dea22a64fbd15e14207b: Status 404 returned error can't find the container with id 844679b8e098c5505bc64d44bdacbaa2c9fe7545cbe3dea22a64fbd15e14207b Mar 12 13:39:01.031003 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:01.030984 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-qbtlm" Mar 12 13:39:01.037802 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:01.037771 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" Mar 12 13:39:01.038082 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:39:01.038043 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ac5590a_ef07_4cda_8357_78aae27ac5e8.slice/crio-40f7bf014d5cae7646ffbf09a61e447c125f9d991078a7be5ff2ad27551efc18 WatchSource:0}: Error finding container 40f7bf014d5cae7646ffbf09a61e447c125f9d991078a7be5ff2ad27551efc18: Status 404 returned error can't find the container with id 40f7bf014d5cae7646ffbf09a61e447c125f9d991078a7be5ff2ad27551efc18 Mar 12 13:39:01.041590 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:01.041567 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-m8bsl" Mar 12 13:39:01.046159 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:01.046123 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-mz9s4" Mar 12 13:39:01.046422 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:39:01.046211 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5acc1851_6633_49b2_88c3_177e3bea26af.slice/crio-7c3f43a41f3931496b73d4f98bdd92a789930060c8a87f17004861382474ad20 WatchSource:0}: Error finding container 7c3f43a41f3931496b73d4f98bdd92a789930060c8a87f17004861382474ad20: Status 404 returned error can't find the container with id 7c3f43a41f3931496b73d4f98bdd92a789930060c8a87f17004861382474ad20 Mar 12 13:39:01.048678 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:39:01.048654 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2434af6_7e97_4039_9604_9310288bca08.slice/crio-81f2c2ab2e073067cc01dd26a75a047909a2a7d55d2803f52ab9b980ba695e54 WatchSource:0}: Error finding container 81f2c2ab2e073067cc01dd26a75a047909a2a7d55d2803f52ab9b980ba695e54: Status 404 returned error can't find the container with id 81f2c2ab2e073067cc01dd26a75a047909a2a7d55d2803f52ab9b980ba695e54 Mar 12 13:39:01.052929 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:01.052911 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-27rkj" Mar 12 13:39:01.053784 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:39:01.053704 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f517242_13ab_4998_9d96_faab59766b3b.slice/crio-af898a4db63f531812c852d3550e05905cc46c538486798cc5e3816930687744 WatchSource:0}: Error finding container af898a4db63f531812c852d3550e05905cc46c538486798cc5e3816930687744: Status 404 returned error can't find the container with id af898a4db63f531812c852d3550e05905cc46c538486798cc5e3816930687744 Mar 12 13:39:01.057868 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:01.057839 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-rcndk" Mar 12 13:39:01.059808 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:39:01.059787 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f2e052b_174e_48b3_b2f3_0ccb4fde2d95.slice/crio-a15a3b0f5915a0cf6c87031f667e18ec504ea4c3e71c90844a413c9d355748a3 WatchSource:0}: Error finding container a15a3b0f5915a0cf6c87031f667e18ec504ea4c3e71c90844a413c9d355748a3: Status 404 returned error can't find the container with id a15a3b0f5915a0cf6c87031f667e18ec504ea4c3e71c90844a413c9d355748a3 Mar 12 13:39:01.063738 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:01.063714 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-qztxh" Mar 12 13:39:01.066048 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:39:01.066020 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60c99a96_5455_4303_ab66_b21a59d9c105.slice/crio-621c58250430d0e61988df1f3ffe40807f5a6eaf09dd0d4a3e8991c96913bb91 WatchSource:0}: Error finding container 621c58250430d0e61988df1f3ffe40807f5a6eaf09dd0d4a3e8991c96913bb91: Status 404 returned error can't find the container with id 621c58250430d0e61988df1f3ffe40807f5a6eaf09dd0d4a3e8991c96913bb91 Mar 12 13:39:01.068407 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:01.068387 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-z2zhm" Mar 12 13:39:01.072773 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:39:01.072746 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c00809b_e1dd_43f1_a58f_fc0a53b67729.slice/crio-77229523c6ede57e692fec0b7809830468c0914b1edec5d3f32c0ef09e49f132 WatchSource:0}: Error finding container 77229523c6ede57e692fec0b7809830468c0914b1edec5d3f32c0ef09e49f132: Status 404 returned error can't find the container with id 77229523c6ede57e692fec0b7809830468c0914b1edec5d3f32c0ef09e49f132 Mar 12 13:39:01.077114 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:39:01.077089 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2915633_eccd_4769_8960_a86012fad6da.slice/crio-6968acfd1ae41251ac31f97e4c1d90f44f81d012a74feef94851e6838d6558fa WatchSource:0}: Error finding container 6968acfd1ae41251ac31f97e4c1d90f44f81d012a74feef94851e6838d6558fa: Status 404 returned error can't find the container with id 6968acfd1ae41251ac31f97e4c1d90f44f81d012a74feef94851e6838d6558fa Mar 12 13:39:01.437897 ip-10-0-142-111 
kubenswrapper[2570]: I0312 13:39:01.437858 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d74kk\" (UniqueName: \"kubernetes.io/projected/6ae56213-c71d-4f84-b4f2-b7874b87ad3d-kube-api-access-d74kk\") pod \"network-check-target-6hlfq\" (UID: \"6ae56213-c71d-4f84-b4f2-b7874b87ad3d\") " pod="openshift-network-diagnostics/network-check-target-6hlfq" Mar 12 13:39:01.438061 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:01.437928 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e4b2741d-b458-4ac7-8509-5475bd034c73-metrics-certs\") pod \"network-metrics-daemon-md2rq\" (UID: \"e4b2741d-b458-4ac7-8509-5475bd034c73\") " pod="openshift-multus/network-metrics-daemon-md2rq" Mar 12 13:39:01.438061 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:01.438017 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 13:39:01.438061 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:01.438034 2570 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 13:39:01.438061 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:01.438039 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 13:39:01.438061 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:01.438060 2570 projected.go:194] Error preparing data for projected volume kube-api-access-d74kk for pod openshift-network-diagnostics/network-check-target-6hlfq: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 13:39:01.438219 ip-10-0-142-111 
kubenswrapper[2570]: E0312 13:39:01.438102 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4b2741d-b458-4ac7-8509-5475bd034c73-metrics-certs podName:e4b2741d-b458-4ac7-8509-5475bd034c73 nodeName:}" failed. No retries permitted until 2026-03-12 13:39:02.43808701 +0000 UTC m=+23.213550635 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e4b2741d-b458-4ac7-8509-5475bd034c73-metrics-certs") pod "network-metrics-daemon-md2rq" (UID: "e4b2741d-b458-4ac7-8509-5475bd034c73") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 13:39:01.438219 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:01.438116 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ae56213-c71d-4f84-b4f2-b7874b87ad3d-kube-api-access-d74kk podName:6ae56213-c71d-4f84-b4f2-b7874b87ad3d nodeName:}" failed. No retries permitted until 2026-03-12 13:39:02.438109212 +0000 UTC m=+23.213572837 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d74kk" (UniqueName: "kubernetes.io/projected/6ae56213-c71d-4f84-b4f2-b7874b87ad3d-kube-api-access-d74kk") pod "network-check-target-6hlfq" (UID: "6ae56213-c71d-4f84-b4f2-b7874b87ad3d") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 13:39:01.789672 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:01.789561 2570 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-03-11 13:33:59 +0000 UTC" deadline="2027-12-29 07:07:54.90670495 +0000 UTC" Mar 12 13:39:01.789672 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:01.789603 2570 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="15761h28m53.11710505s" Mar 12 13:39:01.861815 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:01.861777 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-6hlfq" Mar 12 13:39:01.861996 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:01.861910 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-6hlfq" podUID="6ae56213-c71d-4f84-b4f2-b7874b87ad3d" Mar 12 13:39:01.863962 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:01.863742 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-md2rq" Mar 12 13:39:01.863962 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:01.863890 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-md2rq" podUID="e4b2741d-b458-4ac7-8509-5475bd034c73" Mar 12 13:39:01.916934 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:01.912848 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qbtlm" event={"ID":"7ac5590a-ef07-4cda-8357-78aae27ac5e8","Type":"ContainerStarted","Data":"40f7bf014d5cae7646ffbf09a61e447c125f9d991078a7be5ff2ad27551efc18"} Mar 12 13:39:01.916934 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:01.915277 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-w2hbj" event={"ID":"a778e2cf-6292-41a8-a8e6-44ba43631c82","Type":"ContainerStarted","Data":"844679b8e098c5505bc64d44bdacbaa2c9fe7545cbe3dea22a64fbd15e14207b"} Mar 12 13:39:01.920097 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:01.917692 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-z2zhm" event={"ID":"e2915633-eccd-4769-8960-a86012fad6da","Type":"ContainerStarted","Data":"6968acfd1ae41251ac31f97e4c1d90f44f81d012a74feef94851e6838d6558fa"} Mar 12 13:39:01.920097 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:01.920049 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-qztxh" event={"ID":"0c00809b-e1dd-43f1-a58f-fc0a53b67729","Type":"ContainerStarted","Data":"77229523c6ede57e692fec0b7809830468c0914b1edec5d3f32c0ef09e49f132"} Mar 12 13:39:01.924052 ip-10-0-142-111 kubenswrapper[2570]: I0312 
13:39:01.924015 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-mz9s4" event={"ID":"0f517242-13ab-4998-9d96-faab59766b3b","Type":"ContainerStarted","Data":"af898a4db63f531812c852d3550e05905cc46c538486798cc5e3816930687744"} Mar 12 13:39:01.928790 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:01.928751 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-rcndk" event={"ID":"60c99a96-5455-4303-ab66-b21a59d9c105","Type":"ContainerStarted","Data":"621c58250430d0e61988df1f3ffe40807f5a6eaf09dd0d4a3e8991c96913bb91"} Mar 12 13:39:01.941464 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:01.941423 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-27rkj" event={"ID":"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95","Type":"ContainerStarted","Data":"a15a3b0f5915a0cf6c87031f667e18ec504ea4c3e71c90844a413c9d355748a3"} Mar 12 13:39:01.945593 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:01.945554 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-m8bsl" event={"ID":"b2434af6-7e97-4039-9604-9310288bca08","Type":"ContainerStarted","Data":"81f2c2ab2e073067cc01dd26a75a047909a2a7d55d2803f52ab9b980ba695e54"} Mar 12 13:39:01.956000 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:01.955963 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" event={"ID":"5acc1851-6633-49b2-88c3-177e3bea26af","Type":"ContainerStarted","Data":"7c3f43a41f3931496b73d4f98bdd92a789930060c8a87f17004861382474ad20"} Mar 12 13:39:02.448462 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:02.447903 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e4b2741d-b458-4ac7-8509-5475bd034c73-metrics-certs\") pod \"network-metrics-daemon-md2rq\" (UID: \"e4b2741d-b458-4ac7-8509-5475bd034c73\") " 
pod="openshift-multus/network-metrics-daemon-md2rq" Mar 12 13:39:02.448462 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:02.447990 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d74kk\" (UniqueName: \"kubernetes.io/projected/6ae56213-c71d-4f84-b4f2-b7874b87ad3d-kube-api-access-d74kk\") pod \"network-check-target-6hlfq\" (UID: \"6ae56213-c71d-4f84-b4f2-b7874b87ad3d\") " pod="openshift-network-diagnostics/network-check-target-6hlfq" Mar 12 13:39:02.448462 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:02.448070 2570 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 13:39:02.448462 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:02.448125 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 13:39:02.448462 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:02.448144 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 13:39:02.448462 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:02.448159 2570 projected.go:194] Error preparing data for projected volume kube-api-access-d74kk for pod openshift-network-diagnostics/network-check-target-6hlfq: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 13:39:02.448462 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:02.448144 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4b2741d-b458-4ac7-8509-5475bd034c73-metrics-certs podName:e4b2741d-b458-4ac7-8509-5475bd034c73 nodeName:}" failed. 
No retries permitted until 2026-03-12 13:39:04.448124591 +0000 UTC m=+25.223588217 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e4b2741d-b458-4ac7-8509-5475bd034c73-metrics-certs") pod "network-metrics-daemon-md2rq" (UID: "e4b2741d-b458-4ac7-8509-5475bd034c73") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 13:39:02.448462 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:02.448225 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ae56213-c71d-4f84-b4f2-b7874b87ad3d-kube-api-access-d74kk podName:6ae56213-c71d-4f84-b4f2-b7874b87ad3d nodeName:}" failed. No retries permitted until 2026-03-12 13:39:04.448208483 +0000 UTC m=+25.223672117 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-d74kk" (UniqueName: "kubernetes.io/projected/6ae56213-c71d-4f84-b4f2-b7874b87ad3d-kube-api-access-d74kk") pod "network-check-target-6hlfq" (UID: "6ae56213-c71d-4f84-b4f2-b7874b87ad3d") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 13:39:03.861137 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:03.861101 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-md2rq" Mar 12 13:39:03.861605 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:03.861145 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-6hlfq" Mar 12 13:39:03.861605 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:03.861256 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-md2rq" podUID="e4b2741d-b458-4ac7-8509-5475bd034c73"
Mar 12 13:39:03.861605 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:03.861406 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-6hlfq" podUID="6ae56213-c71d-4f84-b4f2-b7874b87ad3d"
Mar 12 13:39:04.465758 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:04.465717 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d74kk\" (UniqueName: \"kubernetes.io/projected/6ae56213-c71d-4f84-b4f2-b7874b87ad3d-kube-api-access-d74kk\") pod \"network-check-target-6hlfq\" (UID: \"6ae56213-c71d-4f84-b4f2-b7874b87ad3d\") " pod="openshift-network-diagnostics/network-check-target-6hlfq"
Mar 12 13:39:04.465955 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:04.465790 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e4b2741d-b458-4ac7-8509-5475bd034c73-metrics-certs\") pod \"network-metrics-daemon-md2rq\" (UID: \"e4b2741d-b458-4ac7-8509-5475bd034c73\") " pod="openshift-multus/network-metrics-daemon-md2rq"
Mar 12 13:39:04.465955 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:04.465920 2570 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 12 13:39:04.465955 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:04.465930 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 12 13:39:04.465955 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:04.465950 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 12 13:39:04.465955 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:04.465960 2570 projected.go:194] Error preparing data for projected volume kube-api-access-d74kk for pod openshift-network-diagnostics/network-check-target-6hlfq: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 13:39:04.466205 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:04.465984 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4b2741d-b458-4ac7-8509-5475bd034c73-metrics-certs podName:e4b2741d-b458-4ac7-8509-5475bd034c73 nodeName:}" failed. No retries permitted until 2026-03-12 13:39:08.465964779 +0000 UTC m=+29.241428411 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e4b2741d-b458-4ac7-8509-5475bd034c73-metrics-certs") pod "network-metrics-daemon-md2rq" (UID: "e4b2741d-b458-4ac7-8509-5475bd034c73") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 12 13:39:04.466205 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:04.466003 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ae56213-c71d-4f84-b4f2-b7874b87ad3d-kube-api-access-d74kk podName:6ae56213-c71d-4f84-b4f2-b7874b87ad3d nodeName:}" failed. No retries permitted until 2026-03-12 13:39:08.465991035 +0000 UTC m=+29.241454660 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-d74kk" (UniqueName: "kubernetes.io/projected/6ae56213-c71d-4f84-b4f2-b7874b87ad3d-kube-api-access-d74kk") pod "network-check-target-6hlfq" (UID: "6ae56213-c71d-4f84-b4f2-b7874b87ad3d") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 13:39:05.861813 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:05.861770 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-6hlfq"
Mar 12 13:39:05.862373 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:05.861900 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-6hlfq" podUID="6ae56213-c71d-4f84-b4f2-b7874b87ad3d"
Mar 12 13:39:05.862373 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:05.862348 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-md2rq"
Mar 12 13:39:05.862498 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:05.862448 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-md2rq" podUID="e4b2741d-b458-4ac7-8509-5475bd034c73"
Mar 12 13:39:07.861636 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:07.861583 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-6hlfq"
Mar 12 13:39:07.862076 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:07.861720 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-6hlfq" podUID="6ae56213-c71d-4f84-b4f2-b7874b87ad3d"
Mar 12 13:39:07.864099 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:07.863936 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-md2rq"
Mar 12 13:39:07.864099 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:07.864052 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-md2rq" podUID="e4b2741d-b458-4ac7-8509-5475bd034c73"
Mar 12 13:39:08.501878 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:08.501845 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d74kk\" (UniqueName: \"kubernetes.io/projected/6ae56213-c71d-4f84-b4f2-b7874b87ad3d-kube-api-access-d74kk\") pod \"network-check-target-6hlfq\" (UID: \"6ae56213-c71d-4f84-b4f2-b7874b87ad3d\") " pod="openshift-network-diagnostics/network-check-target-6hlfq"
Mar 12 13:39:08.502132 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:08.501906 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e4b2741d-b458-4ac7-8509-5475bd034c73-metrics-certs\") pod \"network-metrics-daemon-md2rq\" (UID: \"e4b2741d-b458-4ac7-8509-5475bd034c73\") " pod="openshift-multus/network-metrics-daemon-md2rq"
Mar 12 13:39:08.502132 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:08.502036 2570 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 12 13:39:08.502132 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:08.502105 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4b2741d-b458-4ac7-8509-5475bd034c73-metrics-certs podName:e4b2741d-b458-4ac7-8509-5475bd034c73 nodeName:}" failed. No retries permitted until 2026-03-12 13:39:16.502085548 +0000 UTC m=+37.277549183 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e4b2741d-b458-4ac7-8509-5475bd034c73-metrics-certs") pod "network-metrics-daemon-md2rq" (UID: "e4b2741d-b458-4ac7-8509-5475bd034c73") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 12 13:39:08.502539 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:08.502517 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 12 13:39:08.502539 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:08.502543 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 12 13:39:08.502539 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:08.502556 2570 projected.go:194] Error preparing data for projected volume kube-api-access-d74kk for pod openshift-network-diagnostics/network-check-target-6hlfq: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 13:39:08.502843 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:08.502602 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ae56213-c71d-4f84-b4f2-b7874b87ad3d-kube-api-access-d74kk podName:6ae56213-c71d-4f84-b4f2-b7874b87ad3d nodeName:}" failed. No retries permitted until 2026-03-12 13:39:16.5025868 +0000 UTC m=+37.278050421 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-d74kk" (UniqueName: "kubernetes.io/projected/6ae56213-c71d-4f84-b4f2-b7874b87ad3d-kube-api-access-d74kk") pod "network-check-target-6hlfq" (UID: "6ae56213-c71d-4f84-b4f2-b7874b87ad3d") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 13:39:08.881641 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:08.881592 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-ttmnc"]
Mar 12 13:39:08.887379 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:08.887351 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-ttmnc"
Mar 12 13:39:08.889726 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:08.889576 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-tls\""
Mar 12 13:39:08.890644 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:08.890593 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"openshift-service-ca.crt\""
Mar 12 13:39:08.890884 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:08.890864 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-dockercfg-cf95r\""
Mar 12 13:39:08.891079 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:08.891057 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"metrics-client-ca\""
Mar 12 13:39:08.891337 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:08.891323 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"node-exporter-accelerators-collector-config\""
Mar 12 13:39:08.891501 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:08.891479 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-kube-rbac-proxy-config\""
Mar 12 13:39:08.891665 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:08.891652 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kube-root-ca.crt\""
Mar 12 13:39:09.007363 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:09.006924 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/22f063c9-1f02-4784-95c6-b1d60a5bc9cb-node-exporter-textfile\") pod \"node-exporter-ttmnc\" (UID: \"22f063c9-1f02-4784-95c6-b1d60a5bc9cb\") " pod="openshift-monitoring/node-exporter-ttmnc"
Mar 12 13:39:09.007363 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:09.006972 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/22f063c9-1f02-4784-95c6-b1d60a5bc9cb-node-exporter-wtmp\") pod \"node-exporter-ttmnc\" (UID: \"22f063c9-1f02-4784-95c6-b1d60a5bc9cb\") " pod="openshift-monitoring/node-exporter-ttmnc"
Mar 12 13:39:09.007363 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:09.007027 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/22f063c9-1f02-4784-95c6-b1d60a5bc9cb-metrics-client-ca\") pod \"node-exporter-ttmnc\" (UID: \"22f063c9-1f02-4784-95c6-b1d60a5bc9cb\") " pod="openshift-monitoring/node-exporter-ttmnc"
Mar 12 13:39:09.007363 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:09.007061 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/22f063c9-1f02-4784-95c6-b1d60a5bc9cb-node-exporter-accelerators-collector-config\") pod \"node-exporter-ttmnc\" (UID: \"22f063c9-1f02-4784-95c6-b1d60a5bc9cb\") " pod="openshift-monitoring/node-exporter-ttmnc"
Mar 12 13:39:09.007363 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:09.007088 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glnct\" (UniqueName: \"kubernetes.io/projected/22f063c9-1f02-4784-95c6-b1d60a5bc9cb-kube-api-access-glnct\") pod \"node-exporter-ttmnc\" (UID: \"22f063c9-1f02-4784-95c6-b1d60a5bc9cb\") " pod="openshift-monitoring/node-exporter-ttmnc"
Mar 12 13:39:09.007363 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:09.007153 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/22f063c9-1f02-4784-95c6-b1d60a5bc9cb-root\") pod \"node-exporter-ttmnc\" (UID: \"22f063c9-1f02-4784-95c6-b1d60a5bc9cb\") " pod="openshift-monitoring/node-exporter-ttmnc"
Mar 12 13:39:09.007363 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:09.007177 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/22f063c9-1f02-4784-95c6-b1d60a5bc9cb-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-ttmnc\" (UID: \"22f063c9-1f02-4784-95c6-b1d60a5bc9cb\") " pod="openshift-monitoring/node-exporter-ttmnc"
Mar 12 13:39:09.007363 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:09.007237 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/22f063c9-1f02-4784-95c6-b1d60a5bc9cb-node-exporter-tls\") pod \"node-exporter-ttmnc\" (UID: \"22f063c9-1f02-4784-95c6-b1d60a5bc9cb\") " pod="openshift-monitoring/node-exporter-ttmnc"
Mar 12 13:39:09.007363 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:09.007265 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/22f063c9-1f02-4784-95c6-b1d60a5bc9cb-sys\") pod \"node-exporter-ttmnc\" (UID: \"22f063c9-1f02-4784-95c6-b1d60a5bc9cb\") " pod="openshift-monitoring/node-exporter-ttmnc"
Mar 12 13:39:09.107982 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:09.107938 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/22f063c9-1f02-4784-95c6-b1d60a5bc9cb-root\") pod \"node-exporter-ttmnc\" (UID: \"22f063c9-1f02-4784-95c6-b1d60a5bc9cb\") " pod="openshift-monitoring/node-exporter-ttmnc"
Mar 12 13:39:09.108155 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:09.107996 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/22f063c9-1f02-4784-95c6-b1d60a5bc9cb-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-ttmnc\" (UID: \"22f063c9-1f02-4784-95c6-b1d60a5bc9cb\") " pod="openshift-monitoring/node-exporter-ttmnc"
Mar 12 13:39:09.108155 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:09.108058 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/22f063c9-1f02-4784-95c6-b1d60a5bc9cb-root\") pod \"node-exporter-ttmnc\" (UID: \"22f063c9-1f02-4784-95c6-b1d60a5bc9cb\") " pod="openshift-monitoring/node-exporter-ttmnc"
Mar 12 13:39:09.108155 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:09.108075 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/22f063c9-1f02-4784-95c6-b1d60a5bc9cb-node-exporter-tls\") pod \"node-exporter-ttmnc\" (UID: \"22f063c9-1f02-4784-95c6-b1d60a5bc9cb\") " pod="openshift-monitoring/node-exporter-ttmnc"
Mar 12 13:39:09.108155 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:09.108130 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/22f063c9-1f02-4784-95c6-b1d60a5bc9cb-sys\") pod \"node-exporter-ttmnc\" (UID: \"22f063c9-1f02-4784-95c6-b1d60a5bc9cb\") " pod="openshift-monitoring/node-exporter-ttmnc"
Mar 12 13:39:09.108381 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:09.108160 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/22f063c9-1f02-4784-95c6-b1d60a5bc9cb-node-exporter-textfile\") pod \"node-exporter-ttmnc\" (UID: \"22f063c9-1f02-4784-95c6-b1d60a5bc9cb\") " pod="openshift-monitoring/node-exporter-ttmnc"
Mar 12 13:39:09.108381 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:09.108187 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/22f063c9-1f02-4784-95c6-b1d60a5bc9cb-node-exporter-wtmp\") pod \"node-exporter-ttmnc\" (UID: \"22f063c9-1f02-4784-95c6-b1d60a5bc9cb\") " pod="openshift-monitoring/node-exporter-ttmnc"
Mar 12 13:39:09.108381 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:09.108222 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/22f063c9-1f02-4784-95c6-b1d60a5bc9cb-metrics-client-ca\") pod \"node-exporter-ttmnc\" (UID: \"22f063c9-1f02-4784-95c6-b1d60a5bc9cb\") " pod="openshift-monitoring/node-exporter-ttmnc"
Mar 12 13:39:09.108381 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:09.108250 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/22f063c9-1f02-4784-95c6-b1d60a5bc9cb-node-exporter-accelerators-collector-config\") pod \"node-exporter-ttmnc\" (UID: \"22f063c9-1f02-4784-95c6-b1d60a5bc9cb\") " pod="openshift-monitoring/node-exporter-ttmnc"
Mar 12 13:39:09.108381 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:09.108280 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-glnct\" (UniqueName: \"kubernetes.io/projected/22f063c9-1f02-4784-95c6-b1d60a5bc9cb-kube-api-access-glnct\") pod \"node-exporter-ttmnc\" (UID: \"22f063c9-1f02-4784-95c6-b1d60a5bc9cb\") " pod="openshift-monitoring/node-exporter-ttmnc"
Mar 12 13:39:09.108596 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:09.108539 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/22f063c9-1f02-4784-95c6-b1d60a5bc9cb-node-exporter-wtmp\") pod \"node-exporter-ttmnc\" (UID: \"22f063c9-1f02-4784-95c6-b1d60a5bc9cb\") " pod="openshift-monitoring/node-exporter-ttmnc"
Mar 12 13:39:09.108678 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:09.108598 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/22f063c9-1f02-4784-95c6-b1d60a5bc9cb-sys\") pod \"node-exporter-ttmnc\" (UID: \"22f063c9-1f02-4784-95c6-b1d60a5bc9cb\") " pod="openshift-monitoring/node-exporter-ttmnc"
Mar 12 13:39:09.108944 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:09.108920 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/22f063c9-1f02-4784-95c6-b1d60a5bc9cb-node-exporter-textfile\") pod \"node-exporter-ttmnc\" (UID: \"22f063c9-1f02-4784-95c6-b1d60a5bc9cb\") " pod="openshift-monitoring/node-exporter-ttmnc"
Mar 12 13:39:09.109024 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:09.109006 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/22f063c9-1f02-4784-95c6-b1d60a5bc9cb-metrics-client-ca\") pod \"node-exporter-ttmnc\" (UID: \"22f063c9-1f02-4784-95c6-b1d60a5bc9cb\") " pod="openshift-monitoring/node-exporter-ttmnc"
Mar 12 13:39:09.109216 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:09.109191 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/22f063c9-1f02-4784-95c6-b1d60a5bc9cb-node-exporter-accelerators-collector-config\") pod \"node-exporter-ttmnc\" (UID: \"22f063c9-1f02-4784-95c6-b1d60a5bc9cb\") " pod="openshift-monitoring/node-exporter-ttmnc"
Mar 12 13:39:09.113393 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:09.113365 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/22f063c9-1f02-4784-95c6-b1d60a5bc9cb-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-ttmnc\" (UID: \"22f063c9-1f02-4784-95c6-b1d60a5bc9cb\") " pod="openshift-monitoring/node-exporter-ttmnc"
Mar 12 13:39:09.120477 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:09.120412 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/22f063c9-1f02-4784-95c6-b1d60a5bc9cb-node-exporter-tls\") pod \"node-exporter-ttmnc\" (UID: \"22f063c9-1f02-4784-95c6-b1d60a5bc9cb\") " pod="openshift-monitoring/node-exporter-ttmnc"
Mar 12 13:39:09.123864 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:09.123810 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-glnct\" (UniqueName: \"kubernetes.io/projected/22f063c9-1f02-4784-95c6-b1d60a5bc9cb-kube-api-access-glnct\") pod \"node-exporter-ttmnc\" (UID: \"22f063c9-1f02-4784-95c6-b1d60a5bc9cb\") " pod="openshift-monitoring/node-exporter-ttmnc"
Mar 12 13:39:09.201306 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:09.201221 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-ttmnc"
Mar 12 13:39:09.862666 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:09.862614 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-md2rq"
Mar 12 13:39:09.862832 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:09.862747 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-md2rq" podUID="e4b2741d-b458-4ac7-8509-5475bd034c73"
Mar 12 13:39:09.863128 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:09.863111 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-6hlfq"
Mar 12 13:39:09.863191 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:09.863178 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-6hlfq" podUID="6ae56213-c71d-4f84-b4f2-b7874b87ad3d"
Mar 12 13:39:11.861292 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:11.861250 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-md2rq"
Mar 12 13:39:11.861892 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:11.861401 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-6hlfq"
Mar 12 13:39:11.861892 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:11.861412 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-md2rq" podUID="e4b2741d-b458-4ac7-8509-5475bd034c73"
Mar 12 13:39:11.861892 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:11.861735 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-6hlfq" podUID="6ae56213-c71d-4f84-b4f2-b7874b87ad3d"
Mar 12 13:39:11.861892 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:11.861874 2570 scope.go:117] "RemoveContainer" containerID="e77f11cc724be8a876826c4f490d73ef5288a80560406784962d9bd0814a9dd4"
Mar 12 13:39:11.862096 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:11.862075 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal_openshift-machine-config-operator(0e4e8f3d30bf75c22161da0d94e78eb7)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal" podUID="0e4e8f3d30bf75c22161da0d94e78eb7"
Mar 12 13:39:12.552713 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:12.552677 2570 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Mar 12 13:39:13.861088 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:13.861050 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-md2rq"
Mar 12 13:39:13.861514 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:13.861050 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-6hlfq"
Mar 12 13:39:13.861514 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:13.861203 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-md2rq" podUID="e4b2741d-b458-4ac7-8509-5475bd034c73"
Mar 12 13:39:13.861514 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:13.861237 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-6hlfq" podUID="6ae56213-c71d-4f84-b4f2-b7874b87ad3d"
Mar 12 13:39:15.861199 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:15.861163 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-md2rq"
Mar 12 13:39:15.861653 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:15.861165 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-6hlfq"
Mar 12 13:39:15.861653 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:15.861319 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-md2rq" podUID="e4b2741d-b458-4ac7-8509-5475bd034c73"
Mar 12 13:39:15.861653 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:15.861390 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-6hlfq" podUID="6ae56213-c71d-4f84-b4f2-b7874b87ad3d"
Mar 12 13:39:16.565335 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:16.565281 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d74kk\" (UniqueName: \"kubernetes.io/projected/6ae56213-c71d-4f84-b4f2-b7874b87ad3d-kube-api-access-d74kk\") pod \"network-check-target-6hlfq\" (UID: \"6ae56213-c71d-4f84-b4f2-b7874b87ad3d\") " pod="openshift-network-diagnostics/network-check-target-6hlfq"
Mar 12 13:39:16.565546 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:16.565368 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e4b2741d-b458-4ac7-8509-5475bd034c73-metrics-certs\") pod \"network-metrics-daemon-md2rq\" (UID: \"e4b2741d-b458-4ac7-8509-5475bd034c73\") " pod="openshift-multus/network-metrics-daemon-md2rq"
Mar 12 13:39:16.565546 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:16.565478 2570 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 12 13:39:16.565546 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:16.565482 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 12 13:39:16.565546 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:16.565510 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 12 13:39:16.565546 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:16.565525 2570 projected.go:194] Error preparing data for projected volume kube-api-access-d74kk for pod openshift-network-diagnostics/network-check-target-6hlfq: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 13:39:16.565546 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:16.565548 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4b2741d-b458-4ac7-8509-5475bd034c73-metrics-certs podName:e4b2741d-b458-4ac7-8509-5475bd034c73 nodeName:}" failed. No retries permitted until 2026-03-12 13:39:32.565527916 +0000 UTC m=+53.340991537 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e4b2741d-b458-4ac7-8509-5475bd034c73-metrics-certs") pod "network-metrics-daemon-md2rq" (UID: "e4b2741d-b458-4ac7-8509-5475bd034c73") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 12 13:39:16.565874 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:16.565582 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ae56213-c71d-4f84-b4f2-b7874b87ad3d-kube-api-access-d74kk podName:6ae56213-c71d-4f84-b4f2-b7874b87ad3d nodeName:}" failed. No retries permitted until 2026-03-12 13:39:32.565570266 +0000 UTC m=+53.341033887 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-d74kk" (UniqueName: "kubernetes.io/projected/6ae56213-c71d-4f84-b4f2-b7874b87ad3d-kube-api-access-d74kk") pod "network-check-target-6hlfq" (UID: "6ae56213-c71d-4f84-b4f2-b7874b87ad3d") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 13:39:17.861305 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:17.861267 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-md2rq"
Mar 12 13:39:17.861305 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:17.861305 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-6hlfq"
Mar 12 13:39:17.861820 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:17.861413 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-md2rq" podUID="e4b2741d-b458-4ac7-8509-5475bd034c73"
Mar 12 13:39:17.861820 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:17.861640 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-6hlfq" podUID="6ae56213-c71d-4f84-b4f2-b7874b87ad3d"
Mar 12 13:39:18.079363 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:18.079323 2570 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Mar 12 13:39:18.990558 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:18.990285 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-ttmnc" event={"ID":"22f063c9-1f02-4784-95c6-b1d60a5bc9cb","Type":"ContainerStarted","Data":"8eaeeebc161b2a7a7c4551ed708db2a2e4135cd39380345ec6667eb3d6699f5d"}
Mar 12 13:39:19.002343 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:19.000184 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-qztxh" event={"ID":"0c00809b-e1dd-43f1-a58f-fc0a53b67729","Type":"ContainerStarted","Data":"5cefd168416073241c26b9551227dfc5e607907817508d41d40d88c3ed527dcb"}
Mar 12 13:39:19.009692 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:19.008828 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-mz9s4" event={"ID":"0f517242-13ab-4998-9d96-faab59766b3b","Type":"ContainerStarted","Data":"def7646a978688214a68d84d03dcc347a50aee418a3bd7edde3061ff3e6a36c1"}
Mar 12 13:39:19.012646 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:19.012501 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-27rkj" event={"ID":"9f2e052b-174e-48b3-b2f3-0ccb4fde2d95","Type":"ContainerStarted","Data":"555717c00f07441a16e203591ceab9cb03269ed6946eafc86b9d68761eacbc41"}
Mar 12 13:39:19.016666 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:19.016574 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-qztxh" podStartSLOduration=2.782510067 podStartE2EDuration="20.016560329s" podCreationTimestamp="2026-03-12 13:38:59 +0000 UTC" firstStartedPulling="2026-03-12 13:39:01.074727306 +0000 UTC m=+21.850190933" lastFinishedPulling="2026-03-12 13:39:18.308777557 +0000 UTC m=+39.084241195" observedRunningTime="2026-03-12 13:39:19.015888654 +0000 UTC m=+39.791352300" watchObservedRunningTime="2026-03-12 13:39:19.016560329 +0000 UTC m=+39.792023972"
Mar 12 13:39:19.031825 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:19.031749 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/konnectivity-agent-m8bsl" podStartSLOduration=2.773407579 podStartE2EDuration="20.031730045s" podCreationTimestamp="2026-03-12 13:38:59 +0000 UTC" firstStartedPulling="2026-03-12 13:39:01.050486826 +0000 UTC m=+21.825950446" lastFinishedPulling="2026-03-12 13:39:18.308809286 +0000 UTC m=+39.084272912" observedRunningTime="2026-03-12 13:39:19.031244698 +0000 UTC m=+39.806708342" watchObservedRunningTime="2026-03-12 13:39:19.031730045 +0000 UTC m=+39.807193689"
Mar 12 13:39:19.056805 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:19.056750 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-27rkj" podStartSLOduration=2.428146677 podStartE2EDuration="20.056727761s" podCreationTimestamp="2026-03-12 13:38:59 +0000 UTC" firstStartedPulling="2026-03-12 13:39:01.0615891 +0000 UTC m=+21.837052721" lastFinishedPulling="2026-03-12 13:39:18.690170181 +0000 UTC m=+39.465633805" observedRunningTime="2026-03-12 13:39:19.056270018 +0000 UTC m=+39.831733661" watchObservedRunningTime="2026-03-12 13:39:19.056727761 +0000 UTC m=+39.832191405"
Mar 12 13:39:19.862152 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:19.861965 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-6hlfq"
Mar 12 13:39:19.862322 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:19.862037 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-md2rq"
Mar 12 13:39:19.862322 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:19.862230 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-6hlfq" podUID="6ae56213-c71d-4f84-b4f2-b7874b87ad3d"
Mar 12 13:39:19.862322 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:19.862306 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-md2rq" podUID="e4b2741d-b458-4ac7-8509-5475bd034c73" Mar 12 13:39:20.032937 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:20.032896 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" event={"ID":"5acc1851-6633-49b2-88c3-177e3bea26af","Type":"ContainerStarted","Data":"85b00dbee4942db77e915ba10a030eae10a994f4ad6d1af837c373542e41f7aa"} Mar 12 13:39:20.033494 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:20.032947 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" event={"ID":"5acc1851-6633-49b2-88c3-177e3bea26af","Type":"ContainerStarted","Data":"dfeca888757c6ce6a871df5d34b96dd8b0f5781ba72e7553b100d14e0a65e800"} Mar 12 13:39:20.033494 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:20.032960 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" event={"ID":"5acc1851-6633-49b2-88c3-177e3bea26af","Type":"ContainerStarted","Data":"8a4d4ccf06a0d893b97e8b3ead7efa432616b2c3ad42d4f423b00a61a9bfca41"} Mar 12 13:39:20.033494 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:20.032973 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" event={"ID":"5acc1851-6633-49b2-88c3-177e3bea26af","Type":"ContainerStarted","Data":"71072a5cc5499d0a479d9e303539b3a797f918c6aeb7c9b5a31c466f924fd695"} Mar 12 13:39:20.033494 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:20.033001 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" event={"ID":"5acc1851-6633-49b2-88c3-177e3bea26af","Type":"ContainerStarted","Data":"cbe4332209dff66d82f0b6f28206fdf88dbdcd66317129f5f458d7cb11fbeea9"} Mar 12 13:39:20.033494 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:20.033014 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" 
event={"ID":"5acc1851-6633-49b2-88c3-177e3bea26af","Type":"ContainerStarted","Data":"23ab81c9830c67c4cbe2753f039a2df9439ff24e85e2d241c4c33eed417b9490"} Mar 12 13:39:20.034368 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:20.034342 2570 generic.go:358] "Generic (PLEG): container finished" podID="7ac5590a-ef07-4cda-8357-78aae27ac5e8" containerID="53d2d2d4957c84132718ebd0220b35d8995da5b8830896c3c2aa88bcbc8c3380" exitCode=0 Mar 12 13:39:20.034481 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:20.034426 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qbtlm" event={"ID":"7ac5590a-ef07-4cda-8357-78aae27ac5e8","Type":"ContainerDied","Data":"53d2d2d4957c84132718ebd0220b35d8995da5b8830896c3c2aa88bcbc8c3380"} Mar 12 13:39:20.036304 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:20.036278 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-w2hbj" event={"ID":"a778e2cf-6292-41a8-a8e6-44ba43631c82","Type":"ContainerStarted","Data":"60a9cf87a1fa808b2573ca3420a7997f9c9266db26dc7e3f04f6b5b2219365bb"} Mar 12 13:39:20.037956 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:20.037920 2570 generic.go:358] "Generic (PLEG): container finished" podID="22f063c9-1f02-4784-95c6-b1d60a5bc9cb" containerID="6c9f775876771939a45416bdbbb62d2ea43c765873de83a3824cf6f84d9043f2" exitCode=0 Mar 12 13:39:20.038225 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:20.038196 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-ttmnc" event={"ID":"22f063c9-1f02-4784-95c6-b1d60a5bc9cb","Type":"ContainerDied","Data":"6c9f775876771939a45416bdbbb62d2ea43c765873de83a3824cf6f84d9043f2"} Mar 12 13:39:20.040232 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:20.040208 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-z2zhm" 
event={"ID":"e2915633-eccd-4769-8960-a86012fad6da","Type":"ContainerStarted","Data":"0c7d3a8d55e03e91c3466504953f71a40e210223371192729671a808e89b67cf"} Mar 12 13:39:20.043864 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:20.043840 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-rcndk" event={"ID":"60c99a96-5455-4303-ab66-b21a59d9c105","Type":"ContainerStarted","Data":"b2f9c08ef401d9e9e034befaf199c4ae63f5546a8e6cac7425cdd6dcd4f38dd6"} Mar 12 13:39:20.045700 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:20.045673 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-m8bsl" event={"ID":"b2434af6-7e97-4039-9604-9310288bca08","Type":"ContainerStarted","Data":"b35ab2c46f923718047f71ed897a08e039f86797ea62cfee765f00e5e57c5c1c"} Mar 12 13:39:20.048786 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:20.048765 2570 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock" Mar 12 13:39:20.110723 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:20.110660 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-rcndk" podStartSLOduration=3.5281707989999997 podStartE2EDuration="21.110611087s" podCreationTimestamp="2026-03-12 13:38:59 +0000 UTC" firstStartedPulling="2026-03-12 13:39:01.068539405 +0000 UTC m=+21.844003030" lastFinishedPulling="2026-03-12 13:39:18.650979687 +0000 UTC m=+39.426443318" observedRunningTime="2026-03-12 13:39:20.110440333 +0000 UTC m=+40.885903976" watchObservedRunningTime="2026-03-12 13:39:20.110611087 +0000 UTC m=+40.886074732" Mar 12 13:39:20.125151 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:20.125103 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-z2zhm" podStartSLOduration=3.55245031 podStartE2EDuration="21.12508669s" podCreationTimestamp="2026-03-12 
13:38:59 +0000 UTC" firstStartedPulling="2026-03-12 13:39:01.078660869 +0000 UTC m=+21.854124490" lastFinishedPulling="2026-03-12 13:39:18.651297243 +0000 UTC m=+39.426760870" observedRunningTime="2026-03-12 13:39:20.125054575 +0000 UTC m=+40.900518248" watchObservedRunningTime="2026-03-12 13:39:20.12508669 +0000 UTC m=+40.900550332" Mar 12 13:39:20.800714 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:20.800588 2570 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock","Timestamp":"2026-03-12T13:39:20.048785199Z","UUID":"466a17e9-4d0c-4083-b74e-014594df52e1","Handler":null,"Name":"","Endpoint":""} Mar 12 13:39:20.803032 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:20.802579 2570 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: ebs.csi.aws.com endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock versions: 1.0.0 Mar 12 13:39:20.803032 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:20.802609 2570 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: ebs.csi.aws.com at endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock Mar 12 13:39:21.042441 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:21.042389 2570 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/konnectivity-agent-m8bsl" Mar 12 13:39:21.043080 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:21.042990 2570 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/konnectivity-agent-m8bsl" Mar 12 13:39:21.050402 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:21.050366 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-ttmnc" event={"ID":"22f063c9-1f02-4784-95c6-b1d60a5bc9cb","Type":"ContainerStarted","Data":"28d3c10027d413302cbac35bb9b3b1c450e266ac07684d62d5794f62fbdc5c12"} Mar 12 13:39:21.050402 ip-10-0-142-111 kubenswrapper[2570]: I0312 
13:39:21.050406 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-ttmnc" event={"ID":"22f063c9-1f02-4784-95c6-b1d60a5bc9cb","Type":"ContainerStarted","Data":"0cb53679982a93b1ac5b9a0a58f5bc9122820fadf2aee21ff1cd01dee5866c70"} Mar 12 13:39:21.052405 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:21.052331 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-mz9s4" event={"ID":"0f517242-13ab-4998-9d96-faab59766b3b","Type":"ContainerStarted","Data":"80074d01212f4b9ccf1ed716d0e562dd7dbd2be1daf0e378607315291911fc67"} Mar 12 13:39:21.052405 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:21.052370 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-mz9s4" event={"ID":"0f517242-13ab-4998-9d96-faab59766b3b","Type":"ContainerStarted","Data":"a4d5c843e432eba192dd85150a99dd3af4867f976624ea9551290de492c895c1"} Mar 12 13:39:21.053403 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:21.053384 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kube-system/konnectivity-agent-m8bsl" Mar 12 13:39:21.053666 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:21.053650 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/konnectivity-agent-m8bsl" Mar 12 13:39:21.060680 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:21.060634 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/iptables-alerter-w2hbj" podStartSLOduration=4.781379678 podStartE2EDuration="22.060601068s" podCreationTimestamp="2026-03-12 13:38:59 +0000 UTC" firstStartedPulling="2026-03-12 13:39:01.029579289 +0000 UTC m=+21.805042912" lastFinishedPulling="2026-03-12 13:39:18.30880068 +0000 UTC m=+39.084264302" observedRunningTime="2026-03-12 13:39:20.140314854 +0000 UTC m=+40.915778507" watchObservedRunningTime="2026-03-12 
13:39:21.060601068 +0000 UTC m=+41.836064715" Mar 12 13:39:21.127802 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:21.127727 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-ttmnc" podStartSLOduration=12.193432266 podStartE2EDuration="13.127708546s" podCreationTimestamp="2026-03-12 13:39:08 +0000 UTC" firstStartedPulling="2026-03-12 13:39:18.672434944 +0000 UTC m=+39.447898572" lastFinishedPulling="2026-03-12 13:39:19.606711217 +0000 UTC m=+40.382174852" observedRunningTime="2026-03-12 13:39:21.103373827 +0000 UTC m=+41.878837472" watchObservedRunningTime="2026-03-12 13:39:21.127708546 +0000 UTC m=+41.903172189" Mar 12 13:39:21.127994 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:21.127900 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-mz9s4" podStartSLOduration=2.275465027 podStartE2EDuration="22.127893737s" podCreationTimestamp="2026-03-12 13:38:59 +0000 UTC" firstStartedPulling="2026-03-12 13:39:01.056166261 +0000 UTC m=+21.831629888" lastFinishedPulling="2026-03-12 13:39:20.90859496 +0000 UTC m=+41.684058598" observedRunningTime="2026-03-12 13:39:21.127204212 +0000 UTC m=+41.902667857" watchObservedRunningTime="2026-03-12 13:39:21.127893737 +0000 UTC m=+41.903357400" Mar 12 13:39:21.861568 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:21.861532 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-6hlfq" Mar 12 13:39:21.861756 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:21.861691 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-6hlfq" podUID="6ae56213-c71d-4f84-b4f2-b7874b87ad3d" Mar 12 13:39:21.861827 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:21.861748 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-md2rq" Mar 12 13:39:21.861884 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:21.861858 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-md2rq" podUID="e4b2741d-b458-4ac7-8509-5475bd034c73" Mar 12 13:39:22.057542 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:22.057495 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" event={"ID":"5acc1851-6633-49b2-88c3-177e3bea26af","Type":"ContainerStarted","Data":"5840120f649a314f9a3987f568e280984e4eb66f35153ce9cae7d891387ea9c5"} Mar 12 13:39:22.862219 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:22.862184 2570 scope.go:117] "RemoveContainer" containerID="e77f11cc724be8a876826c4f490d73ef5288a80560406784962d9bd0814a9dd4" Mar 12 13:39:23.861573 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:23.861307 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-6hlfq" Mar 12 13:39:23.862068 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:23.861354 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-md2rq" Mar 12 13:39:23.862068 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:23.861680 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-6hlfq" podUID="6ae56213-c71d-4f84-b4f2-b7874b87ad3d" Mar 12 13:39:23.862068 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:23.861738 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-md2rq" podUID="e4b2741d-b458-4ac7-8509-5475bd034c73" Mar 12 13:39:25.065365 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:25.065204 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal_0e4e8f3d30bf75c22161da0d94e78eb7/kube-rbac-proxy-crio/2.log" Mar 12 13:39:25.065903 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:25.065726 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal" event={"ID":"0e4e8f3d30bf75c22161da0d94e78eb7","Type":"ContainerStarted","Data":"72bfab531c52c95531363b1efcd881ad0bc971fc708f658b48069d67f0b9cf4c"} Mar 12 13:39:25.068791 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:25.068760 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" 
event={"ID":"5acc1851-6633-49b2-88c3-177e3bea26af","Type":"ContainerStarted","Data":"a7bb3a1c9bf7852faf2ddec4d8c030216a5a90d3a8bf642b9b3ccafa7a194064"} Mar 12 13:39:25.069063 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:25.069046 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" Mar 12 13:39:25.069144 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:25.069069 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" Mar 12 13:39:25.070575 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:25.070551 2570 generic.go:358] "Generic (PLEG): container finished" podID="7ac5590a-ef07-4cda-8357-78aae27ac5e8" containerID="2c3f5ba689cd5e9c2a474a26537ad7675a699060a9b92b9ffeb18828c44b2cc3" exitCode=0 Mar 12 13:39:25.070711 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:25.070596 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qbtlm" event={"ID":"7ac5590a-ef07-4cda-8357-78aae27ac5e8","Type":"ContainerDied","Data":"2c3f5ba689cd5e9c2a474a26537ad7675a699060a9b92b9ffeb18828c44b2cc3"} Mar 12 13:39:25.082101 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:25.082051 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal" podStartSLOduration=25.082025509 podStartE2EDuration="25.082025509s" podCreationTimestamp="2026-03-12 13:39:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:39:25.081316429 +0000 UTC m=+45.856780075" watchObservedRunningTime="2026-03-12 13:39:25.082025509 +0000 UTC m=+45.857489153" Mar 12 13:39:25.085860 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:25.085834 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" Mar 12 13:39:25.126970 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:25.126915 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" podStartSLOduration=8.277715021 podStartE2EDuration="26.126899097s" podCreationTimestamp="2026-03-12 13:38:59 +0000 UTC" firstStartedPulling="2026-03-12 13:39:01.048005445 +0000 UTC m=+21.823469079" lastFinishedPulling="2026-03-12 13:39:18.89718952 +0000 UTC m=+39.672653155" observedRunningTime="2026-03-12 13:39:25.126419842 +0000 UTC m=+45.901883494" watchObservedRunningTime="2026-03-12 13:39:25.126899097 +0000 UTC m=+45.902362740" Mar 12 13:39:25.861916 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:25.861885 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-6hlfq" Mar 12 13:39:25.862150 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:25.862011 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-6hlfq" podUID="6ae56213-c71d-4f84-b4f2-b7874b87ad3d" Mar 12 13:39:25.862150 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:25.862105 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-md2rq" Mar 12 13:39:25.862265 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:25.862243 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-md2rq" podUID="e4b2741d-b458-4ac7-8509-5475bd034c73" Mar 12 13:39:26.073298 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:26.073270 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" Mar 12 13:39:26.090109 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:26.090076 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd" Mar 12 13:39:26.240937 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:26.240604 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-6hlfq"] Mar 12 13:39:26.240937 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:26.240847 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-6hlfq" Mar 12 13:39:26.241287 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:26.241252 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-6hlfq" podUID="6ae56213-c71d-4f84-b4f2-b7874b87ad3d" Mar 12 13:39:26.245806 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:26.245771 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-md2rq"] Mar 12 13:39:26.245968 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:26.245917 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-md2rq" Mar 12 13:39:26.246065 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:26.246029 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-md2rq" podUID="e4b2741d-b458-4ac7-8509-5475bd034c73" Mar 12 13:39:27.076560 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:27.076528 2570 generic.go:358] "Generic (PLEG): container finished" podID="7ac5590a-ef07-4cda-8357-78aae27ac5e8" containerID="a60d0ba2e45f32dfa749eb385822eb57ef100030495faa9553c193fcd5c3c97e" exitCode=0 Mar 12 13:39:27.077160 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:27.076643 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qbtlm" event={"ID":"7ac5590a-ef07-4cda-8357-78aae27ac5e8","Type":"ContainerDied","Data":"a60d0ba2e45f32dfa749eb385822eb57ef100030495faa9553c193fcd5c3c97e"} Mar 12 13:39:27.862022 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:27.861966 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-md2rq" Mar 12 13:39:27.862218 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:27.861989 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-6hlfq" Mar 12 13:39:27.862218 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:27.862166 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-md2rq" podUID="e4b2741d-b458-4ac7-8509-5475bd034c73" Mar 12 13:39:27.862367 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:27.862341 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-6hlfq" podUID="6ae56213-c71d-4f84-b4f2-b7874b87ad3d" Mar 12 13:39:29.083193 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:29.083006 2570 generic.go:358] "Generic (PLEG): container finished" podID="7ac5590a-ef07-4cda-8357-78aae27ac5e8" containerID="e43c1261d95d3883062f873752945d24990f939b402eee5b5d93930aeb377aba" exitCode=0 Mar 12 13:39:29.083584 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:29.083090 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qbtlm" event={"ID":"7ac5590a-ef07-4cda-8357-78aae27ac5e8","Type":"ContainerDied","Data":"e43c1261d95d3883062f873752945d24990f939b402eee5b5d93930aeb377aba"} Mar 12 13:39:29.862711 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:29.862678 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-6hlfq" Mar 12 13:39:29.862925 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:29.862769 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-6hlfq" podUID="6ae56213-c71d-4f84-b4f2-b7874b87ad3d" Mar 12 13:39:29.862925 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:29.862808 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-md2rq" Mar 12 13:39:29.862925 ip-10-0-142-111 kubenswrapper[2570]: E0312 13:39:29.862864 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-md2rq" podUID="e4b2741d-b458-4ac7-8509-5475bd034c73" Mar 12 13:39:31.490604 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.490563 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-111.ec2.internal" event="NodeReady" Mar 12 13:39:31.491056 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.490758 2570 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Mar 12 13:39:31.556696 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.556381 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-d88pt"] Mar 12 13:39:31.596185 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.596143 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-xzph6"] Mar 12 13:39:31.596537 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.596512 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-d88pt"
Mar 12 13:39:31.599810 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.599785 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Mar 12 13:39:31.599962 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.599784 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Mar 12 13:39:31.599962 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.599785 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Mar 12 13:39:31.600928 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.600734 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-6llcj\""
Mar 12 13:39:31.625952 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.625918 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-d88pt"]
Mar 12 13:39:31.625952 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.625949 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xzph6"]
Mar 12 13:39:31.626181 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.626120 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-xzph6"
Mar 12 13:39:31.628779 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.628754 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Mar 12 13:39:31.629104 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.629082 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Mar 12 13:39:31.629335 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.629088 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-4fw8j\""
Mar 12 13:39:31.659540 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.659506 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-runtime-extractor-gz92z"]
Mar 12 13:39:31.681671 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.681610 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-gz92z"]
Mar 12 13:39:31.681863 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.681832 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-runtime-extractor-gz92z"
Mar 12 13:39:31.682968 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.682940 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfqwx\" (UniqueName: \"kubernetes.io/projected/a9dd6e04-4e16-4606-8ccd-f7892664a9fa-kube-api-access-vfqwx\") pod \"ingress-canary-d88pt\" (UID: \"a9dd6e04-4e16-4606-8ccd-f7892664a9fa\") " pod="openshift-ingress-canary/ingress-canary-d88pt"
Mar 12 13:39:31.683216 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.683004 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9dd6e04-4e16-4606-8ccd-f7892664a9fa-cert\") pod \"ingress-canary-d88pt\" (UID: \"a9dd6e04-4e16-4606-8ccd-f7892664a9fa\") " pod="openshift-ingress-canary/ingress-canary-d88pt"
Mar 12 13:39:31.685866 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.685841 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-sa-dockercfg-4sgvv\""
Mar 12 13:39:31.686014 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.685842 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"openshift-service-ca.crt\""
Mar 12 13:39:31.686290 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.686271 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-root-ca.crt\""
Mar 12 13:39:31.686498 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.686481 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-tls\""
Mar 12 13:39:31.686588 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.686495 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-rbac-proxy\""
Mar 12 13:39:31.783807 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.783770 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/65fec5eb-81fa-453b-bdc2-6972e50122f8-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-gz92z\" (UID: \"65fec5eb-81fa-453b-bdc2-6972e50122f8\") " pod="openshift-insights/insights-runtime-extractor-gz92z"
Mar 12 13:39:31.783994 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.783825 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9dd6e04-4e16-4606-8ccd-f7892664a9fa-cert\") pod \"ingress-canary-d88pt\" (UID: \"a9dd6e04-4e16-4606-8ccd-f7892664a9fa\") " pod="openshift-ingress-canary/ingress-canary-d88pt"
Mar 12 13:39:31.783994 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.783966 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ec4cf77c-b3ee-4a56-a3b4-73324af3351d-metrics-tls\") pod \"dns-default-xzph6\" (UID: \"ec4cf77c-b3ee-4a56-a3b4-73324af3351d\") " pod="openshift-dns/dns-default-xzph6"
Mar 12 13:39:31.784094 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.784004 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ec4cf77c-b3ee-4a56-a3b4-73324af3351d-tmp-dir\") pod \"dns-default-xzph6\" (UID: \"ec4cf77c-b3ee-4a56-a3b4-73324af3351d\") " pod="openshift-dns/dns-default-xzph6"
Mar 12 13:39:31.784094 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.784027 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67xrh\" (UniqueName: \"kubernetes.io/projected/ec4cf77c-b3ee-4a56-a3b4-73324af3351d-kube-api-access-67xrh\") pod \"dns-default-xzph6\" (UID: \"ec4cf77c-b3ee-4a56-a3b4-73324af3351d\") " pod="openshift-dns/dns-default-xzph6"
Mar 12 13:39:31.784094 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.784059 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vfqwx\" (UniqueName: \"kubernetes.io/projected/a9dd6e04-4e16-4606-8ccd-f7892664a9fa-kube-api-access-vfqwx\") pod \"ingress-canary-d88pt\" (UID: \"a9dd6e04-4e16-4606-8ccd-f7892664a9fa\") " pod="openshift-ingress-canary/ingress-canary-d88pt"
Mar 12 13:39:31.784094 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.784087 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/65fec5eb-81fa-453b-bdc2-6972e50122f8-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-gz92z\" (UID: \"65fec5eb-81fa-453b-bdc2-6972e50122f8\") " pod="openshift-insights/insights-runtime-extractor-gz92z"
Mar 12 13:39:31.784261 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.784113 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec4cf77c-b3ee-4a56-a3b4-73324af3351d-config-volume\") pod \"dns-default-xzph6\" (UID: \"ec4cf77c-b3ee-4a56-a3b4-73324af3351d\") " pod="openshift-dns/dns-default-xzph6"
Mar 12 13:39:31.784261 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.784176 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/65fec5eb-81fa-453b-bdc2-6972e50122f8-crio-socket\") pod \"insights-runtime-extractor-gz92z\" (UID: \"65fec5eb-81fa-453b-bdc2-6972e50122f8\") " pod="openshift-insights/insights-runtime-extractor-gz92z"
Mar 12 13:39:31.784261 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.784204 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/65fec5eb-81fa-453b-bdc2-6972e50122f8-data-volume\") pod \"insights-runtime-extractor-gz92z\" (UID: \"65fec5eb-81fa-453b-bdc2-6972e50122f8\") " pod="openshift-insights/insights-runtime-extractor-gz92z"
Mar 12 13:39:31.784261 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.784229 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx4g7\" (UniqueName: \"kubernetes.io/projected/65fec5eb-81fa-453b-bdc2-6972e50122f8-kube-api-access-jx4g7\") pod \"insights-runtime-extractor-gz92z\" (UID: \"65fec5eb-81fa-453b-bdc2-6972e50122f8\") " pod="openshift-insights/insights-runtime-extractor-gz92z"
Mar 12 13:39:31.788927 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.788903 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9dd6e04-4e16-4606-8ccd-f7892664a9fa-cert\") pod \"ingress-canary-d88pt\" (UID: \"a9dd6e04-4e16-4606-8ccd-f7892664a9fa\") " pod="openshift-ingress-canary/ingress-canary-d88pt"
Mar 12 13:39:31.793256 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.793220 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfqwx\" (UniqueName: \"kubernetes.io/projected/a9dd6e04-4e16-4606-8ccd-f7892664a9fa-kube-api-access-vfqwx\") pod \"ingress-canary-d88pt\" (UID: \"a9dd6e04-4e16-4606-8ccd-f7892664a9fa\") " pod="openshift-ingress-canary/ingress-canary-d88pt"
Mar 12 13:39:31.862103 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.862007 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-6hlfq"
Mar 12 13:39:31.862549 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.862304 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-md2rq"
Mar 12 13:39:31.867370 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.867259 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Mar 12 13:39:31.867540 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.867405 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Mar 12 13:39:31.867774 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.867760 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Mar 12 13:39:31.868485 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.868464 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-wbttq\""
Mar 12 13:39:31.868585 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.868503 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-sj9kz\""
Mar 12 13:39:31.884901 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.884873 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ec4cf77c-b3ee-4a56-a3b4-73324af3351d-metrics-tls\") pod \"dns-default-xzph6\" (UID: \"ec4cf77c-b3ee-4a56-a3b4-73324af3351d\") " pod="openshift-dns/dns-default-xzph6"
Mar 12 13:39:31.884901 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.884904 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ec4cf77c-b3ee-4a56-a3b4-73324af3351d-tmp-dir\") pod \"dns-default-xzph6\" (UID: \"ec4cf77c-b3ee-4a56-a3b4-73324af3351d\") " pod="openshift-dns/dns-default-xzph6"
Mar 12 13:39:31.885116 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.884927 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-67xrh\" (UniqueName: \"kubernetes.io/projected/ec4cf77c-b3ee-4a56-a3b4-73324af3351d-kube-api-access-67xrh\") pod \"dns-default-xzph6\" (UID: \"ec4cf77c-b3ee-4a56-a3b4-73324af3351d\") " pod="openshift-dns/dns-default-xzph6"
Mar 12 13:39:31.885116 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.884953 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/65fec5eb-81fa-453b-bdc2-6972e50122f8-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-gz92z\" (UID: \"65fec5eb-81fa-453b-bdc2-6972e50122f8\") " pod="openshift-insights/insights-runtime-extractor-gz92z"
Mar 12 13:39:31.885297 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.885267 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec4cf77c-b3ee-4a56-a3b4-73324af3351d-config-volume\") pod \"dns-default-xzph6\" (UID: \"ec4cf77c-b3ee-4a56-a3b4-73324af3351d\") " pod="openshift-dns/dns-default-xzph6"
Mar 12 13:39:31.885370 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.885316 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ec4cf77c-b3ee-4a56-a3b4-73324af3351d-tmp-dir\") pod \"dns-default-xzph6\" (UID: \"ec4cf77c-b3ee-4a56-a3b4-73324af3351d\") " pod="openshift-dns/dns-default-xzph6"
Mar 12 13:39:31.885370 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.885328 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/65fec5eb-81fa-453b-bdc2-6972e50122f8-crio-socket\") pod \"insights-runtime-extractor-gz92z\" (UID: \"65fec5eb-81fa-453b-bdc2-6972e50122f8\") " pod="openshift-insights/insights-runtime-extractor-gz92z"
Mar 12 13:39:31.885370 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.885355 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/65fec5eb-81fa-453b-bdc2-6972e50122f8-data-volume\") pod \"insights-runtime-extractor-gz92z\" (UID: \"65fec5eb-81fa-453b-bdc2-6972e50122f8\") " pod="openshift-insights/insights-runtime-extractor-gz92z"
Mar 12 13:39:31.885470 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.885388 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jx4g7\" (UniqueName: \"kubernetes.io/projected/65fec5eb-81fa-453b-bdc2-6972e50122f8-kube-api-access-jx4g7\") pod \"insights-runtime-extractor-gz92z\" (UID: \"65fec5eb-81fa-453b-bdc2-6972e50122f8\") " pod="openshift-insights/insights-runtime-extractor-gz92z"
Mar 12 13:39:31.885470 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.885419 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/65fec5eb-81fa-453b-bdc2-6972e50122f8-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-gz92z\" (UID: \"65fec5eb-81fa-453b-bdc2-6972e50122f8\") " pod="openshift-insights/insights-runtime-extractor-gz92z"
Mar 12 13:39:31.885580 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.885553 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/65fec5eb-81fa-453b-bdc2-6972e50122f8-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-gz92z\" (UID: \"65fec5eb-81fa-453b-bdc2-6972e50122f8\") " pod="openshift-insights/insights-runtime-extractor-gz92z"
Mar 12 13:39:31.885645 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.885587 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/65fec5eb-81fa-453b-bdc2-6972e50122f8-crio-socket\") pod \"insights-runtime-extractor-gz92z\" (UID: \"65fec5eb-81fa-453b-bdc2-6972e50122f8\") " pod="openshift-insights/insights-runtime-extractor-gz92z"
Mar 12 13:39:31.885787 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.885769 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/65fec5eb-81fa-453b-bdc2-6972e50122f8-data-volume\") pod \"insights-runtime-extractor-gz92z\" (UID: \"65fec5eb-81fa-453b-bdc2-6972e50122f8\") " pod="openshift-insights/insights-runtime-extractor-gz92z"
Mar 12 13:39:31.885852 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.885817 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec4cf77c-b3ee-4a56-a3b4-73324af3351d-config-volume\") pod \"dns-default-xzph6\" (UID: \"ec4cf77c-b3ee-4a56-a3b4-73324af3351d\") " pod="openshift-dns/dns-default-xzph6"
Mar 12 13:39:31.888081 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.888054 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ec4cf77c-b3ee-4a56-a3b4-73324af3351d-metrics-tls\") pod \"dns-default-xzph6\" (UID: \"ec4cf77c-b3ee-4a56-a3b4-73324af3351d\") " pod="openshift-dns/dns-default-xzph6"
Mar 12 13:39:31.888717 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.888693 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/65fec5eb-81fa-453b-bdc2-6972e50122f8-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-gz92z\" (UID: \"65fec5eb-81fa-453b-bdc2-6972e50122f8\") " pod="openshift-insights/insights-runtime-extractor-gz92z"
Mar 12 13:39:31.897977 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.897941 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-67xrh\" (UniqueName: \"kubernetes.io/projected/ec4cf77c-b3ee-4a56-a3b4-73324af3351d-kube-api-access-67xrh\") pod \"dns-default-xzph6\" (UID: \"ec4cf77c-b3ee-4a56-a3b4-73324af3351d\") " pod="openshift-dns/dns-default-xzph6"
Mar 12 13:39:31.905898 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.905862 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx4g7\" (UniqueName: \"kubernetes.io/projected/65fec5eb-81fa-453b-bdc2-6972e50122f8-kube-api-access-jx4g7\") pod \"insights-runtime-extractor-gz92z\" (UID: \"65fec5eb-81fa-453b-bdc2-6972e50122f8\") " pod="openshift-insights/insights-runtime-extractor-gz92z"
Mar 12 13:39:31.907671 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.907646 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-d88pt"
Mar 12 13:39:31.937205 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.937173 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-xzph6"
Mar 12 13:39:31.995251 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:31.994807 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-runtime-extractor-gz92z"
Mar 12 13:39:32.093425 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:32.093394 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-d88pt"]
Mar 12 13:39:32.100463 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:39:32.100426 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9dd6e04_4e16_4606_8ccd_f7892664a9fa.slice/crio-1f709cf19ae898ab8641165f09c63595182951529bcf342ccee420fd5738c39b WatchSource:0}: Error finding container 1f709cf19ae898ab8641165f09c63595182951529bcf342ccee420fd5738c39b: Status 404 returned error can't find the container with id 1f709cf19ae898ab8641165f09c63595182951529bcf342ccee420fd5738c39b
Mar 12 13:39:32.109240 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:32.109213 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xzph6"]
Mar 12 13:39:32.113530 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:39:32.113460 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec4cf77c_b3ee_4a56_a3b4_73324af3351d.slice/crio-28ec4182582dbcb70b4d6e7129c003a3c77c19ba40b3e4321fbf380a4bc3bece WatchSource:0}: Error finding container 28ec4182582dbcb70b4d6e7129c003a3c77c19ba40b3e4321fbf380a4bc3bece: Status 404 returned error can't find the container with id 28ec4182582dbcb70b4d6e7129c003a3c77c19ba40b3e4321fbf380a4bc3bece
Mar 12 13:39:32.185155 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:32.185117 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-gz92z"]
Mar 12 13:39:32.189004 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:39:32.188971 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65fec5eb_81fa_453b_bdc2_6972e50122f8.slice/crio-403dc091df5180668956cb3234df72b32641da5e89c65dd3829d9bb269bde656 WatchSource:0}: Error finding container 403dc091df5180668956cb3234df72b32641da5e89c65dd3829d9bb269bde656: Status 404 returned error can't find the container with id 403dc091df5180668956cb3234df72b32641da5e89c65dd3829d9bb269bde656
Mar 12 13:39:32.593318 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:32.593115 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d74kk\" (UniqueName: \"kubernetes.io/projected/6ae56213-c71d-4f84-b4f2-b7874b87ad3d-kube-api-access-d74kk\") pod \"network-check-target-6hlfq\" (UID: \"6ae56213-c71d-4f84-b4f2-b7874b87ad3d\") " pod="openshift-network-diagnostics/network-check-target-6hlfq"
Mar 12 13:39:32.593318 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:32.593325 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e4b2741d-b458-4ac7-8509-5475bd034c73-metrics-certs\") pod \"network-metrics-daemon-md2rq\" (UID: \"e4b2741d-b458-4ac7-8509-5475bd034c73\") " pod="openshift-multus/network-metrics-daemon-md2rq"
Mar 12 13:39:32.596478 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:32.596452 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e4b2741d-b458-4ac7-8509-5475bd034c73-metrics-certs\") pod \"network-metrics-daemon-md2rq\" (UID: \"e4b2741d-b458-4ac7-8509-5475bd034c73\") " pod="openshift-multus/network-metrics-daemon-md2rq"
Mar 12 13:39:32.596764 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:32.596744 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d74kk\" (UniqueName: \"kubernetes.io/projected/6ae56213-c71d-4f84-b4f2-b7874b87ad3d-kube-api-access-d74kk\") pod \"network-check-target-6hlfq\" (UID: \"6ae56213-c71d-4f84-b4f2-b7874b87ad3d\") " pod="openshift-network-diagnostics/network-check-target-6hlfq"
Mar 12 13:39:32.774986 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:32.774936 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-6hlfq"
Mar 12 13:39:32.781798 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:32.781764 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-md2rq"
Mar 12 13:39:33.095572 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:33.095532 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-gz92z" event={"ID":"65fec5eb-81fa-453b-bdc2-6972e50122f8","Type":"ContainerStarted","Data":"7f93c82292ed8ce836be69bf2eedbd02978d400c764ea292962689ab5f67f45d"}
Mar 12 13:39:33.095572 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:33.095579 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-gz92z" event={"ID":"65fec5eb-81fa-453b-bdc2-6972e50122f8","Type":"ContainerStarted","Data":"403dc091df5180668956cb3234df72b32641da5e89c65dd3829d9bb269bde656"}
Mar 12 13:39:33.097437 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:33.097405 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-d88pt" event={"ID":"a9dd6e04-4e16-4606-8ccd-f7892664a9fa","Type":"ContainerStarted","Data":"1f709cf19ae898ab8641165f09c63595182951529bcf342ccee420fd5738c39b"}
Mar 12 13:39:33.098576 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:33.098534 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xzph6" event={"ID":"ec4cf77c-b3ee-4a56-a3b4-73324af3351d","Type":"ContainerStarted","Data":"28ec4182582dbcb70b4d6e7129c003a3c77c19ba40b3e4321fbf380a4bc3bece"}
Mar 12 13:39:36.848538 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:36.848276 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-md2rq"]
Mar 12 13:39:36.854853 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:39:36.854749 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4b2741d_b458_4ac7_8509_5475bd034c73.slice/crio-493fc2bb562541abaa8a82f4850eb2d484bc28905fec2d86a1c868a94f0610b6 WatchSource:0}: Error finding container 493fc2bb562541abaa8a82f4850eb2d484bc28905fec2d86a1c868a94f0610b6: Status 404 returned error can't find the container with id 493fc2bb562541abaa8a82f4850eb2d484bc28905fec2d86a1c868a94f0610b6
Mar 12 13:39:36.859805 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:36.859779 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-6hlfq"]
Mar 12 13:39:36.868608 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:39:36.868564 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ae56213_c71d_4f84_b4f2_b7874b87ad3d.slice/crio-3c59e698003d9fa5b53b7afe00cdb59a3aebfdb4ce4c4e6dc77d3914ab078412 WatchSource:0}: Error finding container 3c59e698003d9fa5b53b7afe00cdb59a3aebfdb4ce4c4e6dc77d3914ab078412: Status 404 returned error can't find the container with id 3c59e698003d9fa5b53b7afe00cdb59a3aebfdb4ce4c4e6dc77d3914ab078412
Mar 12 13:39:37.117130 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:37.117097 2570 generic.go:358] "Generic (PLEG): container finished" podID="7ac5590a-ef07-4cda-8357-78aae27ac5e8" containerID="6a3b327650374c52ca10c245e08254a0d958abecb9d823659f74c593d39b655b" exitCode=0
Mar 12 13:39:37.117292 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:37.117185 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qbtlm" event={"ID":"7ac5590a-ef07-4cda-8357-78aae27ac5e8","Type":"ContainerDied","Data":"6a3b327650374c52ca10c245e08254a0d958abecb9d823659f74c593d39b655b"}
Mar 12 13:39:37.119396 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:37.119368 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-d88pt" event={"ID":"a9dd6e04-4e16-4606-8ccd-f7892664a9fa","Type":"ContainerStarted","Data":"5f1e274534aad5e733e6edd86236ec059f6cf765a7b71b0bbf8407a0d9490390"}
Mar 12 13:39:37.120970 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:37.120937 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-6hlfq" event={"ID":"6ae56213-c71d-4f84-b4f2-b7874b87ad3d","Type":"ContainerStarted","Data":"3c59e698003d9fa5b53b7afe00cdb59a3aebfdb4ce4c4e6dc77d3914ab078412"}
Mar 12 13:39:37.121971 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:37.121946 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-md2rq" event={"ID":"e4b2741d-b458-4ac7-8509-5475bd034c73","Type":"ContainerStarted","Data":"493fc2bb562541abaa8a82f4850eb2d484bc28905fec2d86a1c868a94f0610b6"}
Mar 12 13:39:37.123263 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:37.123229 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xzph6" event={"ID":"ec4cf77c-b3ee-4a56-a3b4-73324af3351d","Type":"ContainerStarted","Data":"0c9206a3b38b1352f796090262832955919920ad81b0f009549283434837a73d"}
Mar 12 13:39:38.130845 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:38.130803 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xzph6" event={"ID":"ec4cf77c-b3ee-4a56-a3b4-73324af3351d","Type":"ContainerStarted","Data":"92e28befd7cb142b09e08e06b2ea482b5f1173cbe008318f608679d006473af5"}
Mar 12 13:39:38.131375 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:38.131065 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-xzph6"
Mar 12 13:39:38.132709 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:38.132674 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-gz92z" event={"ID":"65fec5eb-81fa-453b-bdc2-6972e50122f8","Type":"ContainerStarted","Data":"8dbe3f3e14dc312cebf943c72ffd94d4589b6c2554fd54ce2d48e4cfd3315c23"}
Mar 12 13:39:38.136665 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:38.136609 2570 generic.go:358] "Generic (PLEG): container finished" podID="7ac5590a-ef07-4cda-8357-78aae27ac5e8" containerID="347f15222d43a62f21a4b710250678c5918e1bbe57bae7f92e81d917f3c810c6" exitCode=0
Mar 12 13:39:38.137573 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:38.137551 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qbtlm" event={"ID":"7ac5590a-ef07-4cda-8357-78aae27ac5e8","Type":"ContainerDied","Data":"347f15222d43a62f21a4b710250678c5918e1bbe57bae7f92e81d917f3c810c6"}
Mar 12 13:39:38.148507 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:38.148448 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-xzph6" podStartSLOduration=2.611438476 podStartE2EDuration="7.148431576s" podCreationTimestamp="2026-03-12 13:39:31 +0000 UTC" firstStartedPulling="2026-03-12 13:39:32.115973542 +0000 UTC m=+52.891437195" lastFinishedPulling="2026-03-12 13:39:36.652966657 +0000 UTC m=+57.428430295" observedRunningTime="2026-03-12 13:39:38.148172774 +0000 UTC m=+58.923636417" watchObservedRunningTime="2026-03-12 13:39:38.148431576 +0000 UTC m=+58.923895220"
Mar 12 13:39:38.149184 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:38.149153 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-d88pt" podStartSLOduration=2.600254792 podStartE2EDuration="7.149142152s" podCreationTimestamp="2026-03-12 13:39:31 +0000 UTC" firstStartedPulling="2026-03-12 13:39:32.105335344 +0000 UTC m=+52.880798983" lastFinishedPulling="2026-03-12 13:39:36.654222713 +0000 UTC m=+57.429686343" observedRunningTime="2026-03-12 13:39:37.15696745 +0000 UTC m=+57.932431094" watchObservedRunningTime="2026-03-12 13:39:38.149142152 +0000 UTC m=+58.924605821"
Mar 12 13:39:39.147616 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:39.147583 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qbtlm" event={"ID":"7ac5590a-ef07-4cda-8357-78aae27ac5e8","Type":"ContainerStarted","Data":"bb5860c3eeaa61c6705b4a8ddefcc06495f0e6617669e7a04c27a404f0b9c210"}
Mar 12 13:39:39.150955 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:39.150923 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-md2rq" event={"ID":"e4b2741d-b458-4ac7-8509-5475bd034c73","Type":"ContainerStarted","Data":"ec2d4e3fcb4245c00e8f973d37ef32528f38b7e1f7050c3b2b13d67442c4eca4"}
Mar 12 13:39:39.151098 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:39.150962 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-md2rq" event={"ID":"e4b2741d-b458-4ac7-8509-5475bd034c73","Type":"ContainerStarted","Data":"9f73b2db5557b02a307673186a1eb9dae25f68dd053a2ae51863489a366e85c5"}
Mar 12 13:39:39.170949 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:39.170772 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-qbtlm" podStartSLOduration=4.559961661 podStartE2EDuration="40.17075083s" podCreationTimestamp="2026-03-12 13:38:59 +0000 UTC" firstStartedPulling="2026-03-12 13:39:01.042178952 +0000 UTC m=+21.817642575" lastFinishedPulling="2026-03-12 13:39:36.652968122 +0000 UTC m=+57.428431744" observedRunningTime="2026-03-12 13:39:39.16868286 +0000 UTC m=+59.944146504" watchObservedRunningTime="2026-03-12 13:39:39.17075083 +0000 UTC m=+59.946214473"
Mar 12 13:39:40.170190 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:40.170126 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-md2rq" podStartSLOduration=39.288166139 podStartE2EDuration="41.170104587s" podCreationTimestamp="2026-03-12 13:38:59 +0000 UTC" firstStartedPulling="2026-03-12 13:39:36.858744918 +0000 UTC m=+57.634208546" lastFinishedPulling="2026-03-12 13:39:38.74068336 +0000 UTC m=+59.516146994" observedRunningTime="2026-03-12 13:39:40.168729594 +0000 UTC m=+60.944193261" watchObservedRunningTime="2026-03-12 13:39:40.170104587 +0000 UTC m=+60.945568259"
Mar 12 13:39:42.160113 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:42.160069 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-gz92z" event={"ID":"65fec5eb-81fa-453b-bdc2-6972e50122f8","Type":"ContainerStarted","Data":"58e5ddf27429e2f9fc25d010adddbc4fc15d16ad6509b578da29b66c321c7269"}
Mar 12 13:39:42.161450 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:42.161423 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-6hlfq" event={"ID":"6ae56213-c71d-4f84-b4f2-b7874b87ad3d","Type":"ContainerStarted","Data":"b5c70ab67c14dbb7f92db43512bc0d8d668599109eee71a2a11a272e98a85bbc"}
Mar 12 13:39:42.161591 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:42.161544 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-6hlfq"
Mar 12 13:39:42.187799 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:42.187746 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-runtime-extractor-gz92z" podStartSLOduration=2.147472755 podStartE2EDuration="11.187731223s" podCreationTimestamp="2026-03-12 13:39:31 +0000 UTC" firstStartedPulling="2026-03-12 13:39:32.322907722 +0000 UTC m=+53.098371354" lastFinishedPulling="2026-03-12 13:39:41.363166187 +0000 UTC m=+62.138629822" observedRunningTime="2026-03-12 13:39:42.186433746 +0000 UTC m=+62.961897399" watchObservedRunningTime="2026-03-12 13:39:42.187731223 +0000 UTC m=+62.963194865"
Mar 12 13:39:42.213542 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:42.213492 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-target-6hlfq" podStartSLOduration=38.712536463 podStartE2EDuration="43.213477279s" podCreationTimestamp="2026-03-12 13:38:59 +0000 UTC" firstStartedPulling="2026-03-12 13:39:36.871809164 +0000 UTC m=+57.647272789" lastFinishedPulling="2026-03-12 13:39:41.372749968 +0000 UTC m=+62.148213605" observedRunningTime="2026-03-12 13:39:42.212816777 +0000 UTC m=+62.988280430" watchObservedRunningTime="2026-03-12 13:39:42.213477279 +0000 UTC m=+62.988940922"
Mar 12 13:39:46.318563 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:46.318537 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-d88pt_a9dd6e04-4e16-4606-8ccd-f7892664a9fa/serve-healthcheck-canary/0.log"
Mar 12 13:39:48.153862 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:48.153824 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-xzph6"
Mar 12 13:39:58.100465 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:39:58.100303 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h9fnd"
Mar 12 13:40:13.167362 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:13.167330 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-6hlfq"
Mar 12 13:40:31.014514 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.014481 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Mar 12 13:40:31.017074 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.017056 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Mar 12 13:40:31.019037 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.019007 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-web-config\""
Mar 12 13:40:31.019037 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.019007 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy\""
Mar 12 13:40:31.019232 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.019054 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-generated\""
Mar 12 13:40:31.019348 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.019332 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-tls-assets-0\""
Mar 12 13:40:31.019408 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.019389 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy-metric\""
Mar 12 13:40:31.019464 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.019409 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-dockercfg-55rp7\""
Mar 12 13:40:31.019464 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.019434 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-cluster-tls-config\""
Mar 12 13:40:31.019464 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.019449 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy-web\""
Mar 12 13:40:31.019588 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.019464
2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-tls\"" Mar 12 13:40:31.024556 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.024521 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"alertmanager-trusted-ca-bundle\"" Mar 12 13:40:31.033893 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.033865 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 12 13:40:31.088855 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.088814 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3f31c767-6746-4585-8144-952def904ca1-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.088855 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.088856 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3f31c767-6746-4585-8144-952def904ca1-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.089142 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.088880 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3f31c767-6746-4585-8144-952def904ca1-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.089142 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.088900 2570 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3f31c767-6746-4585-8144-952def904ca1-config-out\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.089142 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.088919 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3f31c767-6746-4585-8144-952def904ca1-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.089142 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.088993 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3f31c767-6746-4585-8144-952def904ca1-config-volume\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.089142 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.089030 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3f31c767-6746-4585-8144-952def904ca1-web-config\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.089142 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.089068 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3f31c767-6746-4585-8144-952def904ca1-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " 
pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.089142 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.089099 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfkdr\" (UniqueName: \"kubernetes.io/projected/3f31c767-6746-4585-8144-952def904ca1-kube-api-access-pfkdr\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.089142 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.089129 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f31c767-6746-4585-8144-952def904ca1-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.089423 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.089168 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3f31c767-6746-4585-8144-952def904ca1-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.089423 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.089196 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/3f31c767-6746-4585-8144-952def904ca1-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.089423 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.089257 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" 
(UniqueName: \"kubernetes.io/secret/3f31c767-6746-4585-8144-952def904ca1-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.190176 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.190145 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3f31c767-6746-4585-8144-952def904ca1-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.190354 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.190185 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3f31c767-6746-4585-8144-952def904ca1-config-out\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.190354 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.190205 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3f31c767-6746-4585-8144-952def904ca1-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.190354 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.190231 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3f31c767-6746-4585-8144-952def904ca1-config-volume\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.190354 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.190254 2570 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3f31c767-6746-4585-8144-952def904ca1-web-config\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.190354 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.190314 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3f31c767-6746-4585-8144-952def904ca1-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.190597 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.190362 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pfkdr\" (UniqueName: \"kubernetes.io/projected/3f31c767-6746-4585-8144-952def904ca1-kube-api-access-pfkdr\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.190597 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.190391 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f31c767-6746-4585-8144-952def904ca1-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.190597 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.190425 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3f31c767-6746-4585-8144-952def904ca1-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 
13:40:31.190812 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.190785 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/3f31c767-6746-4585-8144-952def904ca1-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.190881 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.190840 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3f31c767-6746-4585-8144-952def904ca1-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.190931 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.190886 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3f31c767-6746-4585-8144-952def904ca1-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.190931 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.190922 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3f31c767-6746-4585-8144-952def904ca1-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.191093 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.191067 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3f31c767-6746-4585-8144-952def904ca1-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: 
\"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.192638 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.192591 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3f31c767-6746-4585-8144-952def904ca1-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.193567 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.193535 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f31c767-6746-4585-8144-952def904ca1-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.195540 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.195481 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3f31c767-6746-4585-8144-952def904ca1-config-volume\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.195540 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.195509 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3f31c767-6746-4585-8144-952def904ca1-config-out\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.195851 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.195654 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3f31c767-6746-4585-8144-952def904ca1-secret-alertmanager-main-tls\") pod 
\"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.195851 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.195716 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3f31c767-6746-4585-8144-952def904ca1-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.195851 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.195769 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3f31c767-6746-4585-8144-952def904ca1-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.195851 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.195849 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3f31c767-6746-4585-8144-952def904ca1-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.196064 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.195870 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3f31c767-6746-4585-8144-952def904ca1-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.196220 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.196200 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" 
(UniqueName: \"kubernetes.io/secret/3f31c767-6746-4585-8144-952def904ca1-web-config\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.197240 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.197217 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/3f31c767-6746-4585-8144-952def904ca1-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.199454 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.199428 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfkdr\" (UniqueName: \"kubernetes.io/projected/3f31c767-6746-4585-8144-952def904ca1-kube-api-access-pfkdr\") pod \"alertmanager-main-0\" (UID: \"3f31c767-6746-4585-8144-952def904ca1\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.327544 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.327509 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 12 13:40:31.466686 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:31.466651 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 12 13:40:31.470574 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:40:31.470542 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f31c767_6746_4585_8144_952def904ca1.slice/crio-2e15935f116df56eaa514718d2b815590fd3b46ac82509051deaf3e852c02add WatchSource:0}: Error finding container 2e15935f116df56eaa514718d2b815590fd3b46ac82509051deaf3e852c02add: Status 404 returned error can't find the container with id 2e15935f116df56eaa514718d2b815590fd3b46ac82509051deaf3e852c02add Mar 12 13:40:32.292915 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:32.292877 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3f31c767-6746-4585-8144-952def904ca1","Type":"ContainerStarted","Data":"2e15935f116df56eaa514718d2b815590fd3b46ac82509051deaf3e852c02add"} Mar 12 13:40:33.296862 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:33.296821 2570 generic.go:358] "Generic (PLEG): container finished" podID="3f31c767-6746-4585-8144-952def904ca1" containerID="67e2108001bb3aadb585c1ffde1cd7e9ee77a8a40839ba21d89c9fd722c41b81" exitCode=0 Mar 12 13:40:33.297303 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:33.296905 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3f31c767-6746-4585-8144-952def904ca1","Type":"ContainerDied","Data":"67e2108001bb3aadb585c1ffde1cd7e9ee77a8a40839ba21d89c9fd722c41b81"} Mar 12 13:40:35.304828 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:35.304727 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"3f31c767-6746-4585-8144-952def904ca1","Type":"ContainerStarted","Data":"b965875d46ff7c0eaff242d25d52d574bfde40ed92da6d7a8ea73ecfcc5fdf5b"} Mar 12 13:40:35.304828 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:35.304768 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3f31c767-6746-4585-8144-952def904ca1","Type":"ContainerStarted","Data":"efe30997ccad6562de1cdae86b923731dacd3e9b16573c808972f1274ba091ae"} Mar 12 13:40:35.304828 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:35.304782 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3f31c767-6746-4585-8144-952def904ca1","Type":"ContainerStarted","Data":"e99a2e82c14776813d8b0d8da69d6c7aee561cbb5e99ff8e01751b530187e0f7"} Mar 12 13:40:35.304828 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:35.304792 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3f31c767-6746-4585-8144-952def904ca1","Type":"ContainerStarted","Data":"cc6000b19c634f7b2d9348e7cacbfb22f1c7f5993bdc5282ce0287f086c11077"} Mar 12 13:40:35.304828 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:35.304804 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3f31c767-6746-4585-8144-952def904ca1","Type":"ContainerStarted","Data":"393f2e04124b6f210bc6033ba3350daaea318fef35ab1f085b46bf6e4c7ffeb5"} Mar 12 13:40:36.310742 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:36.310697 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3f31c767-6746-4585-8144-952def904ca1","Type":"ContainerStarted","Data":"e4844eb7ee314a1e3990f9710b64121970672356d9a7603e0b3bc3da682f7643"} Mar 12 13:40:36.338648 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:40:36.338559 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=1.9077645680000002 podStartE2EDuration="6.338540503s" podCreationTimestamp="2026-03-12 13:40:30 +0000 UTC" firstStartedPulling="2026-03-12 13:40:31.472905662 +0000 UTC m=+112.248369286" lastFinishedPulling="2026-03-12 13:40:35.9036816 +0000 UTC m=+116.679145221" observedRunningTime="2026-03-12 13:40:36.337206424 +0000 UTC m=+117.112670081" watchObservedRunningTime="2026-03-12 13:40:36.338540503 +0000 UTC m=+117.114004150" Mar 12 13:43:39.731380 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:43:39.731349 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal_0e4e8f3d30bf75c22161da0d94e78eb7/kube-rbac-proxy-crio/2.log" Mar 12 13:43:39.731889 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:43:39.731416 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal_0e4e8f3d30bf75c22161da0d94e78eb7/kube-rbac-proxy-crio/2.log" Mar 12 13:43:39.733462 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:43:39.733444 2570 kubelet.go:1628] "Image garbage collection succeeded" Mar 12 13:45:41.205364 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:45:41.205326 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kube-system/global-pull-secret-syncer-p7x55"] Mar 12 13:45:41.207520 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:45:41.207501 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-p7x55" Mar 12 13:45:41.210169 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:45:41.210147 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"original-pull-secret\"" Mar 12 13:45:41.222031 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:45:41.221992 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-p7x55"] Mar 12 13:45:41.331665 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:45:41.331603 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/41d27fe4-a974-4c33-830e-130ef3c09adb-original-pull-secret\") pod \"global-pull-secret-syncer-p7x55\" (UID: \"41d27fe4-a974-4c33-830e-130ef3c09adb\") " pod="kube-system/global-pull-secret-syncer-p7x55" Mar 12 13:45:41.331861 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:45:41.331690 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/41d27fe4-a974-4c33-830e-130ef3c09adb-kubelet-config\") pod \"global-pull-secret-syncer-p7x55\" (UID: \"41d27fe4-a974-4c33-830e-130ef3c09adb\") " pod="kube-system/global-pull-secret-syncer-p7x55" Mar 12 13:45:41.331861 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:45:41.331724 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/41d27fe4-a974-4c33-830e-130ef3c09adb-dbus\") pod \"global-pull-secret-syncer-p7x55\" (UID: \"41d27fe4-a974-4c33-830e-130ef3c09adb\") " pod="kube-system/global-pull-secret-syncer-p7x55" Mar 12 13:45:41.432495 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:45:41.432461 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-config\" (UniqueName: 
\"kubernetes.io/host-path/41d27fe4-a974-4c33-830e-130ef3c09adb-kubelet-config\") pod \"global-pull-secret-syncer-p7x55\" (UID: \"41d27fe4-a974-4c33-830e-130ef3c09adb\") " pod="kube-system/global-pull-secret-syncer-p7x55" Mar 12 13:45:41.432591 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:45:41.432518 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/41d27fe4-a974-4c33-830e-130ef3c09adb-dbus\") pod \"global-pull-secret-syncer-p7x55\" (UID: \"41d27fe4-a974-4c33-830e-130ef3c09adb\") " pod="kube-system/global-pull-secret-syncer-p7x55" Mar 12 13:45:41.432591 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:45:41.432570 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/41d27fe4-a974-4c33-830e-130ef3c09adb-original-pull-secret\") pod \"global-pull-secret-syncer-p7x55\" (UID: \"41d27fe4-a974-4c33-830e-130ef3c09adb\") " pod="kube-system/global-pull-secret-syncer-p7x55" Mar 12 13:45:41.432705 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:45:41.432597 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/41d27fe4-a974-4c33-830e-130ef3c09adb-kubelet-config\") pod \"global-pull-secret-syncer-p7x55\" (UID: \"41d27fe4-a974-4c33-830e-130ef3c09adb\") " pod="kube-system/global-pull-secret-syncer-p7x55" Mar 12 13:45:41.432770 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:45:41.432751 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/41d27fe4-a974-4c33-830e-130ef3c09adb-dbus\") pod \"global-pull-secret-syncer-p7x55\" (UID: \"41d27fe4-a974-4c33-830e-130ef3c09adb\") " pod="kube-system/global-pull-secret-syncer-p7x55" Mar 12 13:45:41.435067 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:45:41.435051 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/41d27fe4-a974-4c33-830e-130ef3c09adb-original-pull-secret\") pod \"global-pull-secret-syncer-p7x55\" (UID: \"41d27fe4-a974-4c33-830e-130ef3c09adb\") " pod="kube-system/global-pull-secret-syncer-p7x55" Mar 12 13:45:41.516528 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:45:41.516438 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-p7x55" Mar 12 13:45:41.642014 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:45:41.641977 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-p7x55"] Mar 12 13:45:41.645969 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:45:41.645933 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41d27fe4_a974_4c33_830e_130ef3c09adb.slice/crio-923c6b6165141784032190a61398f3c32ba16a571270b57196aff7d43bcabec6 WatchSource:0}: Error finding container 923c6b6165141784032190a61398f3c32ba16a571270b57196aff7d43bcabec6: Status 404 returned error can't find the container with id 923c6b6165141784032190a61398f3c32ba16a571270b57196aff7d43bcabec6 Mar 12 13:45:41.647585 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:45:41.647568 2570 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 13:45:42.085250 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:45:42.085205 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-p7x55" event={"ID":"41d27fe4-a974-4c33-830e-130ef3c09adb","Type":"ContainerStarted","Data":"923c6b6165141784032190a61398f3c32ba16a571270b57196aff7d43bcabec6"} Mar 12 13:45:46.097929 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:45:46.097892 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-p7x55" 
event={"ID":"41d27fe4-a974-4c33-830e-130ef3c09adb","Type":"ContainerStarted","Data":"07bdb35fcab8e5042a2c8c9eb1424e8ac9d77a5cfa453fd87689037e29a9698c"} Mar 12 13:45:46.113285 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:45:46.113222 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/global-pull-secret-syncer-p7x55" podStartSLOduration=1.138704762 podStartE2EDuration="5.113201654s" podCreationTimestamp="2026-03-12 13:45:41 +0000 UTC" firstStartedPulling="2026-03-12 13:45:41.647721328 +0000 UTC m=+422.423184950" lastFinishedPulling="2026-03-12 13:45:45.622218218 +0000 UTC m=+426.397681842" observedRunningTime="2026-03-12 13:45:46.112768524 +0000 UTC m=+426.888232167" watchObservedRunningTime="2026-03-12 13:45:46.113201654 +0000 UTC m=+426.888665297" Mar 12 13:46:19.592156 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:19.592078 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56z84c"] Mar 12 13:46:19.595356 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:19.595340 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56z84c" Mar 12 13:46:19.598265 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:19.598243 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Mar 12 13:46:19.598710 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:19.598692 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-98zp2\"" Mar 12 13:46:19.598803 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:19.598695 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Mar 12 13:46:19.612327 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:19.612306 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56z84c"] Mar 12 13:46:19.698333 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:19.698298 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29xgf\" (UniqueName: \"kubernetes.io/projected/ff212bae-80c8-4d3b-b1e3-a58ba601e983-kube-api-access-29xgf\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56z84c\" (UID: \"ff212bae-80c8-4d3b-b1e3-a58ba601e983\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56z84c" Mar 12 13:46:19.698333 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:19.698343 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ff212bae-80c8-4d3b-b1e3-a58ba601e983-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56z84c\" (UID: \"ff212bae-80c8-4d3b-b1e3-a58ba601e983\") " 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56z84c" Mar 12 13:46:19.698568 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:19.698372 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ff212bae-80c8-4d3b-b1e3-a58ba601e983-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56z84c\" (UID: \"ff212bae-80c8-4d3b-b1e3-a58ba601e983\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56z84c" Mar 12 13:46:19.799163 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:19.799118 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-29xgf\" (UniqueName: \"kubernetes.io/projected/ff212bae-80c8-4d3b-b1e3-a58ba601e983-kube-api-access-29xgf\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56z84c\" (UID: \"ff212bae-80c8-4d3b-b1e3-a58ba601e983\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56z84c" Mar 12 13:46:19.799163 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:19.799167 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ff212bae-80c8-4d3b-b1e3-a58ba601e983-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56z84c\" (UID: \"ff212bae-80c8-4d3b-b1e3-a58ba601e983\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56z84c" Mar 12 13:46:19.799364 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:19.799203 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ff212bae-80c8-4d3b-b1e3-a58ba601e983-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56z84c\" (UID: \"ff212bae-80c8-4d3b-b1e3-a58ba601e983\") " 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56z84c" Mar 12 13:46:19.799677 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:19.799656 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ff212bae-80c8-4d3b-b1e3-a58ba601e983-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56z84c\" (UID: \"ff212bae-80c8-4d3b-b1e3-a58ba601e983\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56z84c" Mar 12 13:46:19.799677 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:19.799669 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ff212bae-80c8-4d3b-b1e3-a58ba601e983-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56z84c\" (UID: \"ff212bae-80c8-4d3b-b1e3-a58ba601e983\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56z84c" Mar 12 13:46:19.808505 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:19.808476 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-29xgf\" (UniqueName: \"kubernetes.io/projected/ff212bae-80c8-4d3b-b1e3-a58ba601e983-kube-api-access-29xgf\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56z84c\" (UID: \"ff212bae-80c8-4d3b-b1e3-a58ba601e983\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56z84c" Mar 12 13:46:19.904832 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:19.904742 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56z84c" Mar 12 13:46:20.036834 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:20.036800 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56z84c"] Mar 12 13:46:20.042349 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:46:20.042321 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff212bae_80c8_4d3b_b1e3_a58ba601e983.slice/crio-7e7ab6fc8f75db73e094fa5bb42dcce6bf854d20895aafef0cf8591d7443a2bb WatchSource:0}: Error finding container 7e7ab6fc8f75db73e094fa5bb42dcce6bf854d20895aafef0cf8591d7443a2bb: Status 404 returned error can't find the container with id 7e7ab6fc8f75db73e094fa5bb42dcce6bf854d20895aafef0cf8591d7443a2bb Mar 12 13:46:20.191852 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:20.191768 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56z84c" event={"ID":"ff212bae-80c8-4d3b-b1e3-a58ba601e983","Type":"ContainerStarted","Data":"7e7ab6fc8f75db73e094fa5bb42dcce6bf854d20895aafef0cf8591d7443a2bb"} Mar 12 13:46:25.207562 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:25.207526 2570 generic.go:358] "Generic (PLEG): container finished" podID="ff212bae-80c8-4d3b-b1e3-a58ba601e983" containerID="ae9b748f2fb898ec6952684f13afe45a21baf39bf373ce5f1b04413b2a1b5603" exitCode=0 Mar 12 13:46:25.208021 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:25.207577 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56z84c" event={"ID":"ff212bae-80c8-4d3b-b1e3-a58ba601e983","Type":"ContainerDied","Data":"ae9b748f2fb898ec6952684f13afe45a21baf39bf373ce5f1b04413b2a1b5603"} Mar 12 13:46:27.214844 ip-10-0-142-111 kubenswrapper[2570]: 
I0312 13:46:27.214801 2570 generic.go:358] "Generic (PLEG): container finished" podID="ff212bae-80c8-4d3b-b1e3-a58ba601e983" containerID="25f787147c5f525085c80fbd375f1f5f5076d19eedf67aa824625200f56f8352" exitCode=0 Mar 12 13:46:27.215232 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:27.214870 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56z84c" event={"ID":"ff212bae-80c8-4d3b-b1e3-a58ba601e983","Type":"ContainerDied","Data":"25f787147c5f525085c80fbd375f1f5f5076d19eedf67aa824625200f56f8352"} Mar 12 13:46:33.234574 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:33.234538 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56z84c" event={"ID":"ff212bae-80c8-4d3b-b1e3-a58ba601e983","Type":"ContainerStarted","Data":"14743944cf089c0b9c2938ad04f06f246c21dcdfbc7031d97d62f619c1201fde"} Mar 12 13:46:33.267192 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:33.267130 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56z84c" podStartSLOduration=1.226515377 podStartE2EDuration="14.267113897s" podCreationTimestamp="2026-03-12 13:46:19 +0000 UTC" firstStartedPulling="2026-03-12 13:46:20.04415517 +0000 UTC m=+460.819618791" lastFinishedPulling="2026-03-12 13:46:33.084753683 +0000 UTC m=+473.860217311" observedRunningTime="2026-03-12 13:46:33.264732607 +0000 UTC m=+474.040196250" watchObservedRunningTime="2026-03-12 13:46:33.267113897 +0000 UTC m=+474.042577540" Mar 12 13:46:34.239794 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:34.239757 2570 generic.go:358] "Generic (PLEG): container finished" podID="ff212bae-80c8-4d3b-b1e3-a58ba601e983" containerID="14743944cf089c0b9c2938ad04f06f246c21dcdfbc7031d97d62f619c1201fde" exitCode=0 Mar 12 13:46:34.240183 ip-10-0-142-111 kubenswrapper[2570]: 
I0312 13:46:34.239837 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56z84c" event={"ID":"ff212bae-80c8-4d3b-b1e3-a58ba601e983","Type":"ContainerDied","Data":"14743944cf089c0b9c2938ad04f06f246c21dcdfbc7031d97d62f619c1201fde"} Mar 12 13:46:35.368486 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:35.368460 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56z84c" Mar 12 13:46:35.422679 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:35.422611 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29xgf\" (UniqueName: \"kubernetes.io/projected/ff212bae-80c8-4d3b-b1e3-a58ba601e983-kube-api-access-29xgf\") pod \"ff212bae-80c8-4d3b-b1e3-a58ba601e983\" (UID: \"ff212bae-80c8-4d3b-b1e3-a58ba601e983\") " Mar 12 13:46:35.422679 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:35.422672 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ff212bae-80c8-4d3b-b1e3-a58ba601e983-util\") pod \"ff212bae-80c8-4d3b-b1e3-a58ba601e983\" (UID: \"ff212bae-80c8-4d3b-b1e3-a58ba601e983\") " Mar 12 13:46:35.422881 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:35.422704 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ff212bae-80c8-4d3b-b1e3-a58ba601e983-bundle\") pod \"ff212bae-80c8-4d3b-b1e3-a58ba601e983\" (UID: \"ff212bae-80c8-4d3b-b1e3-a58ba601e983\") " Mar 12 13:46:35.423324 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:35.423254 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff212bae-80c8-4d3b-b1e3-a58ba601e983-bundle" (OuterVolumeSpecName: "bundle") pod "ff212bae-80c8-4d3b-b1e3-a58ba601e983" (UID: 
"ff212bae-80c8-4d3b-b1e3-a58ba601e983"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 12 13:46:35.425106 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:35.425078 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff212bae-80c8-4d3b-b1e3-a58ba601e983-kube-api-access-29xgf" (OuterVolumeSpecName: "kube-api-access-29xgf") pod "ff212bae-80c8-4d3b-b1e3-a58ba601e983" (UID: "ff212bae-80c8-4d3b-b1e3-a58ba601e983"). InnerVolumeSpecName "kube-api-access-29xgf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 12 13:46:35.427862 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:35.427812 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff212bae-80c8-4d3b-b1e3-a58ba601e983-util" (OuterVolumeSpecName: "util") pod "ff212bae-80c8-4d3b-b1e3-a58ba601e983" (UID: "ff212bae-80c8-4d3b-b1e3-a58ba601e983"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 12 13:46:35.523558 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:35.523450 2570 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-29xgf\" (UniqueName: \"kubernetes.io/projected/ff212bae-80c8-4d3b-b1e3-a58ba601e983-kube-api-access-29xgf\") on node \"ip-10-0-142-111.ec2.internal\" DevicePath \"\"" Mar 12 13:46:35.523558 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:35.523500 2570 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ff212bae-80c8-4d3b-b1e3-a58ba601e983-util\") on node \"ip-10-0-142-111.ec2.internal\" DevicePath \"\"" Mar 12 13:46:35.523558 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:35.523511 2570 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ff212bae-80c8-4d3b-b1e3-a58ba601e983-bundle\") on node \"ip-10-0-142-111.ec2.internal\" DevicePath \"\"" Mar 12 13:46:36.247645 ip-10-0-142-111 
kubenswrapper[2570]: I0312 13:46:36.247540 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56z84c" event={"ID":"ff212bae-80c8-4d3b-b1e3-a58ba601e983","Type":"ContainerDied","Data":"7e7ab6fc8f75db73e094fa5bb42dcce6bf854d20895aafef0cf8591d7443a2bb"} Mar 12 13:46:36.247645 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:36.247571 2570 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e7ab6fc8f75db73e094fa5bb42dcce6bf854d20895aafef0cf8591d7443a2bb" Mar 12 13:46:36.247645 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:36.247597 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56z84c" Mar 12 13:46:42.492124 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:42.492092 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-4gp6q"] Mar 12 13:46:42.492540 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:42.492343 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ff212bae-80c8-4d3b-b1e3-a58ba601e983" containerName="util" Mar 12 13:46:42.492540 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:42.492355 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff212bae-80c8-4d3b-b1e3-a58ba601e983" containerName="util" Mar 12 13:46:42.492540 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:42.492369 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ff212bae-80c8-4d3b-b1e3-a58ba601e983" containerName="extract" Mar 12 13:46:42.492540 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:42.492374 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff212bae-80c8-4d3b-b1e3-a58ba601e983" containerName="extract" Mar 12 13:46:42.492540 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:42.492383 2570 
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ff212bae-80c8-4d3b-b1e3-a58ba601e983" containerName="pull" Mar 12 13:46:42.492540 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:42.492389 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff212bae-80c8-4d3b-b1e3-a58ba601e983" containerName="pull" Mar 12 13:46:42.492540 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:42.492443 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="ff212bae-80c8-4d3b-b1e3-a58ba601e983" containerName="extract" Mar 12 13:46:42.534443 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:42.534395 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-4gp6q"] Mar 12 13:46:42.534615 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:42.534538 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-4gp6q" Mar 12 13:46:42.537289 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:42.537251 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Mar 12 13:46:42.537428 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:42.537307 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Mar 12 13:46:42.537428 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:42.537322 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-zm9nr\"" Mar 12 13:46:42.571039 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:42.571001 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/44bf3b0c-1fb2-40a4-bc96-7ab3daa5ffb0-tmp\") pod 
\"cert-manager-operator-controller-manager-7c5b8bd68-4gp6q\" (UID: \"44bf3b0c-1fb2-40a4-bc96-7ab3daa5ffb0\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-4gp6q" Mar 12 13:46:42.571225 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:42.571054 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f7kz\" (UniqueName: \"kubernetes.io/projected/44bf3b0c-1fb2-40a4-bc96-7ab3daa5ffb0-kube-api-access-6f7kz\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-4gp6q\" (UID: \"44bf3b0c-1fb2-40a4-bc96-7ab3daa5ffb0\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-4gp6q" Mar 12 13:46:42.671804 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:42.671762 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/44bf3b0c-1fb2-40a4-bc96-7ab3daa5ffb0-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-4gp6q\" (UID: \"44bf3b0c-1fb2-40a4-bc96-7ab3daa5ffb0\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-4gp6q" Mar 12 13:46:42.671992 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:42.671825 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6f7kz\" (UniqueName: \"kubernetes.io/projected/44bf3b0c-1fb2-40a4-bc96-7ab3daa5ffb0-kube-api-access-6f7kz\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-4gp6q\" (UID: \"44bf3b0c-1fb2-40a4-bc96-7ab3daa5ffb0\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-4gp6q" Mar 12 13:46:42.672174 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:42.672152 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/44bf3b0c-1fb2-40a4-bc96-7ab3daa5ffb0-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-4gp6q\" (UID: 
\"44bf3b0c-1fb2-40a4-bc96-7ab3daa5ffb0\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-4gp6q" Mar 12 13:46:42.684940 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:42.684903 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f7kz\" (UniqueName: \"kubernetes.io/projected/44bf3b0c-1fb2-40a4-bc96-7ab3daa5ffb0-kube-api-access-6f7kz\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-4gp6q\" (UID: \"44bf3b0c-1fb2-40a4-bc96-7ab3daa5ffb0\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-4gp6q" Mar 12 13:46:42.844720 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:42.844680 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-4gp6q" Mar 12 13:46:42.979596 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:42.979560 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-4gp6q"] Mar 12 13:46:42.983099 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:46:42.983061 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44bf3b0c_1fb2_40a4_bc96_7ab3daa5ffb0.slice/crio-b615ac2812c51ae241cba523191d92fab54a816ef389a334c26443ca2c63a13c WatchSource:0}: Error finding container b615ac2812c51ae241cba523191d92fab54a816ef389a334c26443ca2c63a13c: Status 404 returned error can't find the container with id b615ac2812c51ae241cba523191d92fab54a816ef389a334c26443ca2c63a13c Mar 12 13:46:43.269458 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:43.269371 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-4gp6q" 
event={"ID":"44bf3b0c-1fb2-40a4-bc96-7ab3daa5ffb0","Type":"ContainerStarted","Data":"b615ac2812c51ae241cba523191d92fab54a816ef389a334c26443ca2c63a13c"} Mar 12 13:46:46.279704 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:46.279658 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-4gp6q" event={"ID":"44bf3b0c-1fb2-40a4-bc96-7ab3daa5ffb0","Type":"ContainerStarted","Data":"346cf0ebcf658a255ecdd89e1ce6e4fe1f5b9840e8b37112cd3452ac2bf764c3"} Mar 12 13:46:46.300918 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:46.300868 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-4gp6q" podStartSLOduration=1.993483779 podStartE2EDuration="4.300853497s" podCreationTimestamp="2026-03-12 13:46:42 +0000 UTC" firstStartedPulling="2026-03-12 13:46:42.985562639 +0000 UTC m=+483.761026260" lastFinishedPulling="2026-03-12 13:46:45.29293234 +0000 UTC m=+486.068395978" observedRunningTime="2026-03-12 13:46:46.299461522 +0000 UTC m=+487.074925165" watchObservedRunningTime="2026-03-12 13:46:46.300853497 +0000 UTC m=+487.076317139" Mar 12 13:46:49.439700 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:49.439653 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-2fj9h"] Mar 12 13:46:49.444353 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:49.444326 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-2fj9h" Mar 12 13:46:49.453158 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:49.453133 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Mar 12 13:46:49.453342 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:49.453310 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-l26nc\"" Mar 12 13:46:49.453342 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:49.453306 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Mar 12 13:46:49.476868 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:49.476834 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-2fj9h"] Mar 12 13:46:49.522804 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:49.522756 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e8211e3e-c606-4fdf-b45f-ff256f6cd9f4-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-2fj9h\" (UID: \"e8211e3e-c606-4fdf-b45f-ff256f6cd9f4\") " pod="cert-manager/cert-manager-webhook-597b96b99b-2fj9h" Mar 12 13:46:49.522998 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:49.522827 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5lhl\" (UniqueName: \"kubernetes.io/projected/e8211e3e-c606-4fdf-b45f-ff256f6cd9f4-kube-api-access-g5lhl\") pod \"cert-manager-webhook-597b96b99b-2fj9h\" (UID: \"e8211e3e-c606-4fdf-b45f-ff256f6cd9f4\") " pod="cert-manager/cert-manager-webhook-597b96b99b-2fj9h" Mar 12 13:46:49.623614 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:49.623572 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-g5lhl\" (UniqueName: \"kubernetes.io/projected/e8211e3e-c606-4fdf-b45f-ff256f6cd9f4-kube-api-access-g5lhl\") pod \"cert-manager-webhook-597b96b99b-2fj9h\" (UID: \"e8211e3e-c606-4fdf-b45f-ff256f6cd9f4\") " pod="cert-manager/cert-manager-webhook-597b96b99b-2fj9h" Mar 12 13:46:49.623856 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:49.623651 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e8211e3e-c606-4fdf-b45f-ff256f6cd9f4-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-2fj9h\" (UID: \"e8211e3e-c606-4fdf-b45f-ff256f6cd9f4\") " pod="cert-manager/cert-manager-webhook-597b96b99b-2fj9h" Mar 12 13:46:49.636066 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:49.636033 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e8211e3e-c606-4fdf-b45f-ff256f6cd9f4-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-2fj9h\" (UID: \"e8211e3e-c606-4fdf-b45f-ff256f6cd9f4\") " pod="cert-manager/cert-manager-webhook-597b96b99b-2fj9h" Mar 12 13:46:49.637082 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:49.637061 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5lhl\" (UniqueName: \"kubernetes.io/projected/e8211e3e-c606-4fdf-b45f-ff256f6cd9f4-kube-api-access-g5lhl\") pod \"cert-manager-webhook-597b96b99b-2fj9h\" (UID: \"e8211e3e-c606-4fdf-b45f-ff256f6cd9f4\") " pod="cert-manager/cert-manager-webhook-597b96b99b-2fj9h" Mar 12 13:46:49.769764 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:49.769672 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-2fj9h" Mar 12 13:46:49.909960 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:49.909924 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-2fj9h"] Mar 12 13:46:49.914706 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:46:49.914676 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8211e3e_c606_4fdf_b45f_ff256f6cd9f4.slice/crio-a28df3358253ecac43d63808b6b6772c95eba864888d7be67cedfb164e191fc6 WatchSource:0}: Error finding container a28df3358253ecac43d63808b6b6772c95eba864888d7be67cedfb164e191fc6: Status 404 returned error can't find the container with id a28df3358253ecac43d63808b6b6772c95eba864888d7be67cedfb164e191fc6 Mar 12 13:46:50.291379 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:50.291343 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-2fj9h" event={"ID":"e8211e3e-c606-4fdf-b45f-ff256f6cd9f4","Type":"ContainerStarted","Data":"a28df3358253ecac43d63808b6b6772c95eba864888d7be67cedfb164e191fc6"} Mar 12 13:46:52.156753 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:52.156721 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-8w8d7"] Mar 12 13:46:52.160217 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:52.160192 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-8w8d7"
Mar 12 13:46:52.163254 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:52.163222 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-rgfnq\""
Mar 12 13:46:52.173436 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:52.173409 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-8w8d7"]
Mar 12 13:46:52.243445 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:52.243406 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r2lc\" (UniqueName: \"kubernetes.io/projected/bba30c46-b14d-44ec-9470-23c181f637b5-kube-api-access-7r2lc\") pod \"cert-manager-cainjector-8966b78d4-8w8d7\" (UID: \"bba30c46-b14d-44ec-9470-23c181f637b5\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-8w8d7"
Mar 12 13:46:52.243653 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:52.243467 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bba30c46-b14d-44ec-9470-23c181f637b5-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-8w8d7\" (UID: \"bba30c46-b14d-44ec-9470-23c181f637b5\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-8w8d7"
Mar 12 13:46:52.344689 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:52.344646 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7r2lc\" (UniqueName: \"kubernetes.io/projected/bba30c46-b14d-44ec-9470-23c181f637b5-kube-api-access-7r2lc\") pod \"cert-manager-cainjector-8966b78d4-8w8d7\" (UID: \"bba30c46-b14d-44ec-9470-23c181f637b5\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-8w8d7"
Mar 12 13:46:52.344880 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:52.344722 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bba30c46-b14d-44ec-9470-23c181f637b5-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-8w8d7\" (UID: \"bba30c46-b14d-44ec-9470-23c181f637b5\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-8w8d7"
Mar 12 13:46:52.359683 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:52.359652 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r2lc\" (UniqueName: \"kubernetes.io/projected/bba30c46-b14d-44ec-9470-23c181f637b5-kube-api-access-7r2lc\") pod \"cert-manager-cainjector-8966b78d4-8w8d7\" (UID: \"bba30c46-b14d-44ec-9470-23c181f637b5\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-8w8d7"
Mar 12 13:46:52.367512 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:52.367472 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bba30c46-b14d-44ec-9470-23c181f637b5-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-8w8d7\" (UID: \"bba30c46-b14d-44ec-9470-23c181f637b5\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-8w8d7"
Mar 12 13:46:52.470387 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:52.470303 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-8w8d7"
Mar 12 13:46:52.642744 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:46:52.642713 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbba30c46_b14d_44ec_9470_23c181f637b5.slice/crio-e6b7ae7f8a82a78d5ee5abface79d0fb3ffdc0a9a71b762a058a29b0719d1b5e WatchSource:0}: Error finding container e6b7ae7f8a82a78d5ee5abface79d0fb3ffdc0a9a71b762a058a29b0719d1b5e: Status 404 returned error can't find the container with id e6b7ae7f8a82a78d5ee5abface79d0fb3ffdc0a9a71b762a058a29b0719d1b5e
Mar 12 13:46:52.643919 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:52.643900 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-8w8d7"]
Mar 12 13:46:53.302128 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:53.302083 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-2fj9h" event={"ID":"e8211e3e-c606-4fdf-b45f-ff256f6cd9f4","Type":"ContainerStarted","Data":"5236be8d606f48014cbbe98a2e79fb3f5c6ec610f28975b4bd8d22320fd8b648"}
Mar 12 13:46:53.302674 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:53.302309 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-597b96b99b-2fj9h"
Mar 12 13:46:53.303566 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:53.303542 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-8w8d7" event={"ID":"bba30c46-b14d-44ec-9470-23c181f637b5","Type":"ContainerStarted","Data":"8bef88eb47bba496af8204c1ae816e0f3e23ffa1a7530f386d661521f395d3cf"}
Mar 12 13:46:53.303682 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:53.303571 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-8w8d7" event={"ID":"bba30c46-b14d-44ec-9470-23c181f637b5","Type":"ContainerStarted","Data":"e6b7ae7f8a82a78d5ee5abface79d0fb3ffdc0a9a71b762a058a29b0719d1b5e"}
Mar 12 13:46:53.370254 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:53.370198 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-597b96b99b-2fj9h" podStartSLOduration=1.793821974 podStartE2EDuration="4.370180763s" podCreationTimestamp="2026-03-12 13:46:49 +0000 UTC" firstStartedPulling="2026-03-12 13:46:49.916730057 +0000 UTC m=+490.692193678" lastFinishedPulling="2026-03-12 13:46:52.493088842 +0000 UTC m=+493.268552467" observedRunningTime="2026-03-12 13:46:53.333904693 +0000 UTC m=+494.109368553" watchObservedRunningTime="2026-03-12 13:46:53.370180763 +0000 UTC m=+494.145644405"
Mar 12 13:46:53.371181 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:53.371146 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-8966b78d4-8w8d7" podStartSLOduration=1.371133736 podStartE2EDuration="1.371133736s" podCreationTimestamp="2026-03-12 13:46:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:46:53.369323473 +0000 UTC m=+494.144787137" watchObservedRunningTime="2026-03-12 13:46:53.371133736 +0000 UTC m=+494.146597379"
Mar 12 13:46:58.267456 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:58.267416 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-759f64656b-88ppd"]
Mar 12 13:46:58.270326 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:58.270306 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-759f64656b-88ppd"
Mar 12 13:46:58.272419 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:58.272393 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-fsvq4\""
Mar 12 13:46:58.278194 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:58.278153 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-88ppd"]
Mar 12 13:46:58.388086 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:58.388052 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2cf6be3f-db5a-4db3-a09e-e22c78e5f56e-bound-sa-token\") pod \"cert-manager-759f64656b-88ppd\" (UID: \"2cf6be3f-db5a-4db3-a09e-e22c78e5f56e\") " pod="cert-manager/cert-manager-759f64656b-88ppd"
Mar 12 13:46:58.388271 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:58.388113 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk8q7\" (UniqueName: \"kubernetes.io/projected/2cf6be3f-db5a-4db3-a09e-e22c78e5f56e-kube-api-access-qk8q7\") pod \"cert-manager-759f64656b-88ppd\" (UID: \"2cf6be3f-db5a-4db3-a09e-e22c78e5f56e\") " pod="cert-manager/cert-manager-759f64656b-88ppd"
Mar 12 13:46:58.488988 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:58.488948 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qk8q7\" (UniqueName: \"kubernetes.io/projected/2cf6be3f-db5a-4db3-a09e-e22c78e5f56e-kube-api-access-qk8q7\") pod \"cert-manager-759f64656b-88ppd\" (UID: \"2cf6be3f-db5a-4db3-a09e-e22c78e5f56e\") " pod="cert-manager/cert-manager-759f64656b-88ppd"
Mar 12 13:46:58.489158 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:58.489000 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2cf6be3f-db5a-4db3-a09e-e22c78e5f56e-bound-sa-token\") pod \"cert-manager-759f64656b-88ppd\" (UID: \"2cf6be3f-db5a-4db3-a09e-e22c78e5f56e\") " pod="cert-manager/cert-manager-759f64656b-88ppd"
Mar 12 13:46:58.498833 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:58.498800 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2cf6be3f-db5a-4db3-a09e-e22c78e5f56e-bound-sa-token\") pod \"cert-manager-759f64656b-88ppd\" (UID: \"2cf6be3f-db5a-4db3-a09e-e22c78e5f56e\") " pod="cert-manager/cert-manager-759f64656b-88ppd"
Mar 12 13:46:58.499934 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:58.499911 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk8q7\" (UniqueName: \"kubernetes.io/projected/2cf6be3f-db5a-4db3-a09e-e22c78e5f56e-kube-api-access-qk8q7\") pod \"cert-manager-759f64656b-88ppd\" (UID: \"2cf6be3f-db5a-4db3-a09e-e22c78e5f56e\") " pod="cert-manager/cert-manager-759f64656b-88ppd"
Mar 12 13:46:58.581435 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:58.581399 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-759f64656b-88ppd"
Mar 12 13:46:58.711269 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:58.711234 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-88ppd"]
Mar 12 13:46:58.714415 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:46:58.714381 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cf6be3f_db5a_4db3_a09e_e22c78e5f56e.slice/crio-24f94ab5d3ba06252eaf73fc886f23c172783d3e4a22269d5da3c56d97b83009 WatchSource:0}: Error finding container 24f94ab5d3ba06252eaf73fc886f23c172783d3e4a22269d5da3c56d97b83009: Status 404 returned error can't find the container with id 24f94ab5d3ba06252eaf73fc886f23c172783d3e4a22269d5da3c56d97b83009
Mar 12 13:46:59.309538 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:59.309498 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-597b96b99b-2fj9h"
Mar 12 13:46:59.321318 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:59.321285 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-88ppd" event={"ID":"2cf6be3f-db5a-4db3-a09e-e22c78e5f56e","Type":"ContainerStarted","Data":"ea8cea9fa002f355ba3e9e09bad43c4fdf06f381c1c529de922604b104faf671"}
Mar 12 13:46:59.321318 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:59.321320 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-88ppd" event={"ID":"2cf6be3f-db5a-4db3-a09e-e22c78e5f56e","Type":"ContainerStarted","Data":"24f94ab5d3ba06252eaf73fc886f23c172783d3e4a22269d5da3c56d97b83009"}
Mar 12 13:46:59.343355 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:59.343294 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-759f64656b-88ppd" podStartSLOduration=1.343269997 podStartE2EDuration="1.343269997s" podCreationTimestamp="2026-03-12 13:46:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:46:59.340937151 +0000 UTC m=+500.116400795" watchObservedRunningTime="2026-03-12 13:46:59.343269997 +0000 UTC m=+500.118733641"
Mar 12 13:46:59.693502 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:59.693424 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78ef7sx7"]
Mar 12 13:46:59.696750 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:59.696729 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78ef7sx7"
Mar 12 13:46:59.699056 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:59.699031 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Mar 12 13:46:59.699056 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:59.699049 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Mar 12 13:46:59.699215 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:59.699081 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-98zp2\""
Mar 12 13:46:59.705017 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:59.704988 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78ef7sx7"]
Mar 12 13:46:59.799299 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:59.799259 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50779e1b-ab5c-4a2e-8141-99af2d9296c8-bundle\") pod \"c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78ef7sx7\" (UID: \"50779e1b-ab5c-4a2e-8141-99af2d9296c8\") " pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78ef7sx7"
Mar 12 13:46:59.799299 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:59.799299 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gddkh\" (UniqueName: \"kubernetes.io/projected/50779e1b-ab5c-4a2e-8141-99af2d9296c8-kube-api-access-gddkh\") pod \"c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78ef7sx7\" (UID: \"50779e1b-ab5c-4a2e-8141-99af2d9296c8\") " pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78ef7sx7"
Mar 12 13:46:59.799509 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:59.799336 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50779e1b-ab5c-4a2e-8141-99af2d9296c8-util\") pod \"c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78ef7sx7\" (UID: \"50779e1b-ab5c-4a2e-8141-99af2d9296c8\") " pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78ef7sx7"
Mar 12 13:46:59.900563 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:59.900524 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50779e1b-ab5c-4a2e-8141-99af2d9296c8-bundle\") pod \"c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78ef7sx7\" (UID: \"50779e1b-ab5c-4a2e-8141-99af2d9296c8\") " pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78ef7sx7"
Mar 12 13:46:59.900563 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:59.900569 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gddkh\" (UniqueName: \"kubernetes.io/projected/50779e1b-ab5c-4a2e-8141-99af2d9296c8-kube-api-access-gddkh\") pod \"c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78ef7sx7\" (UID: \"50779e1b-ab5c-4a2e-8141-99af2d9296c8\") " pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78ef7sx7"
Mar 12 13:46:59.900884 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:59.900603 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50779e1b-ab5c-4a2e-8141-99af2d9296c8-util\") pod \"c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78ef7sx7\" (UID: \"50779e1b-ab5c-4a2e-8141-99af2d9296c8\") " pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78ef7sx7"
Mar 12 13:46:59.900974 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:59.900957 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50779e1b-ab5c-4a2e-8141-99af2d9296c8-util\") pod \"c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78ef7sx7\" (UID: \"50779e1b-ab5c-4a2e-8141-99af2d9296c8\") " pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78ef7sx7"
Mar 12 13:46:59.901058 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:59.901037 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50779e1b-ab5c-4a2e-8141-99af2d9296c8-bundle\") pod \"c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78ef7sx7\" (UID: \"50779e1b-ab5c-4a2e-8141-99af2d9296c8\") " pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78ef7sx7"
Mar 12 13:46:59.910085 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:46:59.910055 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gddkh\" (UniqueName: \"kubernetes.io/projected/50779e1b-ab5c-4a2e-8141-99af2d9296c8-kube-api-access-gddkh\") pod \"c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78ef7sx7\" (UID: \"50779e1b-ab5c-4a2e-8141-99af2d9296c8\") " pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78ef7sx7"
Mar 12 13:47:00.007889 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:00.007779 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78ef7sx7"
Mar 12 13:47:00.140966 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:00.140930 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78ef7sx7"]
Mar 12 13:47:00.145242 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:47:00.145210 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50779e1b_ab5c_4a2e_8141_99af2d9296c8.slice/crio-f0f86cae4f289854155ce8840b535b9069020ab6f1529524adc93fb2b076aaf6 WatchSource:0}: Error finding container f0f86cae4f289854155ce8840b535b9069020ab6f1529524adc93fb2b076aaf6: Status 404 returned error can't find the container with id f0f86cae4f289854155ce8840b535b9069020ab6f1529524adc93fb2b076aaf6
Mar 12 13:47:00.325494 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:00.325455 2570 generic.go:358] "Generic (PLEG): container finished" podID="50779e1b-ab5c-4a2e-8141-99af2d9296c8" containerID="496f07d2664f634b45ea7c2fe61b7ed1042debe72d74233fc467915335226af6" exitCode=0
Mar 12 13:47:00.325994 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:00.325501 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78ef7sx7" event={"ID":"50779e1b-ab5c-4a2e-8141-99af2d9296c8","Type":"ContainerDied","Data":"496f07d2664f634b45ea7c2fe61b7ed1042debe72d74233fc467915335226af6"}
Mar 12 13:47:00.325994 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:00.325538 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78ef7sx7" event={"ID":"50779e1b-ab5c-4a2e-8141-99af2d9296c8","Type":"ContainerStarted","Data":"f0f86cae4f289854155ce8840b535b9069020ab6f1529524adc93fb2b076aaf6"}
Mar 12 13:47:02.333599 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:02.333557 2570 generic.go:358] "Generic (PLEG): container finished" podID="50779e1b-ab5c-4a2e-8141-99af2d9296c8" containerID="e1bb9ebbdc07fdb9209e409eb932c4302a241215a1c4df6adaebce9a1204a31a" exitCode=0
Mar 12 13:47:02.334024 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:02.333694 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78ef7sx7" event={"ID":"50779e1b-ab5c-4a2e-8141-99af2d9296c8","Type":"ContainerDied","Data":"e1bb9ebbdc07fdb9209e409eb932c4302a241215a1c4df6adaebce9a1204a31a"}
Mar 12 13:47:03.339194 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:03.339159 2570 generic.go:358] "Generic (PLEG): container finished" podID="50779e1b-ab5c-4a2e-8141-99af2d9296c8" containerID="aeafd229d39768e3ef6fada6bdf7be59aedb9d9b50476cb5d08383f93bcc34cd" exitCode=0
Mar 12 13:47:03.339768 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:03.339233 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78ef7sx7" event={"ID":"50779e1b-ab5c-4a2e-8141-99af2d9296c8","Type":"ContainerDied","Data":"aeafd229d39768e3ef6fada6bdf7be59aedb9d9b50476cb5d08383f93bcc34cd"}
Mar 12 13:47:04.468198 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:04.468174 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78ef7sx7"
Mar 12 13:47:04.541853 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:04.541790 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gddkh\" (UniqueName: \"kubernetes.io/projected/50779e1b-ab5c-4a2e-8141-99af2d9296c8-kube-api-access-gddkh\") pod \"50779e1b-ab5c-4a2e-8141-99af2d9296c8\" (UID: \"50779e1b-ab5c-4a2e-8141-99af2d9296c8\") "
Mar 12 13:47:04.541853 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:04.541848 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50779e1b-ab5c-4a2e-8141-99af2d9296c8-bundle\") pod \"50779e1b-ab5c-4a2e-8141-99af2d9296c8\" (UID: \"50779e1b-ab5c-4a2e-8141-99af2d9296c8\") "
Mar 12 13:47:04.542115 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:04.541911 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50779e1b-ab5c-4a2e-8141-99af2d9296c8-util\") pod \"50779e1b-ab5c-4a2e-8141-99af2d9296c8\" (UID: \"50779e1b-ab5c-4a2e-8141-99af2d9296c8\") "
Mar 12 13:47:04.542288 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:04.542250 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50779e1b-ab5c-4a2e-8141-99af2d9296c8-bundle" (OuterVolumeSpecName: "bundle") pod "50779e1b-ab5c-4a2e-8141-99af2d9296c8" (UID: "50779e1b-ab5c-4a2e-8141-99af2d9296c8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Mar 12 13:47:04.544167 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:04.544131 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50779e1b-ab5c-4a2e-8141-99af2d9296c8-kube-api-access-gddkh" (OuterVolumeSpecName: "kube-api-access-gddkh") pod "50779e1b-ab5c-4a2e-8141-99af2d9296c8" (UID: "50779e1b-ab5c-4a2e-8141-99af2d9296c8"). InnerVolumeSpecName "kube-api-access-gddkh". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 12 13:47:04.549561 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:04.549523 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50779e1b-ab5c-4a2e-8141-99af2d9296c8-util" (OuterVolumeSpecName: "util") pod "50779e1b-ab5c-4a2e-8141-99af2d9296c8" (UID: "50779e1b-ab5c-4a2e-8141-99af2d9296c8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Mar 12 13:47:04.643399 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:04.643290 2570 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50779e1b-ab5c-4a2e-8141-99af2d9296c8-util\") on node \"ip-10-0-142-111.ec2.internal\" DevicePath \"\""
Mar 12 13:47:04.643399 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:04.643336 2570 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gddkh\" (UniqueName: \"kubernetes.io/projected/50779e1b-ab5c-4a2e-8141-99af2d9296c8-kube-api-access-gddkh\") on node \"ip-10-0-142-111.ec2.internal\" DevicePath \"\""
Mar 12 13:47:04.643399 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:04.643350 2570 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50779e1b-ab5c-4a2e-8141-99af2d9296c8-bundle\") on node \"ip-10-0-142-111.ec2.internal\" DevicePath \"\""
Mar 12 13:47:05.347012 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:05.346973 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78ef7sx7" event={"ID":"50779e1b-ab5c-4a2e-8141-99af2d9296c8","Type":"ContainerDied","Data":"f0f86cae4f289854155ce8840b535b9069020ab6f1529524adc93fb2b076aaf6"}
Mar 12 13:47:05.347012 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:05.347008 2570 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0f86cae4f289854155ce8840b535b9069020ab6f1529524adc93fb2b076aaf6"
Mar 12 13:47:05.347272 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:05.347041 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78ef7sx7"
Mar 12 13:47:11.299031 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:11.298994 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-jobset-operator/jobset-operator-747c5859c7-m6glh"]
Mar 12 13:47:11.299444 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:11.299268 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="50779e1b-ab5c-4a2e-8141-99af2d9296c8" containerName="extract"
Mar 12 13:47:11.299444 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:11.299278 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="50779e1b-ab5c-4a2e-8141-99af2d9296c8" containerName="extract"
Mar 12 13:47:11.299444 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:11.299288 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="50779e1b-ab5c-4a2e-8141-99af2d9296c8" containerName="util"
Mar 12 13:47:11.299444 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:11.299293 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="50779e1b-ab5c-4a2e-8141-99af2d9296c8" containerName="util"
Mar 12 13:47:11.299444 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:11.299300 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="50779e1b-ab5c-4a2e-8141-99af2d9296c8" containerName="pull"
Mar 12 13:47:11.299444 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:11.299305 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="50779e1b-ab5c-4a2e-8141-99af2d9296c8" containerName="pull"
Mar 12 13:47:11.299444 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:11.299348 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="50779e1b-ab5c-4a2e-8141-99af2d9296c8" containerName="extract"
Mar 12 13:47:11.303296 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:11.303276 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-jobset-operator/jobset-operator-747c5859c7-m6glh"
Mar 12 13:47:11.305724 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:11.305696 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-jobset-operator\"/\"openshift-service-ca.crt\""
Mar 12 13:47:11.305882 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:11.305758 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-jobset-operator\"/\"kube-root-ca.crt\""
Mar 12 13:47:11.306608 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:11.306585 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-jobset-operator\"/\"jobset-operator-dockercfg-dsdgf\""
Mar 12 13:47:11.315702 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:11.315671 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-jobset-operator/jobset-operator-747c5859c7-m6glh"]
Mar 12 13:47:11.397080 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:11.397045 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/68b38009-a556-4452-a6df-2a959af71057-tmp\") pod \"jobset-operator-747c5859c7-m6glh\" (UID: \"68b38009-a556-4452-a6df-2a959af71057\") " pod="openshift-jobset-operator/jobset-operator-747c5859c7-m6glh"
Mar 12 13:47:11.397279 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:11.397099 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd2xd\" (UniqueName: \"kubernetes.io/projected/68b38009-a556-4452-a6df-2a959af71057-kube-api-access-vd2xd\") pod \"jobset-operator-747c5859c7-m6glh\" (UID: \"68b38009-a556-4452-a6df-2a959af71057\") " pod="openshift-jobset-operator/jobset-operator-747c5859c7-m6glh"
Mar 12 13:47:11.498119 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:11.498073 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vd2xd\" (UniqueName: \"kubernetes.io/projected/68b38009-a556-4452-a6df-2a959af71057-kube-api-access-vd2xd\") pod \"jobset-operator-747c5859c7-m6glh\" (UID: \"68b38009-a556-4452-a6df-2a959af71057\") " pod="openshift-jobset-operator/jobset-operator-747c5859c7-m6glh"
Mar 12 13:47:11.498302 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:11.498160 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/68b38009-a556-4452-a6df-2a959af71057-tmp\") pod \"jobset-operator-747c5859c7-m6glh\" (UID: \"68b38009-a556-4452-a6df-2a959af71057\") " pod="openshift-jobset-operator/jobset-operator-747c5859c7-m6glh"
Mar 12 13:47:11.498576 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:11.498557 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/68b38009-a556-4452-a6df-2a959af71057-tmp\") pod \"jobset-operator-747c5859c7-m6glh\" (UID: \"68b38009-a556-4452-a6df-2a959af71057\") " pod="openshift-jobset-operator/jobset-operator-747c5859c7-m6glh"
Mar 12 13:47:11.507268 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:11.507226 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd2xd\" (UniqueName: \"kubernetes.io/projected/68b38009-a556-4452-a6df-2a959af71057-kube-api-access-vd2xd\") pod \"jobset-operator-747c5859c7-m6glh\" (UID: \"68b38009-a556-4452-a6df-2a959af71057\") " pod="openshift-jobset-operator/jobset-operator-747c5859c7-m6glh"
Mar 12 13:47:11.612824 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:11.612720 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-jobset-operator/jobset-operator-747c5859c7-m6glh"
Mar 12 13:47:11.754753 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:11.754728 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-jobset-operator/jobset-operator-747c5859c7-m6glh"]
Mar 12 13:47:11.757114 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:47:11.757087 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68b38009_a556_4452_a6df_2a959af71057.slice/crio-43aa042d7d6a3a0459307dbcd4a127d24b94f7d455e0b14fefb6a1fd8054ce59 WatchSource:0}: Error finding container 43aa042d7d6a3a0459307dbcd4a127d24b94f7d455e0b14fefb6a1fd8054ce59: Status 404 returned error can't find the container with id 43aa042d7d6a3a0459307dbcd4a127d24b94f7d455e0b14fefb6a1fd8054ce59
Mar 12 13:47:12.373166 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:12.373131 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-jobset-operator/jobset-operator-747c5859c7-m6glh" event={"ID":"68b38009-a556-4452-a6df-2a959af71057","Type":"ContainerStarted","Data":"43aa042d7d6a3a0459307dbcd4a127d24b94f7d455e0b14fefb6a1fd8054ce59"}
Mar 12 13:47:14.382819 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:14.382774 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-jobset-operator/jobset-operator-747c5859c7-m6glh" event={"ID":"68b38009-a556-4452-a6df-2a959af71057","Type":"ContainerStarted","Data":"c013bb4fa2a7704bdbd6ed75a42c24ec2379d169945bb2d12e44fa3c444f3257"}
Mar 12 13:47:14.401152 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:14.401098 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-jobset-operator/jobset-operator-747c5859c7-m6glh" podStartSLOduration=1.260907674 podStartE2EDuration="3.401079943s" podCreationTimestamp="2026-03-12 13:47:11 +0000 UTC" firstStartedPulling="2026-03-12 13:47:11.758614764 +0000 UTC m=+512.534078384" lastFinishedPulling="2026-03-12 13:47:13.898787031 +0000 UTC m=+514.674250653" observedRunningTime="2026-03-12 13:47:14.400298446 +0000 UTC m=+515.175762093" watchObservedRunningTime="2026-03-12 13:47:14.401079943 +0000 UTC m=+515.176543585"
Mar 12 13:47:30.546956 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:30.546866 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-jobset-operator/jobset-controller-manager-84bcf99f68-sf9ff"]
Mar 12 13:47:30.550165 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:30.550145 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-jobset-operator/jobset-controller-manager-84bcf99f68-sf9ff"
Mar 12 13:47:30.553247 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:30.553223 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-jobset-operator\"/\"webhook-server-cert\""
Mar 12 13:47:30.553802 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:30.553783 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-jobset-operator\"/\"jobset-controller-manager-dockercfg-hv4cv\""
Mar 12 13:47:30.554557 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:30.554536 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-jobset-operator\"/\"metrics-server-cert\""
Mar 12 13:47:30.567151 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:30.567125 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-jobset-operator\"/\"jobset-manager-config\""
Mar 12 13:47:30.571481 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:30.571453 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-jobset-operator/jobset-controller-manager-84bcf99f68-sf9ff"]
Mar 12 13:47:30.640141 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:30.640102 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/4477d716-bfdf-4e28-ba49-5ed0459e9647-manager-config\") pod \"jobset-controller-manager-84bcf99f68-sf9ff\" (UID: \"4477d716-bfdf-4e28-ba49-5ed0459e9647\") " pod="openshift-jobset-operator/jobset-controller-manager-84bcf99f68-sf9ff"
Mar 12 13:47:30.640333 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:30.640151 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4477d716-bfdf-4e28-ba49-5ed0459e9647-cert\") pod \"jobset-controller-manager-84bcf99f68-sf9ff\" (UID: \"4477d716-bfdf-4e28-ba49-5ed0459e9647\") " pod="openshift-jobset-operator/jobset-controller-manager-84bcf99f68-sf9ff"
Mar 12 13:47:30.640333 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:30.640218 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t624n\" (UniqueName: \"kubernetes.io/projected/4477d716-bfdf-4e28-ba49-5ed0459e9647-kube-api-access-t624n\") pod \"jobset-controller-manager-84bcf99f68-sf9ff\" (UID: \"4477d716-bfdf-4e28-ba49-5ed0459e9647\") " pod="openshift-jobset-operator/jobset-controller-manager-84bcf99f68-sf9ff"
Mar 12 13:47:30.640333 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:30.640266 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4477d716-bfdf-4e28-ba49-5ed0459e9647-metrics-certs\") pod \"jobset-controller-manager-84bcf99f68-sf9ff\" (UID: \"4477d716-bfdf-4e28-ba49-5ed0459e9647\") " pod="openshift-jobset-operator/jobset-controller-manager-84bcf99f68-sf9ff"
Mar 12 13:47:30.740869 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:30.740821 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/4477d716-bfdf-4e28-ba49-5ed0459e9647-manager-config\") pod \"jobset-controller-manager-84bcf99f68-sf9ff\" (UID: \"4477d716-bfdf-4e28-ba49-5ed0459e9647\") " pod="openshift-jobset-operator/jobset-controller-manager-84bcf99f68-sf9ff"
Mar 12 13:47:30.740869 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:30.740864 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4477d716-bfdf-4e28-ba49-5ed0459e9647-cert\") pod \"jobset-controller-manager-84bcf99f68-sf9ff\" (UID: \"4477d716-bfdf-4e28-ba49-5ed0459e9647\") " pod="openshift-jobset-operator/jobset-controller-manager-84bcf99f68-sf9ff"
Mar 12 13:47:30.741078 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:30.740895 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t624n\" (UniqueName: \"kubernetes.io/projected/4477d716-bfdf-4e28-ba49-5ed0459e9647-kube-api-access-t624n\") pod \"jobset-controller-manager-84bcf99f68-sf9ff\" (UID: \"4477d716-bfdf-4e28-ba49-5ed0459e9647\") " pod="openshift-jobset-operator/jobset-controller-manager-84bcf99f68-sf9ff"
Mar 12 13:47:30.741078 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:30.740946 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4477d716-bfdf-4e28-ba49-5ed0459e9647-metrics-certs\") pod \"jobset-controller-manager-84bcf99f68-sf9ff\" (UID: \"4477d716-bfdf-4e28-ba49-5ed0459e9647\") " pod="openshift-jobset-operator/jobset-controller-manager-84bcf99f68-sf9ff"
Mar 12 13:47:30.741611 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:30.741589 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/4477d716-bfdf-4e28-ba49-5ed0459e9647-manager-config\") pod \"jobset-controller-manager-84bcf99f68-sf9ff\" (UID: \"4477d716-bfdf-4e28-ba49-5ed0459e9647\") " pod="openshift-jobset-operator/jobset-controller-manager-84bcf99f68-sf9ff"
Mar 12 13:47:30.743986 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:30.743958 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4477d716-bfdf-4e28-ba49-5ed0459e9647-cert\") pod \"jobset-controller-manager-84bcf99f68-sf9ff\" (UID: \"4477d716-bfdf-4e28-ba49-5ed0459e9647\") " pod="openshift-jobset-operator/jobset-controller-manager-84bcf99f68-sf9ff"
Mar 12 13:47:30.744114 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:30.744068 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4477d716-bfdf-4e28-ba49-5ed0459e9647-metrics-certs\") pod \"jobset-controller-manager-84bcf99f68-sf9ff\" (UID: \"4477d716-bfdf-4e28-ba49-5ed0459e9647\") " pod="openshift-jobset-operator/jobset-controller-manager-84bcf99f68-sf9ff"
Mar 12 13:47:30.753328 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:30.753291 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t624n\" (UniqueName: \"kubernetes.io/projected/4477d716-bfdf-4e28-ba49-5ed0459e9647-kube-api-access-t624n\") pod \"jobset-controller-manager-84bcf99f68-sf9ff\" (UID: \"4477d716-bfdf-4e28-ba49-5ed0459e9647\") " pod="openshift-jobset-operator/jobset-controller-manager-84bcf99f68-sf9ff"
Mar 12 13:47:30.860337 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:30.860237 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-jobset-operator/jobset-controller-manager-84bcf99f68-sf9ff" Mar 12 13:47:30.999686 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:30.999651 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-jobset-operator/jobset-controller-manager-84bcf99f68-sf9ff"] Mar 12 13:47:31.002829 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:47:31.002803 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4477d716_bfdf_4e28_ba49_5ed0459e9647.slice/crio-b1ad4b913404309e098b85f55d4cfda704e0ecb1d58d9fe6417080f6894f60ad WatchSource:0}: Error finding container b1ad4b913404309e098b85f55d4cfda704e0ecb1d58d9fe6417080f6894f60ad: Status 404 returned error can't find the container with id b1ad4b913404309e098b85f55d4cfda704e0ecb1d58d9fe6417080f6894f60ad Mar 12 13:47:31.437571 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:31.437530 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-jobset-operator/jobset-controller-manager-84bcf99f68-sf9ff" event={"ID":"4477d716-bfdf-4e28-ba49-5ed0459e9647","Type":"ContainerStarted","Data":"b1ad4b913404309e098b85f55d4cfda704e0ecb1d58d9fe6417080f6894f60ad"} Mar 12 13:47:34.450033 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:34.449996 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-jobset-operator/jobset-controller-manager-84bcf99f68-sf9ff" event={"ID":"4477d716-bfdf-4e28-ba49-5ed0459e9647","Type":"ContainerStarted","Data":"ca95b1b3e74f4abde2de1604b3eb07c90da076f81594dba5ba2fcffad484b1d9"} Mar 12 13:47:34.450464 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:34.450134 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-jobset-operator/jobset-controller-manager-84bcf99f68-sf9ff" Mar 12 13:47:34.469279 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:34.469215 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-jobset-operator/jobset-controller-manager-84bcf99f68-sf9ff" podStartSLOduration=1.514396769 podStartE2EDuration="4.469194335s" podCreationTimestamp="2026-03-12 13:47:30 +0000 UTC" firstStartedPulling="2026-03-12 13:47:31.004540282 +0000 UTC m=+531.780003906" lastFinishedPulling="2026-03-12 13:47:33.95933784 +0000 UTC m=+534.734801472" observedRunningTime="2026-03-12 13:47:34.467177493 +0000 UTC m=+535.242641135" watchObservedRunningTime="2026-03-12 13:47:34.469194335 +0000 UTC m=+535.244657979" Mar 12 13:47:45.458065 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:47:45.458037 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-jobset-operator/jobset-controller-manager-84bcf99f68-sf9ff" Mar 12 13:48:39.751661 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:48:39.751633 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal_0e4e8f3d30bf75c22161da0d94e78eb7/kube-rbac-proxy-crio/2.log" Mar 12 13:48:39.752798 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:48:39.752782 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal_0e4e8f3d30bf75c22161da0d94e78eb7/kube-rbac-proxy-crio/2.log" Mar 12 13:53:39.772034 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:53:39.772003 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal_0e4e8f3d30bf75c22161da0d94e78eb7/kube-rbac-proxy-crio/2.log" Mar 12 13:53:39.773936 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:53:39.773916 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal_0e4e8f3d30bf75c22161da0d94e78eb7/kube-rbac-proxy-crio/2.log" Mar 12 13:57:34.954110 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:57:34.954079 2570 
kubelet.go:2537] "SyncLoop ADD" source="api" pods=["rhai-e2e-progression-nlrws/progression-job-failure-node-0-0-xxjzw"] Mar 12 13:57:34.957383 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:57:34.957368 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="rhai-e2e-progression-nlrws/progression-job-failure-node-0-0-xxjzw" Mar 12 13:57:34.981240 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:57:34.981220 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"rhai-e2e-progression-nlrws\"/\"kube-root-ca.crt\"" Mar 12 13:57:34.981813 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:57:34.981799 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"rhai-e2e-progression-nlrws\"/\"openshift-service-ca.crt\"" Mar 12 13:57:34.986147 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:57:34.986134 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"rhai-e2e-progression-nlrws\"/\"default-dockercfg-bn65w\"" Mar 12 13:57:35.025668 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:57:35.025640 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["rhai-e2e-progression-nlrws/progression-job-failure-node-0-0-xxjzw"] Mar 12 13:57:35.149995 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:57:35.149970 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fqn2\" (UniqueName: \"kubernetes.io/projected/0857d23c-97a4-447e-b886-873dadab2e11-kube-api-access-9fqn2\") pod \"progression-job-failure-node-0-0-xxjzw\" (UID: \"0857d23c-97a4-447e-b886-873dadab2e11\") " pod="rhai-e2e-progression-nlrws/progression-job-failure-node-0-0-xxjzw" Mar 12 13:57:35.250907 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:57:35.250839 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9fqn2\" (UniqueName: \"kubernetes.io/projected/0857d23c-97a4-447e-b886-873dadab2e11-kube-api-access-9fqn2\") pod 
\"progression-job-failure-node-0-0-xxjzw\" (UID: \"0857d23c-97a4-447e-b886-873dadab2e11\") " pod="rhai-e2e-progression-nlrws/progression-job-failure-node-0-0-xxjzw" Mar 12 13:57:35.258587 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:57:35.258563 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fqn2\" (UniqueName: \"kubernetes.io/projected/0857d23c-97a4-447e-b886-873dadab2e11-kube-api-access-9fqn2\") pod \"progression-job-failure-node-0-0-xxjzw\" (UID: \"0857d23c-97a4-447e-b886-873dadab2e11\") " pod="rhai-e2e-progression-nlrws/progression-job-failure-node-0-0-xxjzw" Mar 12 13:57:35.266396 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:57:35.266375 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="rhai-e2e-progression-nlrws/progression-job-failure-node-0-0-xxjzw" Mar 12 13:57:35.389292 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:57:35.389265 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["rhai-e2e-progression-nlrws/progression-job-failure-node-0-0-xxjzw"] Mar 12 13:57:35.392471 ip-10-0-142-111 kubenswrapper[2570]: W0312 13:57:35.392441 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0857d23c_97a4_447e_b886_873dadab2e11.slice/crio-d0fca0508b4e3bb5839c15b1a699f29a84fde51352d4caed7573e0a769d571b3 WatchSource:0}: Error finding container d0fca0508b4e3bb5839c15b1a699f29a84fde51352d4caed7573e0a769d571b3: Status 404 returned error can't find the container with id d0fca0508b4e3bb5839c15b1a699f29a84fde51352d4caed7573e0a769d571b3 Mar 12 13:57:35.394354 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:57:35.394339 2570 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 13:57:36.395514 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:57:36.395469 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="rhai-e2e-progression-nlrws/progression-job-failure-node-0-0-xxjzw" event={"ID":"0857d23c-97a4-447e-b886-873dadab2e11","Type":"ContainerStarted","Data":"d0fca0508b4e3bb5839c15b1a699f29a84fde51352d4caed7573e0a769d571b3"} Mar 12 13:59:15.963952 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:59:15.963923 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal_0e4e8f3d30bf75c22161da0d94e78eb7/kube-rbac-proxy-crio/2.log" Mar 12 13:59:15.964433 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:59:15.963979 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal_0e4e8f3d30bf75c22161da0d94e78eb7/kube-rbac-proxy-crio/2.log" Mar 12 13:59:16.732844 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:59:16.732814 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="rhai-e2e-progression-nlrws/progression-job-failure-node-0-0-xxjzw" event={"ID":"0857d23c-97a4-447e-b886-873dadab2e11","Type":"ContainerStarted","Data":"0451862902a6e0679c1e4c40ceec7fef1b4d673feaec06206483ffe716877516"} Mar 12 13:59:16.732956 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:59:16.732938 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="rhai-e2e-progression-nlrws/progression-job-failure-node-0-0-xxjzw" Mar 12 13:59:16.759464 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:59:16.759425 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="rhai-e2e-progression-nlrws/progression-job-failure-node-0-0-xxjzw" podStartSLOduration=1.598089922 podStartE2EDuration="1m42.759412529s" podCreationTimestamp="2026-03-12 13:57:34 +0000 UTC" firstStartedPulling="2026-03-12 13:57:35.39447049 +0000 UTC m=+1136.169934125" lastFinishedPulling="2026-03-12 13:59:16.555793103 +0000 UTC m=+1237.331256732" observedRunningTime="2026-03-12 13:59:16.758539381 +0000 UTC m=+1237.534003042" 
watchObservedRunningTime="2026-03-12 13:59:16.759412529 +0000 UTC m=+1237.534876171" Mar 12 13:59:18.739602 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:59:18.739569 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="rhai-e2e-progression-nlrws/progression-job-failure-node-0-0-xxjzw" Mar 12 13:59:21.737462 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:59:21.737413 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="rhai-e2e-progression-nlrws/progression-job-failure-node-0-0-xxjzw" podUID="0857d23c-97a4-447e-b886-873dadab2e11" containerName="node" probeResult="failure" output="Get \"http://10.134.0.18:28080/metrics\": dial tcp 10.134.0.18:28080: connect: connection refused" Mar 12 13:59:21.749361 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:59:21.749340 2570 generic.go:358] "Generic (PLEG): container finished" podID="0857d23c-97a4-447e-b886-873dadab2e11" containerID="0451862902a6e0679c1e4c40ceec7fef1b4d673feaec06206483ffe716877516" exitCode=1 Mar 12 13:59:21.749476 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:59:21.749408 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="rhai-e2e-progression-nlrws/progression-job-failure-node-0-0-xxjzw" event={"ID":"0857d23c-97a4-447e-b886-873dadab2e11","Type":"ContainerDied","Data":"0451862902a6e0679c1e4c40ceec7fef1b4d673feaec06206483ffe716877516"} Mar 12 13:59:22.873243 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:59:22.873222 2570 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="rhai-e2e-progression-nlrws/progression-job-failure-node-0-0-xxjzw" Mar 12 13:59:22.954312 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:59:22.954276 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fqn2\" (UniqueName: \"kubernetes.io/projected/0857d23c-97a4-447e-b886-873dadab2e11-kube-api-access-9fqn2\") pod \"0857d23c-97a4-447e-b886-873dadab2e11\" (UID: \"0857d23c-97a4-447e-b886-873dadab2e11\") " Mar 12 13:59:22.956426 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:59:22.956398 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0857d23c-97a4-447e-b886-873dadab2e11-kube-api-access-9fqn2" (OuterVolumeSpecName: "kube-api-access-9fqn2") pod "0857d23c-97a4-447e-b886-873dadab2e11" (UID: "0857d23c-97a4-447e-b886-873dadab2e11"). InnerVolumeSpecName "kube-api-access-9fqn2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 12 13:59:23.055358 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:59:23.055325 2570 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9fqn2\" (UniqueName: \"kubernetes.io/projected/0857d23c-97a4-447e-b886-873dadab2e11-kube-api-access-9fqn2\") on node \"ip-10-0-142-111.ec2.internal\" DevicePath \"\"" Mar 12 13:59:23.757225 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:59:23.757188 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="rhai-e2e-progression-nlrws/progression-job-failure-node-0-0-xxjzw" event={"ID":"0857d23c-97a4-447e-b886-873dadab2e11","Type":"ContainerDied","Data":"d0fca0508b4e3bb5839c15b1a699f29a84fde51352d4caed7573e0a769d571b3"} Mar 12 13:59:23.757225 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:59:23.757221 2570 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0fca0508b4e3bb5839c15b1a699f29a84fde51352d4caed7573e0a769d571b3" Mar 12 13:59:23.757394 ip-10-0-142-111 kubenswrapper[2570]: I0312 13:59:23.757246 2570 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="rhai-e2e-progression-nlrws/progression-job-failure-node-0-0-xxjzw" Mar 12 14:04:15.989528 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:04:15.989443 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal_0e4e8f3d30bf75c22161da0d94e78eb7/kube-rbac-proxy-crio/2.log" Mar 12 14:04:15.991424 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:04:15.991402 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal_0e4e8f3d30bf75c22161da0d94e78eb7/kube-rbac-proxy-crio/2.log" Mar 12 14:09:16.018918 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:16.018893 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal_0e4e8f3d30bf75c22161da0d94e78eb7/kube-rbac-proxy-crio/2.log" Mar 12 14:09:16.021122 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:16.021104 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-111.ec2.internal_0e4e8f3d30bf75c22161da0d94e78eb7/kube-rbac-proxy-crio/2.log" Mar 12 14:09:41.294085 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:41.294050 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["rhai-e2e-progression-nlrws/progression-custom-prestop-node-0-0-kp6wd"] Mar 12 14:09:41.294488 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:41.294330 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0857d23c-97a4-447e-b886-873dadab2e11" containerName="node" Mar 12 14:09:41.294488 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:41.294343 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="0857d23c-97a4-447e-b886-873dadab2e11" containerName="node" Mar 12 14:09:41.294488 ip-10-0-142-111 kubenswrapper[2570]: I0312 
14:09:41.294386 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="0857d23c-97a4-447e-b886-873dadab2e11" containerName="node" Mar 12 14:09:41.297255 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:41.297237 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="rhai-e2e-progression-nlrws/progression-custom-prestop-node-0-0-kp6wd" Mar 12 14:09:41.299212 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:41.299185 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"rhai-e2e-progression-nlrws\"/\"openshift-service-ca.crt\"" Mar 12 14:09:41.299212 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:41.299201 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"rhai-e2e-progression-nlrws\"/\"kube-root-ca.crt\"" Mar 12 14:09:41.299362 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:41.299297 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"rhai-e2e-progression-nlrws\"/\"default-dockercfg-bn65w\"" Mar 12 14:09:41.307033 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:41.307009 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["rhai-e2e-progression-nlrws/progression-custom-prestop-node-0-0-kp6wd"] Mar 12 14:09:41.421511 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:41.421484 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dwp5\" (UniqueName: \"kubernetes.io/projected/01f84abc-5c43-48c5-aeb1-de0fa757cdee-kube-api-access-2dwp5\") pod \"progression-custom-prestop-node-0-0-kp6wd\" (UID: \"01f84abc-5c43-48c5-aeb1-de0fa757cdee\") " pod="rhai-e2e-progression-nlrws/progression-custom-prestop-node-0-0-kp6wd" Mar 12 14:09:41.522146 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:41.522115 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2dwp5\" (UniqueName: 
\"kubernetes.io/projected/01f84abc-5c43-48c5-aeb1-de0fa757cdee-kube-api-access-2dwp5\") pod \"progression-custom-prestop-node-0-0-kp6wd\" (UID: \"01f84abc-5c43-48c5-aeb1-de0fa757cdee\") " pod="rhai-e2e-progression-nlrws/progression-custom-prestop-node-0-0-kp6wd" Mar 12 14:09:41.534060 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:41.534038 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dwp5\" (UniqueName: \"kubernetes.io/projected/01f84abc-5c43-48c5-aeb1-de0fa757cdee-kube-api-access-2dwp5\") pod \"progression-custom-prestop-node-0-0-kp6wd\" (UID: \"01f84abc-5c43-48c5-aeb1-de0fa757cdee\") " pod="rhai-e2e-progression-nlrws/progression-custom-prestop-node-0-0-kp6wd" Mar 12 14:09:41.606933 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:41.606908 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="rhai-e2e-progression-nlrws/progression-custom-prestop-node-0-0-kp6wd" Mar 12 14:09:41.726246 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:41.726219 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["rhai-e2e-progression-nlrws/progression-custom-prestop-node-0-0-kp6wd"] Mar 12 14:09:41.729309 ip-10-0-142-111 kubenswrapper[2570]: W0312 14:09:41.729279 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01f84abc_5c43_48c5_aeb1_de0fa757cdee.slice/crio-a49dd7c556abc6adba32efcbe2a64e134c3b7875a44b534f84d76e3bb574e97f WatchSource:0}: Error finding container a49dd7c556abc6adba32efcbe2a64e134c3b7875a44b534f84d76e3bb574e97f: Status 404 returned error can't find the container with id a49dd7c556abc6adba32efcbe2a64e134c3b7875a44b534f84d76e3bb574e97f Mar 12 14:09:41.731141 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:41.731125 2570 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 14:09:41.842827 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:41.842798 2570 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="rhai-e2e-progression-nlrws/progression-custom-prestop-node-0-0-kp6wd" event={"ID":"01f84abc-5c43-48c5-aeb1-de0fa757cdee","Type":"ContainerStarted","Data":"e8a47e405dedce4529ff4b60babe2f78589d4f3d74b079a1b69e89e9ea0c5978"} Mar 12 14:09:41.842944 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:41.842834 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="rhai-e2e-progression-nlrws/progression-custom-prestop-node-0-0-kp6wd" event={"ID":"01f84abc-5c43-48c5-aeb1-de0fa757cdee","Type":"ContainerStarted","Data":"a49dd7c556abc6adba32efcbe2a64e134c3b7875a44b534f84d76e3bb574e97f"} Mar 12 14:09:41.843044 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:41.842964 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="rhai-e2e-progression-nlrws/progression-custom-prestop-node-0-0-kp6wd" Mar 12 14:09:41.859896 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:41.859827 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="rhai-e2e-progression-nlrws/progression-custom-prestop-node-0-0-kp6wd" podStartSLOduration=0.859812666 podStartE2EDuration="859.812666ms" podCreationTimestamp="2026-03-12 14:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:09:41.858509203 +0000 UTC m=+1862.633972857" watchObservedRunningTime="2026-03-12 14:09:41.859812666 +0000 UTC m=+1862.635276308" Mar 12 14:09:43.848354 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:43.848323 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="rhai-e2e-progression-nlrws/progression-custom-prestop-node-0-0-kp6wd" Mar 12 14:09:44.598124 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:44.598084 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wl6gd/must-gather-8p7zd"] Mar 12 14:09:44.601342 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:44.601323 2570 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wl6gd/must-gather-8p7zd" Mar 12 14:09:44.603347 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:44.603327 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-wl6gd\"/\"openshift-service-ca.crt\"" Mar 12 14:09:44.603450 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:44.603402 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-wl6gd\"/\"default-dockercfg-89hg9\"" Mar 12 14:09:44.603608 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:44.603592 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-wl6gd\"/\"kube-root-ca.crt\"" Mar 12 14:09:44.611784 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:44.611765 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wl6gd/must-gather-8p7zd"] Mar 12 14:09:44.642191 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:44.642168 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26dv9\" (UniqueName: \"kubernetes.io/projected/7f3dbcf5-eff6-459e-b48f-bec84949feb8-kube-api-access-26dv9\") pod \"must-gather-8p7zd\" (UID: \"7f3dbcf5-eff6-459e-b48f-bec84949feb8\") " pod="openshift-must-gather-wl6gd/must-gather-8p7zd" Mar 12 14:09:44.642289 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:44.642222 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7f3dbcf5-eff6-459e-b48f-bec84949feb8-must-gather-output\") pod \"must-gather-8p7zd\" (UID: \"7f3dbcf5-eff6-459e-b48f-bec84949feb8\") " pod="openshift-must-gather-wl6gd/must-gather-8p7zd" Mar 12 14:09:44.743187 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:44.743165 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7f3dbcf5-eff6-459e-b48f-bec84949feb8-must-gather-output\") pod \"must-gather-8p7zd\" (UID: \"7f3dbcf5-eff6-459e-b48f-bec84949feb8\") " pod="openshift-must-gather-wl6gd/must-gather-8p7zd" Mar 12 14:09:44.743273 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:44.743194 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-26dv9\" (UniqueName: \"kubernetes.io/projected/7f3dbcf5-eff6-459e-b48f-bec84949feb8-kube-api-access-26dv9\") pod \"must-gather-8p7zd\" (UID: \"7f3dbcf5-eff6-459e-b48f-bec84949feb8\") " pod="openshift-must-gather-wl6gd/must-gather-8p7zd" Mar 12 14:09:44.743468 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:44.743450 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7f3dbcf5-eff6-459e-b48f-bec84949feb8-must-gather-output\") pod \"must-gather-8p7zd\" (UID: \"7f3dbcf5-eff6-459e-b48f-bec84949feb8\") " pod="openshift-must-gather-wl6gd/must-gather-8p7zd" Mar 12 14:09:44.758802 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:44.758774 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-26dv9\" (UniqueName: \"kubernetes.io/projected/7f3dbcf5-eff6-459e-b48f-bec84949feb8-kube-api-access-26dv9\") pod \"must-gather-8p7zd\" (UID: \"7f3dbcf5-eff6-459e-b48f-bec84949feb8\") " pod="openshift-must-gather-wl6gd/must-gather-8p7zd" Mar 12 14:09:44.910853 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:44.910802 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wl6gd/must-gather-8p7zd" Mar 12 14:09:45.028407 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:45.028377 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wl6gd/must-gather-8p7zd"] Mar 12 14:09:45.031926 ip-10-0-142-111 kubenswrapper[2570]: W0312 14:09:45.031899 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f3dbcf5_eff6_459e_b48f_bec84949feb8.slice/crio-8bd485d53cc0c4e9fa210f54592de81e29be9ffa05b15ceae115dd9d66b590ae WatchSource:0}: Error finding container 8bd485d53cc0c4e9fa210f54592de81e29be9ffa05b15ceae115dd9d66b590ae: Status 404 returned error can't find the container with id 8bd485d53cc0c4e9fa210f54592de81e29be9ffa05b15ceae115dd9d66b590ae Mar 12 14:09:45.856320 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:45.856278 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wl6gd/must-gather-8p7zd" event={"ID":"7f3dbcf5-eff6-459e-b48f-bec84949feb8","Type":"ContainerStarted","Data":"8bd485d53cc0c4e9fa210f54592de81e29be9ffa05b15ceae115dd9d66b590ae"} Mar 12 14:09:48.455291 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:48.455260 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["rhai-e2e-progression-nlrws/progression-custom-prestop-node-0-0-kp6wd"] Mar 12 14:09:48.455954 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:48.455474 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="rhai-e2e-progression-nlrws/progression-custom-prestop-node-0-0-kp6wd" podUID="01f84abc-5c43-48c5-aeb1-de0fa757cdee" containerName="node" containerID="cri-o://e8a47e405dedce4529ff4b60babe2f78589d4f3d74b079a1b69e89e9ea0c5978" gracePeriod=30 Mar 12 14:09:48.487925 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:48.487900 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["rhai-e2e-progression-nlrws/progression-job-failure-node-0-0-xxjzw"] Mar 12 
14:09:48.489969 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:48.489943 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["rhai-e2e-progression-nlrws/progression-job-failure-node-0-0-xxjzw"] Mar 12 14:09:49.871202 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:49.871168 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0857d23c-97a4-447e-b886-873dadab2e11" path="/var/lib/kubelet/pods/0857d23c-97a4-447e-b886-873dadab2e11/volumes" Mar 12 14:09:50.876720 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:50.876634 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wl6gd/must-gather-8p7zd" event={"ID":"7f3dbcf5-eff6-459e-b48f-bec84949feb8","Type":"ContainerStarted","Data":"a47d34e070cd2864f2a8921c2c375fac9aef53f7427cad9f21c775fc7f6b694c"} Mar 12 14:09:50.876720 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:50.876673 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wl6gd/must-gather-8p7zd" event={"ID":"7f3dbcf5-eff6-459e-b48f-bec84949feb8","Type":"ContainerStarted","Data":"bf3e865d5fed7576b603052f8de6c25d511c56ef04422c8ab41f5d801eb73d3f"} Mar 12 14:09:50.897087 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:50.896868 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wl6gd/must-gather-8p7zd" podStartSLOduration=1.3498316510000001 podStartE2EDuration="6.896853881s" podCreationTimestamp="2026-03-12 14:09:44 +0000 UTC" firstStartedPulling="2026-03-12 14:09:45.033779932 +0000 UTC m=+1865.809243556" lastFinishedPulling="2026-03-12 14:09:50.580802149 +0000 UTC m=+1871.356265786" observedRunningTime="2026-03-12 14:09:50.894366937 +0000 UTC m=+1871.669830591" watchObservedRunningTime="2026-03-12 14:09:50.896853881 +0000 UTC m=+1871.672317523" Mar 12 14:09:57.847228 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:57.847192 2570 prober.go:120] "Probe failed" probeType="Readiness" 
pod="rhai-e2e-progression-nlrws/progression-custom-prestop-node-0-0-kp6wd" podUID="01f84abc-5c43-48c5-aeb1-de0fa757cdee" containerName="node" probeResult="failure" output="Get \"http://10.134.0.19:28080/metrics\": dial tcp 10.134.0.19:28080: connect: connection refused" Mar 12 14:09:57.903650 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:57.903598 2570 generic.go:358] "Generic (PLEG): container finished" podID="01f84abc-5c43-48c5-aeb1-de0fa757cdee" containerID="e8a47e405dedce4529ff4b60babe2f78589d4f3d74b079a1b69e89e9ea0c5978" exitCode=0 Mar 12 14:09:57.903784 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:57.903682 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="rhai-e2e-progression-nlrws/progression-custom-prestop-node-0-0-kp6wd" event={"ID":"01f84abc-5c43-48c5-aeb1-de0fa757cdee","Type":"ContainerDied","Data":"e8a47e405dedce4529ff4b60babe2f78589d4f3d74b079a1b69e89e9ea0c5978"} Mar 12 14:09:58.003671 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:58.003646 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="rhai-e2e-progression-nlrws/progression-custom-prestop-node-0-0-kp6wd" Mar 12 14:09:58.142539 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:58.142470 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dwp5\" (UniqueName: \"kubernetes.io/projected/01f84abc-5c43-48c5-aeb1-de0fa757cdee-kube-api-access-2dwp5\") pod \"01f84abc-5c43-48c5-aeb1-de0fa757cdee\" (UID: \"01f84abc-5c43-48c5-aeb1-de0fa757cdee\") " Mar 12 14:09:58.144886 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:58.144851 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01f84abc-5c43-48c5-aeb1-de0fa757cdee-kube-api-access-2dwp5" (OuterVolumeSpecName: "kube-api-access-2dwp5") pod "01f84abc-5c43-48c5-aeb1-de0fa757cdee" (UID: "01f84abc-5c43-48c5-aeb1-de0fa757cdee"). InnerVolumeSpecName "kube-api-access-2dwp5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 12 14:09:58.243854 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:58.243820 2570 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2dwp5\" (UniqueName: \"kubernetes.io/projected/01f84abc-5c43-48c5-aeb1-de0fa757cdee-kube-api-access-2dwp5\") on node \"ip-10-0-142-111.ec2.internal\" DevicePath \"\"" Mar 12 14:09:58.909358 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:58.909324 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="rhai-e2e-progression-nlrws/progression-custom-prestop-node-0-0-kp6wd" event={"ID":"01f84abc-5c43-48c5-aeb1-de0fa757cdee","Type":"ContainerDied","Data":"a49dd7c556abc6adba32efcbe2a64e134c3b7875a44b534f84d76e3bb574e97f"} Mar 12 14:09:58.909358 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:58.909346 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="rhai-e2e-progression-nlrws/progression-custom-prestop-node-0-0-kp6wd" Mar 12 14:09:58.909881 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:58.909379 2570 scope.go:117] "RemoveContainer" containerID="e8a47e405dedce4529ff4b60babe2f78589d4f3d74b079a1b69e89e9ea0c5978" Mar 12 14:09:58.931897 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:58.931869 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["rhai-e2e-progression-nlrws/progression-custom-prestop-node-0-0-kp6wd"] Mar 12 14:09:58.935401 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:58.935377 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["rhai-e2e-progression-nlrws/progression-custom-prestop-node-0-0-kp6wd"] Mar 12 14:09:59.867131 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:09:59.867102 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01f84abc-5c43-48c5-aeb1-de0fa757cdee" path="/var/lib/kubelet/pods/01f84abc-5c43-48c5-aeb1-de0fa757cdee/volumes" Mar 12 14:10:37.042285 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:37.042223 2570 generic.go:358] "Generic (PLEG): 
container finished" podID="7f3dbcf5-eff6-459e-b48f-bec84949feb8" containerID="bf3e865d5fed7576b603052f8de6c25d511c56ef04422c8ab41f5d801eb73d3f" exitCode=0 Mar 12 14:10:37.042656 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:37.042302 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wl6gd/must-gather-8p7zd" event={"ID":"7f3dbcf5-eff6-459e-b48f-bec84949feb8","Type":"ContainerDied","Data":"bf3e865d5fed7576b603052f8de6c25d511c56ef04422c8ab41f5d801eb73d3f"} Mar 12 14:10:37.042656 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:37.042570 2570 scope.go:117] "RemoveContainer" containerID="bf3e865d5fed7576b603052f8de6c25d511c56ef04422c8ab41f5d801eb73d3f" Mar 12 14:10:37.803243 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:37.803213 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wl6gd_must-gather-8p7zd_7f3dbcf5-eff6-459e-b48f-bec84949feb8/gather/0.log" Mar 12 14:10:39.866933 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:39.866903 2570 scope.go:117] "RemoveContainer" containerID="0451862902a6e0679c1e4c40ceec7fef1b4d673feaec06206483ffe716877516" Mar 12 14:10:41.080257 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:41.080213 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_global-pull-secret-syncer-p7x55_41d27fe4-a974-4c33-830e-130ef3c09adb/global-pull-secret-syncer/0.log" Mar 12 14:10:41.174298 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:41.174270 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_konnectivity-agent-m8bsl_b2434af6-7e97-4039-9604-9310288bca08/konnectivity-agent/0.log" Mar 12 14:10:41.282584 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:41.282560 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_kube-apiserver-proxy-ip-10-0-142-111.ec2.internal_56ea2cd715dd568d3d7e0aab566769bf/haproxy/0.log" Mar 12 14:10:43.142428 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:43.142388 2570 kubelet.go:2553] 
"SyncLoop DELETE" source="api" pods=["openshift-must-gather-wl6gd/must-gather-8p7zd"] Mar 12 14:10:43.142996 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:43.142611 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-must-gather-wl6gd/must-gather-8p7zd" podUID="7f3dbcf5-eff6-459e-b48f-bec84949feb8" containerName="copy" containerID="cri-o://a47d34e070cd2864f2a8921c2c375fac9aef53f7427cad9f21c775fc7f6b694c" gracePeriod=2 Mar 12 14:10:43.148102 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:43.148002 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wl6gd/must-gather-8p7zd"] Mar 12 14:10:43.383466 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:43.383446 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wl6gd_must-gather-8p7zd_7f3dbcf5-eff6-459e-b48f-bec84949feb8/copy/0.log" Mar 12 14:10:43.383847 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:43.383832 2570 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wl6gd/must-gather-8p7zd" Mar 12 14:10:43.385248 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:43.385223 2570 status_manager.go:895] "Failed to get status for pod" podUID="7f3dbcf5-eff6-459e-b48f-bec84949feb8" pod="openshift-must-gather-wl6gd/must-gather-8p7zd" err="pods \"must-gather-8p7zd\" is forbidden: User \"system:node:ip-10-0-142-111.ec2.internal\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-wl6gd\": no relationship found between node 'ip-10-0-142-111.ec2.internal' and this object" Mar 12 14:10:43.398601 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:43.398556 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26dv9\" (UniqueName: \"kubernetes.io/projected/7f3dbcf5-eff6-459e-b48f-bec84949feb8-kube-api-access-26dv9\") pod \"7f3dbcf5-eff6-459e-b48f-bec84949feb8\" (UID: \"7f3dbcf5-eff6-459e-b48f-bec84949feb8\") " Mar 12 14:10:43.398710 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:43.398602 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7f3dbcf5-eff6-459e-b48f-bec84949feb8-must-gather-output\") pod \"7f3dbcf5-eff6-459e-b48f-bec84949feb8\" (UID: \"7f3dbcf5-eff6-459e-b48f-bec84949feb8\") " Mar 12 14:10:43.399707 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:43.399680 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f3dbcf5-eff6-459e-b48f-bec84949feb8-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "7f3dbcf5-eff6-459e-b48f-bec84949feb8" (UID: "7f3dbcf5-eff6-459e-b48f-bec84949feb8"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 12 14:10:43.400813 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:43.400795 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f3dbcf5-eff6-459e-b48f-bec84949feb8-kube-api-access-26dv9" (OuterVolumeSpecName: "kube-api-access-26dv9") pod "7f3dbcf5-eff6-459e-b48f-bec84949feb8" (UID: "7f3dbcf5-eff6-459e-b48f-bec84949feb8"). InnerVolumeSpecName "kube-api-access-26dv9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 12 14:10:43.499276 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:43.499252 2570 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26dv9\" (UniqueName: \"kubernetes.io/projected/7f3dbcf5-eff6-459e-b48f-bec84949feb8-kube-api-access-26dv9\") on node \"ip-10-0-142-111.ec2.internal\" DevicePath \"\"" Mar 12 14:10:43.499276 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:43.499273 2570 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7f3dbcf5-eff6-459e-b48f-bec84949feb8-must-gather-output\") on node \"ip-10-0-142-111.ec2.internal\" DevicePath \"\"" Mar 12 14:10:43.867799 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:43.867768 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f3dbcf5-eff6-459e-b48f-bec84949feb8" path="/var/lib/kubelet/pods/7f3dbcf5-eff6-459e-b48f-bec84949feb8/volumes" Mar 12 14:10:43.876905 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:43.876886 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_3f31c767-6746-4585-8144-952def904ca1/alertmanager/0.log" Mar 12 14:10:43.892571 ip-10-0-142-111 kubenswrapper[2570]: E0312 14:10:43.892549 2570 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f3dbcf5_eff6_459e_b48f_bec84949feb8.slice/crio-8bd485d53cc0c4e9fa210f54592de81e29be9ffa05b15ceae115dd9d66b590ae\": RecentStats: unable to find data in memory cache]" Mar 12 14:10:43.913799 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:43.913777 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_3f31c767-6746-4585-8144-952def904ca1/config-reloader/0.log" Mar 12 14:10:43.965591 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:43.965570 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_3f31c767-6746-4585-8144-952def904ca1/kube-rbac-proxy-web/0.log" Mar 12 14:10:44.006335 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:44.006299 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_3f31c767-6746-4585-8144-952def904ca1/kube-rbac-proxy/0.log" Mar 12 14:10:44.053651 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:44.053631 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_3f31c767-6746-4585-8144-952def904ca1/kube-rbac-proxy-metric/0.log" Mar 12 14:10:44.065880 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:44.065865 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wl6gd_must-gather-8p7zd_7f3dbcf5-eff6-459e-b48f-bec84949feb8/copy/0.log" Mar 12 14:10:44.066190 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:44.066170 2570 generic.go:358] "Generic (PLEG): container finished" podID="7f3dbcf5-eff6-459e-b48f-bec84949feb8" containerID="a47d34e070cd2864f2a8921c2c375fac9aef53f7427cad9f21c775fc7f6b694c" exitCode=143 Mar 12 14:10:44.066258 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:44.066217 2570 scope.go:117] "RemoveContainer" containerID="a47d34e070cd2864f2a8921c2c375fac9aef53f7427cad9f21c775fc7f6b694c" Mar 12 14:10:44.066297 ip-10-0-142-111 
kubenswrapper[2570]: I0312 14:10:44.066221 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wl6gd/must-gather-8p7zd" Mar 12 14:10:44.073754 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:44.073738 2570 scope.go:117] "RemoveContainer" containerID="bf3e865d5fed7576b603052f8de6c25d511c56ef04422c8ab41f5d801eb73d3f" Mar 12 14:10:44.092404 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:44.092390 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_3f31c767-6746-4585-8144-952def904ca1/prom-label-proxy/0.log" Mar 12 14:10:44.113079 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:44.113063 2570 scope.go:117] "RemoveContainer" containerID="a47d34e070cd2864f2a8921c2c375fac9aef53f7427cad9f21c775fc7f6b694c" Mar 12 14:10:44.113344 ip-10-0-142-111 kubenswrapper[2570]: E0312 14:10:44.113315 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a47d34e070cd2864f2a8921c2c375fac9aef53f7427cad9f21c775fc7f6b694c\": container with ID starting with a47d34e070cd2864f2a8921c2c375fac9aef53f7427cad9f21c775fc7f6b694c not found: ID does not exist" containerID="a47d34e070cd2864f2a8921c2c375fac9aef53f7427cad9f21c775fc7f6b694c" Mar 12 14:10:44.113395 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:44.113344 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a47d34e070cd2864f2a8921c2c375fac9aef53f7427cad9f21c775fc7f6b694c"} err="failed to get container status \"a47d34e070cd2864f2a8921c2c375fac9aef53f7427cad9f21c775fc7f6b694c\": rpc error: code = NotFound desc = could not find container \"a47d34e070cd2864f2a8921c2c375fac9aef53f7427cad9f21c775fc7f6b694c\": container with ID starting with a47d34e070cd2864f2a8921c2c375fac9aef53f7427cad9f21c775fc7f6b694c not found: ID does not exist" Mar 12 14:10:44.113395 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:44.113376 
2570 scope.go:117] "RemoveContainer" containerID="bf3e865d5fed7576b603052f8de6c25d511c56ef04422c8ab41f5d801eb73d3f" Mar 12 14:10:44.113604 ip-10-0-142-111 kubenswrapper[2570]: E0312 14:10:44.113589 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf3e865d5fed7576b603052f8de6c25d511c56ef04422c8ab41f5d801eb73d3f\": container with ID starting with bf3e865d5fed7576b603052f8de6c25d511c56ef04422c8ab41f5d801eb73d3f not found: ID does not exist" containerID="bf3e865d5fed7576b603052f8de6c25d511c56ef04422c8ab41f5d801eb73d3f" Mar 12 14:10:44.113667 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:44.113609 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf3e865d5fed7576b603052f8de6c25d511c56ef04422c8ab41f5d801eb73d3f"} err="failed to get container status \"bf3e865d5fed7576b603052f8de6c25d511c56ef04422c8ab41f5d801eb73d3f\": rpc error: code = NotFound desc = could not find container \"bf3e865d5fed7576b603052f8de6c25d511c56ef04422c8ab41f5d801eb73d3f\": container with ID starting with bf3e865d5fed7576b603052f8de6c25d511c56ef04422c8ab41f5d801eb73d3f not found: ID does not exist" Mar 12 14:10:44.126331 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:44.126287 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_3f31c767-6746-4585-8144-952def904ca1/init-config-reloader/0.log" Mar 12 14:10:44.457322 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:44.457279 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-ttmnc_22f063c9-1f02-4784-95c6-b1d60a5bc9cb/node-exporter/0.log" Mar 12 14:10:44.484784 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:44.484766 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-ttmnc_22f063c9-1f02-4784-95c6-b1d60a5bc9cb/kube-rbac-proxy/0.log" Mar 12 14:10:44.507788 ip-10-0-142-111 
kubenswrapper[2570]: I0312 14:10:44.507774 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-ttmnc_22f063c9-1f02-4784-95c6-b1d60a5bc9cb/init-textfile/0.log" Mar 12 14:10:47.710849 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:47.710789 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-nssb5/perf-node-gather-daemonset-fcv4l"] Mar 12 14:10:47.711233 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:47.711096 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="01f84abc-5c43-48c5-aeb1-de0fa757cdee" containerName="node" Mar 12 14:10:47.711233 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:47.711113 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="01f84abc-5c43-48c5-aeb1-de0fa757cdee" containerName="node" Mar 12 14:10:47.711233 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:47.711136 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7f3dbcf5-eff6-459e-b48f-bec84949feb8" containerName="gather" Mar 12 14:10:47.711233 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:47.711142 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f3dbcf5-eff6-459e-b48f-bec84949feb8" containerName="gather" Mar 12 14:10:47.711233 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:47.711152 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7f3dbcf5-eff6-459e-b48f-bec84949feb8" containerName="copy" Mar 12 14:10:47.711233 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:47.711159 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f3dbcf5-eff6-459e-b48f-bec84949feb8" containerName="copy" Mar 12 14:10:47.711233 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:47.711205 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="01f84abc-5c43-48c5-aeb1-de0fa757cdee" containerName="node" Mar 12 14:10:47.711233 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:47.711212 2570 
memory_manager.go:356] "RemoveStaleState removing state" podUID="7f3dbcf5-eff6-459e-b48f-bec84949feb8" containerName="gather" Mar 12 14:10:47.711233 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:47.711218 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="7f3dbcf5-eff6-459e-b48f-bec84949feb8" containerName="copy" Mar 12 14:10:47.716358 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:47.716332 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nssb5/perf-node-gather-daemonset-fcv4l" Mar 12 14:10:47.718259 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:47.718230 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-nssb5\"/\"kube-root-ca.crt\"" Mar 12 14:10:47.718383 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:47.718280 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-nssb5\"/\"default-dockercfg-5x56g\"" Mar 12 14:10:47.718383 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:47.718286 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-nssb5\"/\"openshift-service-ca.crt\"" Mar 12 14:10:47.720660 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:47.720609 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-nssb5/perf-node-gather-daemonset-fcv4l"] Mar 12 14:10:47.729150 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:47.729123 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x5r2\" (UniqueName: \"kubernetes.io/projected/6ade0323-4cc2-497d-b568-72126d4a13e4-kube-api-access-6x5r2\") pod \"perf-node-gather-daemonset-fcv4l\" (UID: \"6ade0323-4cc2-497d-b568-72126d4a13e4\") " pod="openshift-must-gather-nssb5/perf-node-gather-daemonset-fcv4l" Mar 12 14:10:47.729261 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:47.729180 2570 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ade0323-4cc2-497d-b568-72126d4a13e4-lib-modules\") pod \"perf-node-gather-daemonset-fcv4l\" (UID: \"6ade0323-4cc2-497d-b568-72126d4a13e4\") " pod="openshift-must-gather-nssb5/perf-node-gather-daemonset-fcv4l" Mar 12 14:10:47.729306 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:47.729258 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/6ade0323-4cc2-497d-b568-72126d4a13e4-podres\") pod \"perf-node-gather-daemonset-fcv4l\" (UID: \"6ade0323-4cc2-497d-b568-72126d4a13e4\") " pod="openshift-must-gather-nssb5/perf-node-gather-daemonset-fcv4l" Mar 12 14:10:47.729343 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:47.729308 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/6ade0323-4cc2-497d-b568-72126d4a13e4-proc\") pod \"perf-node-gather-daemonset-fcv4l\" (UID: \"6ade0323-4cc2-497d-b568-72126d4a13e4\") " pod="openshift-must-gather-nssb5/perf-node-gather-daemonset-fcv4l" Mar 12 14:10:47.729404 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:47.729385 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6ade0323-4cc2-497d-b568-72126d4a13e4-sys\") pod \"perf-node-gather-daemonset-fcv4l\" (UID: \"6ade0323-4cc2-497d-b568-72126d4a13e4\") " pod="openshift-must-gather-nssb5/perf-node-gather-daemonset-fcv4l" Mar 12 14:10:47.830315 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:47.830288 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6ade0323-4cc2-497d-b568-72126d4a13e4-sys\") pod \"perf-node-gather-daemonset-fcv4l\" (UID: 
\"6ade0323-4cc2-497d-b568-72126d4a13e4\") " pod="openshift-must-gather-nssb5/perf-node-gather-daemonset-fcv4l" Mar 12 14:10:47.830424 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:47.830321 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6x5r2\" (UniqueName: \"kubernetes.io/projected/6ade0323-4cc2-497d-b568-72126d4a13e4-kube-api-access-6x5r2\") pod \"perf-node-gather-daemonset-fcv4l\" (UID: \"6ade0323-4cc2-497d-b568-72126d4a13e4\") " pod="openshift-must-gather-nssb5/perf-node-gather-daemonset-fcv4l" Mar 12 14:10:47.830424 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:47.830348 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ade0323-4cc2-497d-b568-72126d4a13e4-lib-modules\") pod \"perf-node-gather-daemonset-fcv4l\" (UID: \"6ade0323-4cc2-497d-b568-72126d4a13e4\") " pod="openshift-must-gather-nssb5/perf-node-gather-daemonset-fcv4l" Mar 12 14:10:47.830424 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:47.830378 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/6ade0323-4cc2-497d-b568-72126d4a13e4-podres\") pod \"perf-node-gather-daemonset-fcv4l\" (UID: \"6ade0323-4cc2-497d-b568-72126d4a13e4\") " pod="openshift-must-gather-nssb5/perf-node-gather-daemonset-fcv4l" Mar 12 14:10:47.830424 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:47.830408 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/6ade0323-4cc2-497d-b568-72126d4a13e4-proc\") pod \"perf-node-gather-daemonset-fcv4l\" (UID: \"6ade0323-4cc2-497d-b568-72126d4a13e4\") " pod="openshift-must-gather-nssb5/perf-node-gather-daemonset-fcv4l" Mar 12 14:10:47.830424 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:47.830416 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/6ade0323-4cc2-497d-b568-72126d4a13e4-sys\") pod \"perf-node-gather-daemonset-fcv4l\" (UID: \"6ade0323-4cc2-497d-b568-72126d4a13e4\") " pod="openshift-must-gather-nssb5/perf-node-gather-daemonset-fcv4l" Mar 12 14:10:47.830665 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:47.830479 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/6ade0323-4cc2-497d-b568-72126d4a13e4-proc\") pod \"perf-node-gather-daemonset-fcv4l\" (UID: \"6ade0323-4cc2-497d-b568-72126d4a13e4\") " pod="openshift-must-gather-nssb5/perf-node-gather-daemonset-fcv4l" Mar 12 14:10:47.830665 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:47.830506 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ade0323-4cc2-497d-b568-72126d4a13e4-lib-modules\") pod \"perf-node-gather-daemonset-fcv4l\" (UID: \"6ade0323-4cc2-497d-b568-72126d4a13e4\") " pod="openshift-must-gather-nssb5/perf-node-gather-daemonset-fcv4l" Mar 12 14:10:47.830665 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:47.830508 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/6ade0323-4cc2-497d-b568-72126d4a13e4-podres\") pod \"perf-node-gather-daemonset-fcv4l\" (UID: \"6ade0323-4cc2-497d-b568-72126d4a13e4\") " pod="openshift-must-gather-nssb5/perf-node-gather-daemonset-fcv4l" Mar 12 14:10:47.838667 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:47.838648 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x5r2\" (UniqueName: \"kubernetes.io/projected/6ade0323-4cc2-497d-b568-72126d4a13e4-kube-api-access-6x5r2\") pod \"perf-node-gather-daemonset-fcv4l\" (UID: \"6ade0323-4cc2-497d-b568-72126d4a13e4\") " pod="openshift-must-gather-nssb5/perf-node-gather-daemonset-fcv4l" Mar 12 14:10:48.029322 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:48.029256 2570 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nssb5/perf-node-gather-daemonset-fcv4l" Mar 12 14:10:48.165386 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:48.165364 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-nssb5/perf-node-gather-daemonset-fcv4l"] Mar 12 14:10:48.167902 ip-10-0-142-111 kubenswrapper[2570]: W0312 14:10:48.167872 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod6ade0323_4cc2_497d_b568_72126d4a13e4.slice/crio-746d23afb39a6ae6afd60e08c64c86b92ab220c2884d11641001bc3abe297e38 WatchSource:0}: Error finding container 746d23afb39a6ae6afd60e08c64c86b92ab220c2884d11641001bc3abe297e38: Status 404 returned error can't find the container with id 746d23afb39a6ae6afd60e08c64c86b92ab220c2884d11641001bc3abe297e38 Mar 12 14:10:48.502267 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:48.502227 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-xzph6_ec4cf77c-b3ee-4a56-a3b4-73324af3351d/dns/0.log" Mar 12 14:10:48.522579 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:48.522549 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-xzph6_ec4cf77c-b3ee-4a56-a3b4-73324af3351d/kube-rbac-proxy/0.log" Mar 12 14:10:48.593763 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:48.593739 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-z2zhm_e2915633-eccd-4769-8960-a86012fad6da/dns-node-resolver/0.log" Mar 12 14:10:49.081820 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:49.081794 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-rcndk_60c99a96-5455-4303-ab66-b21a59d9c105/node-ca/0.log" Mar 12 14:10:49.093213 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:49.093186 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-nssb5/perf-node-gather-daemonset-fcv4l" event={"ID":"6ade0323-4cc2-497d-b568-72126d4a13e4","Type":"ContainerStarted","Data":"5703ecd320bdeb41a5a42fc544c6751e70679b22891af5c5e49f3135af062fee"}
Mar 12 14:10:49.093358 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:49.093225 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nssb5/perf-node-gather-daemonset-fcv4l" event={"ID":"6ade0323-4cc2-497d-b568-72126d4a13e4","Type":"ContainerStarted","Data":"746d23afb39a6ae6afd60e08c64c86b92ab220c2884d11641001bc3abe297e38"}
Mar 12 14:10:49.093358 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:49.093324 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-must-gather-nssb5/perf-node-gather-daemonset-fcv4l"
Mar 12 14:10:49.108921 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:49.108875 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-nssb5/perf-node-gather-daemonset-fcv4l" podStartSLOduration=2.108858874 podStartE2EDuration="2.108858874s" podCreationTimestamp="2026-03-12 14:10:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:10:49.108475663 +0000 UTC m=+1929.883939306" watchObservedRunningTime="2026-03-12 14:10:49.108858874 +0000 UTC m=+1929.884322518"
Mar 12 14:10:50.140656 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:50.140605 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-d88pt_a9dd6e04-4e16-4606-8ccd-f7892664a9fa/serve-healthcheck-canary/0.log"
Mar 12 14:10:50.644574 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:50.644551 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-gz92z_65fec5eb-81fa-453b-bdc2-6972e50122f8/kube-rbac-proxy/0.log"
Mar 12 14:10:50.665347 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:50.665321 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-gz92z_65fec5eb-81fa-453b-bdc2-6972e50122f8/exporter/0.log"
Mar 12 14:10:50.685779 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:50.685761 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-gz92z_65fec5eb-81fa-453b-bdc2-6972e50122f8/extractor/0.log"
Mar 12 14:10:52.433816 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:52.433787 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-jobset-operator_jobset-controller-manager-84bcf99f68-sf9ff_4477d716-bfdf-4e28-ba49-5ed0459e9647/manager/0.log"
Mar 12 14:10:52.458860 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:52.458832 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-jobset-operator_jobset-operator-747c5859c7-m6glh_68b38009-a556-4452-a6df-2a959af71057/jobset-operator/0.log"
Mar 12 14:10:55.106490 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:55.106458 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-nssb5/perf-node-gather-daemonset-fcv4l"
Mar 12 14:10:57.064130 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:57.064066 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-27rkj_9f2e052b-174e-48b3-b2f3-0ccb4fde2d95/kube-multus/0.log"
Mar 12 14:10:57.339699 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:57.339641 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-qbtlm_7ac5590a-ef07-4cda-8357-78aae27ac5e8/kube-multus-additional-cni-plugins/0.log"
Mar 12 14:10:57.362941 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:57.362918 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-qbtlm_7ac5590a-ef07-4cda-8357-78aae27ac5e8/egress-router-binary-copy/0.log"
Mar 12 14:10:57.382419 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:57.382399 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-qbtlm_7ac5590a-ef07-4cda-8357-78aae27ac5e8/cni-plugins/0.log"
Mar 12 14:10:57.403134 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:57.403114 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-qbtlm_7ac5590a-ef07-4cda-8357-78aae27ac5e8/bond-cni-plugin/0.log"
Mar 12 14:10:57.425425 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:57.425410 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-qbtlm_7ac5590a-ef07-4cda-8357-78aae27ac5e8/routeoverride-cni/0.log"
Mar 12 14:10:57.447588 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:57.447563 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-qbtlm_7ac5590a-ef07-4cda-8357-78aae27ac5e8/whereabouts-cni-bincopy/0.log"
Mar 12 14:10:57.469322 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:57.469305 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-qbtlm_7ac5590a-ef07-4cda-8357-78aae27ac5e8/whereabouts-cni/0.log"
Mar 12 14:10:57.776751 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:57.776692 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-md2rq_e4b2741d-b458-4ac7-8509-5475bd034c73/network-metrics-daemon/0.log"
Mar 12 14:10:57.795526 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:57.795506 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-md2rq_e4b2741d-b458-4ac7-8509-5475bd034c73/kube-rbac-proxy/0.log"
Mar 12 14:10:58.931339 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:58.931312 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h9fnd_5acc1851-6633-49b2-88c3-177e3bea26af/ovn-controller/0.log"
Mar 12 14:10:58.958037 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:58.958012 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h9fnd_5acc1851-6633-49b2-88c3-177e3bea26af/ovn-acl-logging/0.log"
Mar 12 14:10:58.975828 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:58.975806 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h9fnd_5acc1851-6633-49b2-88c3-177e3bea26af/kube-rbac-proxy-node/0.log"
Mar 12 14:10:58.998503 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:58.998471 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h9fnd_5acc1851-6633-49b2-88c3-177e3bea26af/kube-rbac-proxy-ovn-metrics/0.log"
Mar 12 14:10:59.018720 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:59.018699 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h9fnd_5acc1851-6633-49b2-88c3-177e3bea26af/northd/0.log"
Mar 12 14:10:59.037381 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:59.037336 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h9fnd_5acc1851-6633-49b2-88c3-177e3bea26af/nbdb/0.log"
Mar 12 14:10:59.056763 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:59.056746 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h9fnd_5acc1851-6633-49b2-88c3-177e3bea26af/sbdb/0.log"
Mar 12 14:10:59.160105 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:10:59.160088 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h9fnd_5acc1851-6633-49b2-88c3-177e3bea26af/ovnkube-controller/0.log"
Mar 12 14:11:00.369269 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:11:00.369243 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-target-6hlfq_6ae56213-c71d-4f84-b4f2-b7874b87ad3d/network-check-target-container/0.log"
Mar 12 14:11:01.366051 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:11:01.366028 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_iptables-alerter-w2hbj_a778e2cf-6292-41a8-a8e6-44ba43631c82/iptables-alerter/0.log"
Mar 12 14:11:02.054473 ip-10-0-142-111 kubenswrapper[2570]: I0312 14:11:02.054445 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-qztxh_0c00809b-e1dd-43f1-a58f-fc0a53b67729/tuned/0.log"