Feb 17 12:44:03.854013 ip-10-0-132-113 systemd[1]: kubelet.service: Failed to load environment files: No such file or directory
Feb 17 12:44:03.854024 ip-10-0-132-113 systemd[1]: kubelet.service: Failed to run 'start-pre' task: No such file or directory
Feb 17 12:44:03.854031 ip-10-0-132-113 systemd[1]: kubelet.service: Failed with result 'resources'.
Feb 17 12:44:03.854291 ip-10-0-132-113 systemd[1]: Failed to start Kubernetes Kubelet.
Feb 17 12:44:13.910310 ip-10-0-132-113 systemd[1]: kubelet.service: Failed to schedule restart job: Unit crio.service not found.
Feb 17 12:44:13.910326 ip-10-0-132-113 systemd[1]: kubelet.service: Failed with result 'resources'.
-- Boot 649e174f2a1e494e8c38392de51ef5ec --
Feb 17 12:46:08.900809 ip-10-0-132-113 systemd[1]: Starting Kubernetes Kubelet...
Feb 17 12:46:09.362542 ip-10-0-132-113 kubenswrapper[2562]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 17 12:46:09.362542 ip-10-0-132-113 kubenswrapper[2562]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 17 12:46:09.362542 ip-10-0-132-113 kubenswrapper[2562]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 17 12:46:09.362542 ip-10-0-132-113 kubenswrapper[2562]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Feb 17 12:46:09.362542 ip-10-0-132-113 kubenswrapper[2562]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
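The five deprecation warnings above all point at the same fix: move those flags into the KubeletConfiguration file that --config already names (the flag dump further down shows --config="/etc/kubernetes/kubelet.conf"). Below is a minimal sketch of what that stanza could look like. The values are copied from the flag dump; the field names are the upstream kubelet.config.k8s.io/v1beta1 equivalents; the unix:// scheme on the endpoint is this sketch's assumption, not something the log confirms.

    # Hypothetical excerpt for /etc/kubernetes/kubelet.conf -- a sketch, not this node's actual file.
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: "unix:///var/run/crio/crio.sock"      # replaces --container-runtime-endpoint
    volumePluginDir: "/etc/kubernetes/kubelet-plugins/volume/exec"  # replaces --volume-plugin-dir
    systemReserved:                                                 # replaces --system-reserved
      cpu: "500m"
      ephemeral-storage: "1Gi"
      memory: "1Gi"
    # --minimum-container-ttl-duration has no config-file field; per its warning, the
    # intent is expressed through evictionHard / evictionSoft instead.
    # --pod-infra-container-image goes away entirely in 1.35: the image garbage
    # collector learns the sandbox image from the CRI runtime (on CRI-O nodes this is
    # typically pause_image in crio.conf).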
Feb 17 12:46:09.364284 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.364148 2562 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 17 12:46:09.370160 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370138 2562 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Feb 17 12:46:09.370160 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370156 2562 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Feb 17 12:46:09.370160 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370160 2562 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Feb 17 12:46:09.370160 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370163 2562 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Feb 17 12:46:09.370322 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370167 2562 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Feb 17 12:46:09.370322 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370170 2562 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 17 12:46:09.370322 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370174 2562 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Feb 17 12:46:09.370322 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370177 2562 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Feb 17 12:46:09.370322 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370179 2562 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Feb 17 12:46:09.370322 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370182 2562 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Feb 17 12:46:09.370322 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370185 2562 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Feb 17 12:46:09.370322 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370191 2562 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Feb 17 12:46:09.370322 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370195 2562 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 17 12:46:09.370322 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370198 2562 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Feb 17 12:46:09.370322 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370200 2562 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Feb 17 12:46:09.370322 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370203 2562 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Feb 17 12:46:09.370322 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370206 2562 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Feb 17 12:46:09.370322 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370209 2562 feature_gate.go:328] unrecognized feature gate: DualReplica
Feb 17 12:46:09.370322 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370212 2562 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Feb 17 12:46:09.370322 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370215 2562 feature_gate.go:328] unrecognized feature gate: PinnedImages
Feb 17 12:46:09.370322 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370217 2562 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Feb 17 12:46:09.370322 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370220 2562 feature_gate.go:328] unrecognized feature gate: NewOLM
Feb 17 12:46:09.370322 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370222 2562 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Feb 17 12:46:09.370322 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370226 2562 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Feb 17 12:46:09.370825 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370229 2562 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Feb 17 12:46:09.370825 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370231 2562 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Feb 17 12:46:09.370825 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370234 2562 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Feb 17 12:46:09.370825 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370237 2562 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Feb 17 12:46:09.370825 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370240 2562 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Feb 17 12:46:09.370825 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370242 2562 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Feb 17 12:46:09.370825 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370245 2562 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Feb 17 12:46:09.370825 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370248 2562 feature_gate.go:328] unrecognized feature gate: Example2
Feb 17 12:46:09.370825 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370250 2562 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 17 12:46:09.370825 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370253 2562 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Feb 17 12:46:09.370825 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370256 2562 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Feb 17 12:46:09.370825 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370260 2562 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 17 12:46:09.370825 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370264 2562 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 17 12:46:09.370825 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370268 2562 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Feb 17 12:46:09.370825 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370271 2562 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 17 12:46:09.370825 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370274 2562 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Feb 17 12:46:09.370825 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370277 2562 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Feb 17 12:46:09.370825 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370279 2562 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Feb 17 12:46:09.370825 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370282 2562 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Feb 17 12:46:09.371320 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370285 2562 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Feb 17 12:46:09.371320 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370287 2562 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Feb 17 12:46:09.371320 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370290 2562 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Feb 17 12:46:09.371320 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370292 2562 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 17 12:46:09.371320 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370295 2562 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Feb 17 12:46:09.371320 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370297 2562 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Feb 17 12:46:09.371320 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370300 2562 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Feb 17 12:46:09.371320 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370304 2562 feature_gate.go:328] unrecognized feature gate: OVNObservability
Feb 17 12:46:09.371320 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370306 2562 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Feb 17 12:46:09.371320 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370309 2562 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Feb 17 12:46:09.371320 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370312 2562 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Feb 17 12:46:09.371320 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370314 2562 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Feb 17 12:46:09.371320 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370317 2562 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Feb 17 12:46:09.371320 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370320 2562 feature_gate.go:328] unrecognized feature gate: Example
Feb 17 12:46:09.371320 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370322 2562 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Feb 17 12:46:09.371320 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370325 2562 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Feb 17 12:46:09.371320 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370328 2562 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Feb 17 12:46:09.371320 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370331 2562 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Feb 17 12:46:09.371320 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370333 2562 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Feb 17 12:46:09.371320 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370336 2562 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Feb 17 12:46:09.371818 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370339 2562 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Feb 17 12:46:09.371818 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370342 2562 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 17 12:46:09.371818 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370344 2562 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Feb 17 12:46:09.371818 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370346 2562 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Feb 17 12:46:09.371818 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370349 2562 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Feb 17 12:46:09.371818 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370352 2562 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Feb 17 12:46:09.371818 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370356 2562 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Feb 17 12:46:09.371818 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370359 2562 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Feb 17 12:46:09.371818 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370362 2562 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Feb 17 12:46:09.371818 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370364 2562 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Feb 17 12:46:09.371818 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370367 2562 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Feb 17 12:46:09.371818 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370370 2562 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Feb 17 12:46:09.371818 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370372 2562 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 17 12:46:09.371818 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370375 2562 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Feb 17 12:46:09.371818 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370378 2562 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Feb 17 12:46:09.371818 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370382 2562 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Feb 17 12:46:09.371818 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370384 2562 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Feb 17 12:46:09.371818 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370387 2562 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Feb 17 12:46:09.371818 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370390 2562 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Feb 17 12:46:09.371818 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370392 2562 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 17 12:46:09.372298 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370395 2562 feature_gate.go:328] unrecognized feature gate: SignatureStores
Feb 17 12:46:09.372298 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370398 2562 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Feb 17 12:46:09.372298 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.370401 2562 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Feb 17 12:46:09.374262 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372333 2562 flags.go:64] FLAG: --address="0.0.0.0"
Feb 17 12:46:09.374262 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372343 2562 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 17 12:46:09.374262 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372350 2562 flags.go:64] FLAG: --anonymous-auth="true"
Feb 17 12:46:09.374262 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372355 2562 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 17 12:46:09.374262 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372359 2562 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 17 12:46:09.374262 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372363 2562 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 17 12:46:09.374262 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372367 2562 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 17 12:46:09.374262 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372371 2562 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 17 12:46:09.374262 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372374 2562 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 17 12:46:09.374262 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372377 2562 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 17 12:46:09.374793 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372381 2562 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 17 12:46:09.374793 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372384 2562 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 17 12:46:09.374793 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372387 2562 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 17 12:46:09.374793 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372390 2562 flags.go:64] FLAG: --cgroup-root=""
Feb 17 12:46:09.374793 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372393 2562 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 17 12:46:09.374793 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372396 2562 flags.go:64] FLAG: --client-ca-file=""
Feb 17 12:46:09.374793 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372399 2562 flags.go:64] FLAG: --cloud-config=""
Feb 17 12:46:09.374793 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372401 2562 flags.go:64] FLAG: --cloud-provider="external"
Feb 17 12:46:09.374793 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372404 2562 flags.go:64] FLAG: --cluster-dns="[]"
--cluster-domain="" Feb 17 12:46:09.374793 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372411 2562 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 17 12:46:09.374793 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372414 2562 flags.go:64] FLAG: --config-dir="" Feb 17 12:46:09.374793 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372417 2562 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 17 12:46:09.374793 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372421 2562 flags.go:64] FLAG: --container-log-max-files="5" Feb 17 12:46:09.374793 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372424 2562 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 17 12:46:09.374793 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372428 2562 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 17 12:46:09.374793 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372431 2562 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 17 12:46:09.374793 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372434 2562 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 17 12:46:09.374793 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372438 2562 flags.go:64] FLAG: --contention-profiling="false" Feb 17 12:46:09.374793 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372440 2562 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 17 12:46:09.374793 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372443 2562 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 17 12:46:09.374793 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372446 2562 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 17 12:46:09.374793 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372449 2562 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 17 12:46:09.374793 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372454 2562 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 17 12:46:09.374793 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372457 2562 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 17 12:46:09.375387 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372460 2562 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 17 12:46:09.375387 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372462 2562 flags.go:64] FLAG: --enable-load-reader="false" Feb 17 12:46:09.375387 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372483 2562 flags.go:64] FLAG: --enable-server="true" Feb 17 12:46:09.375387 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372488 2562 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 17 12:46:09.375387 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372492 2562 flags.go:64] FLAG: --event-burst="100" Feb 17 12:46:09.375387 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372495 2562 flags.go:64] FLAG: --event-qps="50" Feb 17 12:46:09.375387 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372498 2562 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 17 12:46:09.375387 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372501 2562 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 17 12:46:09.375387 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372504 2562 flags.go:64] FLAG: --eviction-hard="" Feb 17 12:46:09.375387 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372508 2562 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 17 12:46:09.375387 ip-10-0-132-113 kubenswrapper[2562]: 
I0217 12:46:09.372511 2562 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 17 12:46:09.375387 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372514 2562 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 17 12:46:09.375387 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372517 2562 flags.go:64] FLAG: --eviction-soft="" Feb 17 12:46:09.375387 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372520 2562 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 17 12:46:09.375387 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372523 2562 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 17 12:46:09.375387 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372526 2562 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 17 12:46:09.375387 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372529 2562 flags.go:64] FLAG: --experimental-mounter-path="" Feb 17 12:46:09.375387 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372532 2562 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 17 12:46:09.375387 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372535 2562 flags.go:64] FLAG: --fail-swap-on="true" Feb 17 12:46:09.375387 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372538 2562 flags.go:64] FLAG: --feature-gates="" Feb 17 12:46:09.375387 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372542 2562 flags.go:64] FLAG: --file-check-frequency="20s" Feb 17 12:46:09.375387 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372544 2562 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 17 12:46:09.375387 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372547 2562 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 17 12:46:09.375387 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372550 2562 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 17 12:46:09.375387 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372553 2562 flags.go:64] FLAG: --healthz-port="10248" Feb 17 12:46:09.375387 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372556 2562 flags.go:64] FLAG: --help="false" Feb 17 12:46:09.376065 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372559 2562 flags.go:64] FLAG: --hostname-override="ip-10-0-132-113.ec2.internal" Feb 17 12:46:09.376065 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372563 2562 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 17 12:46:09.376065 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372566 2562 flags.go:64] FLAG: --http-check-frequency="20s" Feb 17 12:46:09.376065 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372568 2562 flags.go:64] FLAG: --image-credential-provider-bin-dir="/usr/libexec/kubelet-image-credential-provider-plugins" Feb 17 12:46:09.376065 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372572 2562 flags.go:64] FLAG: --image-credential-provider-config="/etc/kubernetes/credential-providers/ecr-credential-provider.yaml" Feb 17 12:46:09.376065 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372575 2562 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 17 12:46:09.376065 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372578 2562 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 17 12:46:09.376065 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372580 2562 flags.go:64] FLAG: --image-service-endpoint="" Feb 17 12:46:09.376065 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372584 2562 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 17 12:46:09.376065 ip-10-0-132-113 
kubenswrapper[2562]: I0217 12:46:09.372587 2562 flags.go:64] FLAG: --kube-api-burst="100" Feb 17 12:46:09.376065 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372594 2562 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 17 12:46:09.376065 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372597 2562 flags.go:64] FLAG: --kube-api-qps="50" Feb 17 12:46:09.376065 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372600 2562 flags.go:64] FLAG: --kube-reserved="" Feb 17 12:46:09.376065 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372603 2562 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 17 12:46:09.376065 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372606 2562 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 17 12:46:09.376065 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372609 2562 flags.go:64] FLAG: --kubelet-cgroups="" Feb 17 12:46:09.376065 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372611 2562 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 17 12:46:09.376065 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372614 2562 flags.go:64] FLAG: --lock-file="" Feb 17 12:46:09.376065 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372617 2562 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 17 12:46:09.376065 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372621 2562 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 17 12:46:09.376065 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372624 2562 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 17 12:46:09.376065 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372629 2562 flags.go:64] FLAG: --log-json-split-stream="false" Feb 17 12:46:09.376065 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372632 2562 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 17 12:46:09.376626 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372635 2562 flags.go:64] FLAG: --log-text-split-stream="false" Feb 17 12:46:09.376626 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372638 2562 flags.go:64] FLAG: --logging-format="text" Feb 17 12:46:09.376626 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372641 2562 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 17 12:46:09.376626 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372644 2562 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 17 12:46:09.376626 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372647 2562 flags.go:64] FLAG: --manifest-url="" Feb 17 12:46:09.376626 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372650 2562 flags.go:64] FLAG: --manifest-url-header="" Feb 17 12:46:09.376626 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372654 2562 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 17 12:46:09.376626 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372657 2562 flags.go:64] FLAG: --max-open-files="1000000" Feb 17 12:46:09.376626 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372661 2562 flags.go:64] FLAG: --max-pods="110" Feb 17 12:46:09.376626 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372664 2562 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 17 12:46:09.376626 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372667 2562 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 17 12:46:09.376626 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372670 2562 flags.go:64] FLAG: --memory-manager-policy="None" Feb 17 12:46:09.376626 
ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372673 2562 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 17 12:46:09.376626 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372676 2562 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 17 12:46:09.376626 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372679 2562 flags.go:64] FLAG: --node-ip="0.0.0.0" Feb 17 12:46:09.376626 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372682 2562 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhel" Feb 17 12:46:09.376626 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372689 2562 flags.go:64] FLAG: --node-status-max-images="50" Feb 17 12:46:09.376626 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372692 2562 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 17 12:46:09.376626 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372695 2562 flags.go:64] FLAG: --oom-score-adj="-999" Feb 17 12:46:09.376626 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372700 2562 flags.go:64] FLAG: --pod-cidr="" Feb 17 12:46:09.376626 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372703 2562 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca3deca44439f185f4632d34b1d894f5fa75cccf603cfd634a130c5928811e73" Feb 17 12:46:09.376626 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372709 2562 flags.go:64] FLAG: --pod-manifest-path="" Feb 17 12:46:09.376626 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372712 2562 flags.go:64] FLAG: --pod-max-pids="-1" Feb 17 12:46:09.376626 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372715 2562 flags.go:64] FLAG: --pods-per-core="0" Feb 17 12:46:09.377248 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372717 2562 flags.go:64] FLAG: --port="10250" Feb 17 12:46:09.377248 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372720 2562 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 17 12:46:09.377248 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372723 2562 flags.go:64] FLAG: --provider-id="aws:///us-east-1a/i-02f39a5409b226505" Feb 17 12:46:09.377248 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372726 2562 flags.go:64] FLAG: --qos-reserved="" Feb 17 12:46:09.377248 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372730 2562 flags.go:64] FLAG: --read-only-port="10255" Feb 17 12:46:09.377248 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372733 2562 flags.go:64] FLAG: --register-node="true" Feb 17 12:46:09.377248 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372735 2562 flags.go:64] FLAG: --register-schedulable="true" Feb 17 12:46:09.377248 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372738 2562 flags.go:64] FLAG: --register-with-taints="" Feb 17 12:46:09.377248 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372742 2562 flags.go:64] FLAG: --registry-burst="10" Feb 17 12:46:09.377248 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372744 2562 flags.go:64] FLAG: --registry-qps="5" Feb 17 12:46:09.377248 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372747 2562 flags.go:64] FLAG: --reserved-cpus="" Feb 17 12:46:09.377248 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372750 2562 flags.go:64] FLAG: --reserved-memory="" Feb 17 12:46:09.377248 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372754 2562 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 17 12:46:09.377248 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372757 2562 flags.go:64] 
FLAG: --root-dir="/var/lib/kubelet" Feb 17 12:46:09.377248 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372760 2562 flags.go:64] FLAG: --rotate-certificates="false" Feb 17 12:46:09.377248 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372763 2562 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 17 12:46:09.377248 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372766 2562 flags.go:64] FLAG: --runonce="false" Feb 17 12:46:09.377248 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372769 2562 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 17 12:46:09.377248 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372772 2562 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 17 12:46:09.377248 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372774 2562 flags.go:64] FLAG: --seccomp-default="false" Feb 17 12:46:09.377248 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372777 2562 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 17 12:46:09.377248 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372780 2562 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 17 12:46:09.377248 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372783 2562 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 17 12:46:09.377248 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372786 2562 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 17 12:46:09.377248 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372789 2562 flags.go:64] FLAG: --storage-driver-password="root" Feb 17 12:46:09.377248 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372792 2562 flags.go:64] FLAG: --storage-driver-secure="false" Feb 17 12:46:09.377890 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372795 2562 flags.go:64] FLAG: --storage-driver-table="stats" Feb 17 12:46:09.377890 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372799 2562 flags.go:64] FLAG: --storage-driver-user="root" Feb 17 12:46:09.377890 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372804 2562 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 17 12:46:09.377890 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372807 2562 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 17 12:46:09.377890 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372810 2562 flags.go:64] FLAG: --system-cgroups="" Feb 17 12:46:09.377890 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372813 2562 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Feb 17 12:46:09.377890 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372819 2562 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 17 12:46:09.377890 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372822 2562 flags.go:64] FLAG: --tls-cert-file="" Feb 17 12:46:09.377890 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372824 2562 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 17 12:46:09.377890 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372828 2562 flags.go:64] FLAG: --tls-min-version="" Feb 17 12:46:09.377890 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372833 2562 flags.go:64] FLAG: --tls-private-key-file="" Feb 17 12:46:09.377890 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372836 2562 flags.go:64] FLAG: --topology-manager-policy="none" Feb 17 12:46:09.377890 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372838 2562 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 17 12:46:09.377890 ip-10-0-132-113 kubenswrapper[2562]: I0217 
Feb 17 12:46:09.377890 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372841 2562 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 17 12:46:09.377890 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372844 2562 flags.go:64] FLAG: --v="2"
Feb 17 12:46:09.377890 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372849 2562 flags.go:64] FLAG: --version="false"
Feb 17 12:46:09.377890 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372853 2562 flags.go:64] FLAG: --vmodule=""
Feb 17 12:46:09.377890 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372857 2562 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 17 12:46:09.377890 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.372860 2562 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 17 12:46:09.379958 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.373168 2562 feature_gate.go:328] unrecognized feature gate: OVNObservability Feb 17 12:46:09.379958 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.373171 2562 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Feb 17 12:46:09.379958 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.373174 2562 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Feb 17 12:46:09.379958 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.373177 2562 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Feb 17 12:46:09.379958 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.373180 2562 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Feb 17 12:46:09.380448 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.373182 2562 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Feb 17 12:46:09.380448 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.373185 2562 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 12:46:09.380448 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.373191 2562 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Feb 17 12:46:09.380448 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.380281 2562 server.go:530] "Kubelet version" kubeletVersion="v1.33.6" Feb 17 12:46:09.380448 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.380297 2562 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 17 12:46:09.380448 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.380345 2562 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 12:46:09.380448 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.380350 2562 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Feb 17 12:46:09.380448 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.380353 2562 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Feb 17 12:46:09.380448 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.380356 2562 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Feb 17 12:46:09.380448 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.380359 2562 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Feb 17 12:46:09.380448 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.380362 2562 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 12:46:09.380448 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.380365 2562 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Feb 17 12:46:09.380448 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.380368 2562 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Feb 17 12:46:09.380448 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.380371 2562 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Feb 17 12:46:09.380448 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:09.380373 2562 feature_gate.go:328] 
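The "feature gates: {map[...]}" line above is the effective gate set the kubelet actually registered. A minimal sketch of pulling that map out of a journal line for auditing, assuming only plain-text parsing (the sample line is truncated from the log; nothing here is a kubelet API):

import re

def parse_feature_gates(line: str) -> dict:
    # Extract the body of "feature gates: {map[Name:bool Name:bool ...]}".
    body = re.search(r'feature gates: \{map\[(.*)\]\}', line)
    if not body:
        return {}
    gates = {}
    for pair in body.group(1).split():
        name, _, value = pair.partition(':')
        gates[name] = (value == 'true')
    return gates

# Truncated sample taken from the entry above.
LINE = ('feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false '
        'ImageVolume:true KMSv1:true NodeSwap:false]}')
print(parse_feature_gates(LINE))
# {'DynamicResourceAllocation': False, 'EventedPLEG': False, 'ImageVolume': True, ...}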
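To confirm that the elided repeats really are the same gate list and not new warnings, a quick sketch that counts occurrences per gate in a saved journal (the file name is illustrative, e.g. from `journalctl -u kubelet > kubelet.log`):

import re
from collections import Counter

pattern = re.compile(r'unrecognized feature gate: (\S+)')

with open('kubelet.log', encoding='utf-8') as fh:
    counts = Counter(m.group(1) for line in fh if (m := pattern.search(line)))

# Every gate appearing the same number of times (three per startup pass here)
# indicates verbatim repetition rather than distinct problems.
for gate, n in counts.most_common(10):
    print(f'{n:3d}  {gate}')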
Feb 17 12:46:09.384637 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.381686 2562 server.go:962] "Client rotation is on, will bootstrap in background"
Feb 17 12:46:09.384997 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.383645 2562 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Feb 17 12:46:09.384997 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.384559 2562 server.go:1019] "Starting client certificate rotation"
Feb 17 12:46:09.384997 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.384654 2562 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet"
Feb 17 12:46:09.384997 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.384688 2562 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Feb 17 12:46:09.410177 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.410160 2562 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 17 12:46:09.412011 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.411993 2562 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 17 12:46:09.426898 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.426879 2562 log.go:25] "Validated CRI v1 runtime API"
Feb 17 12:46:09.432831 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.432815 2562 log.go:25] "Validated CRI v1 image API"
Feb 17 12:46:09.434098 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.434075 2562 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 17 12:46:09.438483 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.438451 2562 fs.go:135] Filesystem UUIDs: map[49b916e9-ae9e-40f3-a9ed-82113eced5af:/dev/nvme0n1p4 6a3a750f-880c-4e7d-92e6-72943f23080f:/dev/nvme0n1p3 7B77-95E7:/dev/nvme0n1p2]
Feb 17 12:46:09.438558 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.438483 2562 fs.go:136] Filesystem partitions: map[/dev/nvme0n1p3:{mountpoint:/boot major:259 minor:3 fsType:ext4 blockSize:0} /dev/nvme0n1p4:{mountpoint:/var major:259 minor:4 fsType:xfs blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}]
Feb 17 12:46:09.441389 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.441371 2562 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Feb 17 12:46:09.443542 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.443410 2562 manager.go:217] Machine: {Timestamp:2026-02-17 12:46:09.44209617 +0000 UTC m=+0.416859556 CPUVendorID:GenuineIntel NumCores:8 NumPhysicalCores:4 NumSockets:1 CpuFrequency:3094940 MemoryCapacity:33164496896 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ec2c76a6d1a2ef9cf273d1ded84828fc SystemUUID:ec2c76a6-d1a2-ef9c-f273-d1ded84828fc BootID:649e174f-2a1e-494e-8c38-392de51ef5ec Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6632902656 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/nvme0n1p4 DeviceMajor:259 DeviceMinor:4 Capacity:128243970048 Type:vfs Inodes:62651840 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6090752 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16582250496 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/nvme0n1p3 DeviceMajor:259 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16582246400 Type:vfs Inodes:4048400 HasInodes:true}] DiskMap:map[259:0:{Name:nvme0n1 Major:259 Minor:0 Size:128849018880 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:02:47:65:6b:bb:5b Speed:0 Mtu:9001} {Name:ens5 MacAddress:02:47:65:6b:bb:5b Speed:0 Mtu:9001} {Name:ovs-system MacAddress:0e:e3:00:a0:5f:6a Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33164496896 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 4] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:1 Threads:[1 5] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:2 Threads:[2 6] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:3 Threads:[3 7] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:}] Caches:[{Id:0 Size:37486592 Type:Unified Level:3}] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 17 12:46:09.443542 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.443535 2562 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 17 12:46:09.443636 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.443608 2562 manager.go:233] Version: {KernelVersion:5.14.0-570.86.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20260204-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 17 12:46:09.445201 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.445178 2562 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 17 12:46:09.445326 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.445203 2562 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-10-0-132-113.ec2.internal","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 17 12:46:09.445371 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.445335 2562 topology_manager.go:138] "Creating topology manager with none policy"
Feb 17 12:46:09.445371 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.445343 2562 container_manager_linux.go:306] "Creating device plugin manager"
Feb 17 12:46:09.445371 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.445350 2562 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 17 12:46:09.446151 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.446141 2562 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 17 12:46:09.447397 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.447387 2562 state_mem.go:36] "Initialized new in-memory state store"
Feb 17 12:46:09.447534 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.447525 2562 server.go:1267] "Using root directory" path="/var/lib/kubelet"
Feb 17 12:46:09.450310 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.450299 2562 kubelet.go:491] "Attempting to sync node with API server"
Feb 17 12:46:09.450343 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.450318 2562 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 17 12:46:09.450343 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.450330 2562 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 17 12:46:09.450343 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.450339 2562 kubelet.go:397] "Adding apiserver pod source"
Feb 17 12:46:09.450476 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.450351 2562 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 17 12:46:09.451613 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.451598 2562 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Feb 17 12:46:09.451613 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.451616 2562 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Feb 17 12:46:09.454358 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.454342 2562 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.9-2.rhaos4.20.gitb9ac835.el9" apiVersion="v1"
Feb 17 12:46:09.456570 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.456555 2562 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Feb 17 12:46:09.458634 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.458620 2562 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 17 12:46:09.458690 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.458638 2562 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 17 12:46:09.458690 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.458643 2562 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 17 12:46:09.458690 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.458656 2562 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 17 12:46:09.458690 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.458662 2562 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 17 12:46:09.458690 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.458668 2562 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 17 12:46:09.458690 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.458674 2562 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 17 12:46:09.458690 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.458680 2562 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 17 12:46:09.458690 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.458686 2562 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 17 12:46:09.458690 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.458692 2562 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 17 12:46:09.458965 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.458704 2562 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 17 12:46:09.459174 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.459164 2562 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 17 12:46:09.460010 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.460000 2562 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 17 12:46:09.460049 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.460012 2562 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Feb 17 12:46:09.462372 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.462353 2562 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-132-113.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 12:46:09.463060 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.463047 2562 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Feb 17 12:46:09.463112 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:09.463053 2562 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Feb 17 12:46:09.463112 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:09.463052 2562 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-132-113.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Feb 17 12:46:09.463112 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.463089 2562 server.go:1295] "Started kubelet"
Feb 17 12:46:09.463221 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.463154 2562 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Feb 17 12:46:09.463249 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.463192 2562 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 17 12:46:09.463279 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.463251 2562 server_v1.go:47] "podresources" method="list" useActivePods=true
Feb 17 12:46:09.463784 ip-10-0-132-113 systemd[1]: Started Kubernetes Kubelet.
Feb 17 12:46:09.464387 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.464301 2562 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 17 12:46:09.465861 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.465848 2562 server.go:317] "Adding debug handlers to kubelet server"
Feb 17 12:46:09.470803 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.470785 2562 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving"
Feb 17 12:46:09.471824 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.471807 2562 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 17 12:46:09.473868 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.472637 2562 volume_manager.go:295] "The desired_state_of_world populator starts"
Feb 17 12:46:09.474180 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.474169 2562 volume_manager.go:297] "Starting Kubelet Volume Manager"
Feb 17 12:46:09.474279 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:09.474230 2562 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-113.ec2.internal\" not found"
Feb 17 12:46:09.474339 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.472665 2562 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Feb 17 12:46:09.474435 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.474420 2562 reconstruct.go:97] "Volume reconstruction finished"
Feb 17 12:46:09.474513 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.474446 2562 reconciler.go:26] "Reconciler: start to sync state"
Feb 17 12:46:09.474702 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.474687 2562 factory.go:55] Registering systemd factory
Feb 17 12:46:09.474809 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.474799 2562 factory.go:223] Registration of the systemd container factory successfully
Feb 17 12:46:09.475005 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:09.474989 2562 kubelet.go:1618] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Feb 17 12:46:09.475312 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.475291 2562 factory.go:153] Registering CRI-O factory
Feb 17 12:46:09.475385 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.475323 2562 factory.go:223] Registration of the crio container factory successfully
Feb 17 12:46:09.475385 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.475372 2562 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 17 12:46:09.475492 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.475394 2562 factory.go:103] Registering Raw factory
Feb 17 12:46:09.475492 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.475408 2562 manager.go:1196] Started watching for new ooms in manager
Feb 17 12:46:09.476099 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.476083 2562 manager.go:319] Starting recovery of all containers
Feb 17 12:46:09.479886 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:09.479441 2562 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Feb 17 12:46:09.479886 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:09.479510 2562 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-132-113.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Feb 17 12:46:09.480652 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:09.479574 2562 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-132-113.ec2.internal.18950967b4ff3eb4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-132-113.ec2.internal,UID:ip-10-0-132-113.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-10-0-132-113.ec2.internal,},FirstTimestamp:2026-02-17 12:46:09.463066292 +0000 UTC m=+0.437829678,LastTimestamp:2026-02-17 12:46:09.463066292 +0000 UTC m=+0.437829678,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-132-113.ec2.internal,}"
Feb 17 12:46:09.484008 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.483864 2562 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-7f62c"
Feb 17 12:46:09.486256 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.486240 2562 manager.go:324] Recovery completed
Feb 17 12:46:09.489979 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.489966 2562 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 12:46:09.490707 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.490692 2562 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-7f62c"
Feb 17 12:46:09.491703 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.491686 2562 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-113.ec2.internal" event="NodeHasSufficientMemory"
Feb 17 12:46:09.491768 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.491715 2562 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-113.ec2.internal" event="NodeHasNoDiskPressure"
Feb 17 12:46:09.491768 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.491726 2562 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-113.ec2.internal" event="NodeHasSufficientPID"
Feb 17 12:46:09.492233 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.492217 2562 cpu_manager.go:222] "Starting CPU manager" policy="none"
Feb 17 12:46:09.492233 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.492232 2562 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s"
Feb 17 12:46:09.492334 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.492246 2562 state_mem.go:36] "Initialized new in-memory state store"
Feb 17 12:46:09.493561 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:09.493489 2562 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-132-113.ec2.internal.18950967b6b4343b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-132-113.ec2.internal,UID:ip-10-0-132-113.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-132-113.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-132-113.ec2.internal,},FirstTimestamp:2026-02-17 12:46:09.491702843 +0000 UTC m=+0.466466229,LastTimestamp:2026-02-17 12:46:09.491702843 +0000 UTC m=+0.466466229,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-132-113.ec2.internal,}"
Feb 17 12:46:09.495521 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.495505 2562 policy_none.go:49] "None policy: Start"
Feb 17 12:46:09.495521 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.495523 2562 memory_manager.go:186] "Starting memorymanager" policy="None"
Feb 17 12:46:09.495612 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.495534 2562 state_mem.go:35] "Initializing new in-memory state store"
Feb 17 12:46:09.552099 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.535210 2562 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Feb 17 12:46:09.552099 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.536268 2562 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Feb 17 12:46:09.552099 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.536297 2562 status_manager.go:230] "Starting to sync pod status with apiserver"
Feb 17 12:46:09.552099 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.536314 2562 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Feb 17 12:46:09.552099 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.536320 2562 kubelet.go:2451] "Starting kubelet main sync loop"
Feb 17 12:46:09.552099 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:09.536352 2562 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 17 12:46:09.552099 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.538980 2562 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 17 12:46:09.552099 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.540018 2562 manager.go:341] "Starting Device Plugin manager"
Feb 17 12:46:09.552099 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:09.540058 2562 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Feb 17 12:46:09.552099 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.540067 2562 server.go:85] "Starting device plugin registration server"
Feb 17 12:46:09.552099 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.540257 2562 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 17 12:46:09.552099 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.540270 2562 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 17 12:46:09.552099 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.540394 2562 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 17 12:46:09.552099 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.540482 2562 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 17 12:46:09.552099 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.540492 2562 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 17 12:46:09.552099 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:09.540928 2562 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\""
Feb 17 12:46:09.552099 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:09.540960 2562 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-132-113.ec2.internal\" not found"
Feb 17 12:46:09.637483 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.637407 2562 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-113.ec2.internal","kube-system/kube-apiserver-proxy-ip-10-0-132-113.ec2.internal"]
Feb 17 12:46:09.637575 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.637496 2562 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 12:46:09.638317 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.638303 2562 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-113.ec2.internal" event="NodeHasSufficientMemory"
Feb 17 12:46:09.638362 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.638332 2562 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-113.ec2.internal" event="NodeHasNoDiskPressure"
Feb 17 12:46:09.638362 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.638343 2562 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-113.ec2.internal" event="NodeHasSufficientPID"
Feb 17 12:46:09.640575 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.640555 2562 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 12:46:09.641240 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.641224 2562 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-113.ec2.internal" event="NodeHasSufficientMemory"
Feb 17 12:46:09.641331 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.641254 2562 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-113.ec2.internal" event="NodeHasNoDiskPressure"
Feb 17 12:46:09.641331 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.641269 2562 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-113.ec2.internal" event="NodeHasSufficientPID"
Feb 17 12:46:09.641331 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.641291 2562 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-132-113.ec2.internal"
Feb 17 12:46:09.641442 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.641335 2562 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 12:46:09.641547 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.641530 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-113.ec2.internal"
Feb 17 12:46:09.641594 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.641569 2562 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 12:46:09.641993 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.641967 2562 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-113.ec2.internal" event="NodeHasSufficientMemory"
Feb 17 12:46:09.641993 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.641993 2562 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-113.ec2.internal" event="NodeHasNoDiskPressure"
Feb 17 12:46:09.642093 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.642008 2562 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-113.ec2.internal" event="NodeHasSufficientPID"
Feb 17 12:46:09.642093 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.642029 2562 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-113.ec2.internal" event="NodeHasSufficientMemory"
Feb 17 12:46:09.642093 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.642046 2562 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-113.ec2.internal" event="NodeHasNoDiskPressure"
Feb 17 12:46:09.642093 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.642058 2562 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-113.ec2.internal" event="NodeHasSufficientPID"
Feb 17 12:46:09.644138 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.644126 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-132-113.ec2.internal"
Feb 17 12:46:09.644200 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.644148 2562 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 12:46:09.644875 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.644859 2562 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-113.ec2.internal" event="NodeHasSufficientMemory"
Feb 17 12:46:09.644969 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.644887 2562 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-113.ec2.internal" event="NodeHasNoDiskPressure"
Feb 17 12:46:09.644969 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.644902 2562 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-113.ec2.internal" event="NodeHasSufficientPID"
Feb 17 12:46:09.650418 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.650402 2562 kubelet_node_status.go:81] "Successfully registered node" node="ip-10-0-132-113.ec2.internal"
Feb 17 12:46:09.650500 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:09.650421 2562 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-132-113.ec2.internal\": node \"ip-10-0-132-113.ec2.internal\" not found"
Feb 17 12:46:09.667625 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:09.667608 2562 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-113.ec2.internal\" not found"
Feb 17 12:46:09.670018 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:09.670002 2562 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-132-113.ec2.internal\" not found" node="ip-10-0-132-113.ec2.internal"
Feb 17 12:46:09.674316 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:09.674300 2562 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-132-113.ec2.internal\" not found" node="ip-10-0-132-113.ec2.internal"
Feb 17 12:46:09.767980 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:09.767963 2562 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-113.ec2.internal\" not found"
Feb 17 12:46:09.776389 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.776365 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/789ba31dc9b07674daaa230ccc26ae9e-config\") pod \"kube-apiserver-proxy-ip-10-0-132-113.ec2.internal\" (UID: \"789ba31dc9b07674daaa230ccc26ae9e\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-132-113.ec2.internal"
Feb 17 12:46:09.776488 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.776396 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/6ab42bf9151db011cb6080c20a9ad910-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-132-113.ec2.internal\" (UID: \"6ab42bf9151db011cb6080c20a9ad910\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-113.ec2.internal"
Feb 17 12:46:09.776488 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.776432 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6ab42bf9151db011cb6080c20a9ad910-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-132-113.ec2.internal\" (UID: \"6ab42bf9151db011cb6080c20a9ad910\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-113.ec2.internal"
Feb 17 12:46:09.868509 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:09.868490 2562 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-113.ec2.internal\" not found"
Feb 17 12:46:09.876938 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.876921 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/6ab42bf9151db011cb6080c20a9ad910-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-132-113.ec2.internal\" (UID: \"6ab42bf9151db011cb6080c20a9ad910\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-113.ec2.internal"
Feb 17 12:46:09.876998 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.876948 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6ab42bf9151db011cb6080c20a9ad910-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-132-113.ec2.internal\" (UID: \"6ab42bf9151db011cb6080c20a9ad910\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-113.ec2.internal"
Feb 17 12:46:09.876998 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.876965 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/789ba31dc9b07674daaa230ccc26ae9e-config\") pod \"kube-apiserver-proxy-ip-10-0-132-113.ec2.internal\" (UID: \"789ba31dc9b07674daaa230ccc26ae9e\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-132-113.ec2.internal"
Feb 17 12:46:09.877424 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.877411 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/789ba31dc9b07674daaa230ccc26ae9e-config\") pod \"kube-apiserver-proxy-ip-10-0-132-113.ec2.internal\" (UID: \"789ba31dc9b07674daaa230ccc26ae9e\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-132-113.ec2.internal"
Feb 17 12:46:09.877424 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.877414 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/6ab42bf9151db011cb6080c20a9ad910-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-132-113.ec2.internal\" (UID: \"6ab42bf9151db011cb6080c20a9ad910\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-113.ec2.internal"
Feb 17 12:46:09.877519 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.877422 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6ab42bf9151db011cb6080c20a9ad910-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-132-113.ec2.internal\" (UID: \"6ab42bf9151db011cb6080c20a9ad910\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-113.ec2.internal"
Feb 17 12:46:09.969399 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:09.969327 2562 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-113.ec2.internal\" not found"
Feb 17 12:46:09.972504 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.972488 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-113.ec2.internal"
Feb 17 12:46:09.978553 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:09.978536 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-132-113.ec2.internal"
Feb 17 12:46:10.070069 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:10.070042 2562 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-113.ec2.internal\" not found"
Feb 17 12:46:10.170664 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:10.170635 2562 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-113.ec2.internal\" not found"
Feb 17 12:46:10.271310 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:10.271233 2562 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-113.ec2.internal\" not found"
Feb 17 12:46:10.283897 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.283875 2562 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 17 12:46:10.374537 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.374517 2562 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-113.ec2.internal"
Feb 17 12:46:10.384587 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.384569 2562 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 17 12:46:10.384706 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.384684 2562 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
Feb 17 12:46:10.384808 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.384731 2562 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
Feb 17 12:46:10.384808 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:10.384753 2562 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://ad5c769560dbc4df58c817ef7bca10cb-9983387803a85869.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/openshift-machine-config-operator/pods\": read tcp 10.0.132.113:40078->18.208.235.50:6443: use of closed network connection" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-113.ec2.internal"
Feb 17 12:46:10.384808 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.384782 2562 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-132-113.ec2.internal"
Feb 17 12:46:10.397800 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.397781 2562 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Feb 17 12:46:10.451080 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.451052 2562 apiserver.go:52] "Watching apiserver"
Feb 17 12:46:10.457775 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.457755 2562 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Feb 17 12:46:10.459537 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.459515 2562 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-6mljf","openshift-cluster-node-tuning-operator/tuned-l2dwp","openshift-image-registry/node-ca-7rjvn","openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-113.ec2.internal","openshift-multus/multus-g5h5b","openshift-network-diagnostics/network-check-target-v7bfv","openshift-dns/node-resolver-h4hcc","openshift-multus/multus-additional-cni-plugins-tsksj","openshift-multus/network-metrics-daemon-j6r5z","openshift-network-operator/iptables-alerter-wkhkn","openshift-ovn-kubernetes/ovnkube-node-mhj7q","kube-system/konnectivity-agent-4cf5r","kube-system/kube-apiserver-proxy-ip-10-0-132-113.ec2.internal"]
Feb 17 12:46:10.461895 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.461876 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q"
Feb 17 12:46:10.463992 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.463975 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-l2dwp"
Feb 17 12:46:10.464380 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.464365 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\""
Feb 17 12:46:10.464432 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.464391 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\""
Feb 17 12:46:10.464432 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.464410 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\""
Feb 17 12:46:10.464573 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.464556 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\""
Feb 17 12:46:10.464635 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.464612 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\""
Feb 17 12:46:10.464689 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.464657 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-bpl4q\""
Feb 17 12:46:10.464734 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.464715 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\""
Feb 17 12:46:10.467312 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.466413 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-7rjvn"
Feb 17 12:46:10.467312 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.466672 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"openshift-service-ca.crt\""
Feb 17 12:46:10.467454 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.467329 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"tuned-dockercfg-rs2lg\""
Feb 17 12:46:10.467454 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.467346 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"kube-root-ca.crt\""
Feb 17 12:46:10.469211 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.469013 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\""
Feb 17 12:46:10.469211 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.469060 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-6mljf"
Feb 17 12:46:10.469211 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.469095 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\""
Feb 17 12:46:10.469211 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.469105 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\""
Feb 17 12:46:10.469406 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.469355 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-nj8cg\""
Feb 17 12:46:10.470892 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.470874 2562 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Feb 17 12:46:10.471388 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.471372 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"openshift-service-ca.crt\""
Feb 17 12:46:10.471492 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.471375 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"kube-root-ca.crt\""
Feb 17 12:46:10.471595 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.471580 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-sa-dockercfg-dhnfh\""
Feb 17 12:46:10.471674 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.471626 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-metrics-serving-cert\""
Feb 17 12:46:10.471728 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.471717 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-g5h5b"
Feb 17 12:46:10.473924 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.473909 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v7bfv"
Feb 17 12:46:10.474018 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:10.473998 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v7bfv" podUID="89c079c8-3da1-4b99-a30a-3d749cf8f842"
Feb 17 12:46:10.474254 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.474169 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-nqdzj\""
Feb 17 12:46:10.474254 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.474177 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\""
Feb 17 12:46:10.474552 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.474279 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\""
Feb 17 12:46:10.474552 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.474365 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\""
Feb 17 12:46:10.474552 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.474417 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\""
Feb 17 12:46:10.476444 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.476427 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-h4hcc"
Feb 17 12:46:10.478605 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.478588 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\""
Feb 17 12:46:10.478817 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.478621 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-rh78h\""
Feb 17 12:46:10.478817 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.478622 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\""
Feb 17 12:46:10.478817 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.478722 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-tsksj"
Feb 17 12:46:10.480648 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.480627 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-run-openvswitch\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q"
Feb 17 12:46:10.480732 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.480662 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q"
Feb 17 12:46:10.480732 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.480686 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7282992d-70a3-40ce-96b1-03529732a700-env-overrides\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q"
Feb 17 12:46:10.480732 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.480712 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d9f14b23-72e9-4631-b0c5-d568aca52c29-hosts-file\") pod \"node-resolver-h4hcc\" (UID: \"d9f14b23-72e9-4631-b0c5-d568aca52c29\") " pod="openshift-dns/node-resolver-h4hcc"
Feb 17 12:46:10.480864 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.480763 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-host-var-lib-cni-multus\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b"
Feb 17 12:46:10.480864 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.480797 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-systemd-units\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q"
Feb 17 12:46:10.480864 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.480826 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-node-log\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q"
Feb 17 12:46:10.480864 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.480852 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d9f14b23-72e9-4631-b0c5-d568aca52c29-tmp-dir\") pod \"node-resolver-h4hcc\" (UID: \"d9f14b23-72e9-4631-b0c5-d568aca52c29\") " pod="openshift-dns/node-resolver-h4hcc"
Feb 17 12:46:10.481049 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.480874 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/4d5674be-49b1-49d0-b07d-656b923994f0-etc-modprobe-d\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp"
Feb 17 12:46:10.481049 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.480896 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d5674be-49b1-49d0-b07d-656b923994f0-lib-modules\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp"
Feb 17 12:46:10.481049 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.480922 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4d5674be-49b1-49d0-b07d-656b923994f0-host\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp"
Feb 17 12:46:10.481049 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.480899 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\""
Feb 17 12:46:10.481049 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.480956 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-cnibin\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b"
Feb 17 12:46:10.481049 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481009 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9x7z\" (UniqueName: \"kubernetes.io/projected/89c079c8-3da1-4b99-a30a-3d749cf8f842-kube-api-access-w9x7z\") pod \"network-check-target-v7bfv\" (UID: \"89c079c8-3da1-4b99-a30a-3d749cf8f842\") " pod="openshift-network-diagnostics/network-check-target-v7bfv"
Feb 17 12:46:10.481049 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481043 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4d5674be-49b1-49d0-b07d-656b923994f0-tmp\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp"
Feb 17 12:46:10.481370 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481070 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/47c476ed-1111-4f04-920f-dc8a70a378a0-registration-dir\") pod \"aws-ebs-csi-driver-node-6mljf\" (UID: \"47c476ed-1111-4f04-920f-dc8a70a378a0\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-6mljf"
Feb 17 12:46:10.481370 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481096 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9f08688a-aac4-4adb-b2bd-90ffe60387e3-serviceca\") pod \"node-ca-7rjvn\" (UID: \"9f08688a-aac4-4adb-b2bd-90ffe60387e3\") " pod="openshift-image-registry/node-ca-7rjvn"
Feb 17 12:46:10.481370 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481134 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/4d5674be-49b1-49d0-b07d-656b923994f0-etc-sysctl-d\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp"
Feb 17 12:46:10.481370 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481162 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4d5674be-49b1-49d0-b07d-656b923994f0-sys\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp"
Feb 17 12:46:10.481370 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481194 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-qdrgb\""
Feb 17 12:46:10.481370 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481210 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmlnp\" (UniqueName: \"kubernetes.io/projected/b6d826a0-9201-49f4-8abb-ecf10f525a7e-kube-api-access-qmlnp\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b"
Feb 17 12:46:10.481370 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481225 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\""
Feb 17 12:46:10.481370 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481243 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/4d5674be-49b1-49d0-b07d-656b923994f0-etc-tuned\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp"
Feb 17 12:46:10.481370 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481269 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/47c476ed-1111-4f04-920f-dc8a70a378a0-kubelet-dir\") pod \"aws-ebs-csi-driver-node-6mljf\" (UID: \"47c476ed-1111-4f04-920f-dc8a70a378a0\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-6mljf"
Feb 17 12:46:10.481370 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481287 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-multus-conf-dir\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b"
Feb 17 12:46:10.481370 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481301 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b6d826a0-9201-49f4-8abb-ecf10f525a7e-multus-daemon-config\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b"
Feb 17 12:46:10.481370 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481315 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4d5674be-49b1-49d0-b07d-656b923994f0-run\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp"
Feb 17 12:46:10.481370 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481367 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-os-release\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b"
Feb 17 12:46:10.481979 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481395 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j6r5z"
Feb 17 12:46:10.481979 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481402 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-host-var-lib-cni-bin\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b"
Feb 17 12:46:10.481979 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481431 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/4d5674be-49b1-49d0-b07d-656b923994f0-etc-sysconfig\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp"
Feb 17 12:46:10.481979 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:10.481453 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j6r5z" podUID="7bb13ada-375a-497d-aced-02307525f449"
Feb 17 12:46:10.481979 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481454 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-multus-cni-dir\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b"
Feb 17 12:46:10.481979 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481509 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-system-cni-dir\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b"
Feb 17 12:46:10.481979 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481551 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/4d5674be-49b1-49d0-b07d-656b923994f0-etc-systemd\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp"
Feb 17 12:46:10.481979 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481585 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-host-var-lib-kubelet\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b"
Feb 17 12:46:10.481979 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481602 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-etc-openvswitch\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q"
Feb 17 12:46:10.481979 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481620 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7282992d-70a3-40ce-96b1-03529732a700-ovn-node-metrics-cert\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q"
Feb 17 12:46:10.481979 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481644 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/4d5674be-49b1-49d0-b07d-656b923994f0-etc-sysctl-conf\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp"
Feb 17 12:46:10.481979 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481674 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-etc-kubernetes\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b"
Feb 17 12:46:10.481979 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481689 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-host-slash\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q"
Feb 17 12:46:10.481979 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481713 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-log-socket\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q"
Feb 17 12:46:10.481979 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481729 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-host-cni-netd\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q"
Feb 17 12:46:10.481979 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481742 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7282992d-70a3-40ce-96b1-03529732a700-ovnkube-script-lib\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q"
Feb 17 12:46:10.481979 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481769 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw47k\" (UniqueName: \"kubernetes.io/projected/4d5674be-49b1-49d0-b07d-656b923994f0-kube-api-access-gw47k\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp"
Feb 17 12:46:10.482772 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481791 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/47c476ed-1111-4f04-920f-dc8a70a378a0-device-dir\") pod \"aws-ebs-csi-driver-node-6mljf\" (UID: \"47c476ed-1111-4f04-920f-dc8a70a378a0\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-6mljf"
Feb 17 12:46:10.482772 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481815 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/47c476ed-1111-4f04-920f-dc8a70a378a0-sys-fs\") pod \"aws-ebs-csi-driver-node-6mljf\" (UID: \"47c476ed-1111-4f04-920f-dc8a70a378a0\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-6mljf"
Feb 17 12:46:10.482772 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481855 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-host-run-k8s-cni-cncf-io\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b"
Feb 17 12:46:10.482772 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481885 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-host-run-netns\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b"
Feb 17 12:46:10.482772 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481911 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-host-kubelet\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q"
Feb 17 12:46:10.482772 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481930 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-run-systemd\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q"
Feb 17 12:46:10.482772 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481944 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7282992d-70a3-40ce-96b1-03529732a700-ovnkube-config\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q"
Feb 17 12:46:10.482772 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481968 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmct2\" (UniqueName: \"kubernetes.io/projected/d9f14b23-72e9-4631-b0c5-d568aca52c29-kube-api-access-kmct2\") pod \"node-resolver-h4hcc\" (UID: \"d9f14b23-72e9-4631-b0c5-d568aca52c29\") " pod="openshift-dns/node-resolver-h4hcc"
Feb 17 12:46:10.482772 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.481995 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/47c476ed-1111-4f04-920f-dc8a70a378a0-etc-selinux\") pod \"aws-ebs-csi-driver-node-6mljf\" (UID: \"47c476ed-1111-4f04-920f-dc8a70a378a0\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-6mljf"
Feb 17 12:46:10.482772 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.482014 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b6d826a0-9201-49f4-8abb-ecf10f525a7e-cni-binary-copy\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b"
Feb 17 12:46:10.482772 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.482028 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-host-run-multus-certs\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b"
Feb 17 12:46:10.482772 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.482058 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-host-run-ovn-kubernetes\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q"
Feb 17 12:46:10.482772 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.482081 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvzpp\" (UniqueName: \"kubernetes.io/projected/7282992d-70a3-40ce-96b1-03529732a700-kube-api-access-wvzpp\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q"
Feb 17 12:46:10.482772 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.482096 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-var-lib-openvswitch\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q"
Feb 17 12:46:10.482772 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.482125 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-host-cni-bin\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q"
Feb 17 12:46:10.482772 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.482144 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-host-run-netns\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q"
Feb 17 12:46:10.483275 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.482158 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4d5674be-49b1-49d0-b07d-656b923994f0-etc-kubernetes\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp"
Feb 17 12:46:10.483275 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.482172 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4d5674be-49b1-49d0-b07d-656b923994f0-var-lib-kubelet\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp"
Feb 17 12:46:10.483275 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.482192 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sps5\" (UniqueName: \"kubernetes.io/projected/47c476ed-1111-4f04-920f-dc8a70a378a0-kube-api-access-7sps5\") pod \"aws-ebs-csi-driver-node-6mljf\" (UID: \"47c476ed-1111-4f04-920f-dc8a70a378a0\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-6mljf"
Feb 17 12:46:10.483275 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.482216 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-multus-socket-dir-parent\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b"
Feb 17 12:46:10.483275 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.482229 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-hostroot\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b"
Feb 17 12:46:10.483275 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.482243 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-run-ovn\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q"
Feb 17 12:46:10.483275 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.482280 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9f08688a-aac4-4adb-b2bd-90ffe60387e3-host\") pod \"node-ca-7rjvn\" (UID: \"9f08688a-aac4-4adb-b2bd-90ffe60387e3\") " pod="openshift-image-registry/node-ca-7rjvn"
Feb 17 12:46:10.483275 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.482304 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nptms\" (UniqueName: \"kubernetes.io/projected/9f08688a-aac4-4adb-b2bd-90ffe60387e3-kube-api-access-nptms\") pod \"node-ca-7rjvn\" (UID: \"9f08688a-aac4-4adb-b2bd-90ffe60387e3\") " pod="openshift-image-registry/node-ca-7rjvn"
Feb 17 12:46:10.483275 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.482324 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/47c476ed-1111-4f04-920f-dc8a70a378a0-socket-dir\") pod \"aws-ebs-csi-driver-node-6mljf\" (UID: \"47c476ed-1111-4f04-920f-dc8a70a378a0\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-6mljf"
Feb 17 12:46:10.483688 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.483495 2562 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Feb 17 12:46:10.483914 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.483899 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-4cf5r"
Feb 17 12:46:10.485979 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.485963 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kube-system\"/\"konnectivity-ca-bundle\""
Feb 17 12:46:10.486043 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.485991 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"default-dockercfg-9k8j8\""
Feb 17 12:46:10.486161 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.486149 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"konnectivity-agent\""
Feb 17 12:46:10.486248 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.486233 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-wkhkn"
Feb 17 12:46:10.488491 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.488459 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Feb 17 12:46:10.488558 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.488490 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\""
Feb 17 12:46:10.488558 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.488545 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-dockercfg-2qnhh\""
Feb 17 12:46:10.489053 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.489040 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Feb 17 12:46:10.492007 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.491984 2562 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2028-02-17 12:41:09 +0000 UTC" deadline="2027-09-17 07:53:18.554479321 +0000 UTC"
Feb 17 12:46:10.492007 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.492004 2562 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="13843h7m8.062476875s"
Feb 17 12:46:10.496961 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:10.496934 2562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod789ba31dc9b07674daaa230ccc26ae9e.slice/crio-d06fd425922cd8899fdea704f619eaa766a39de72e73317906303c45b8b2874d WatchSource:0}: Error finding container d06fd425922cd8899fdea704f619eaa766a39de72e73317906303c45b8b2874d: Status 404 returned error can't find the container with id d06fd425922cd8899fdea704f619eaa766a39de72e73317906303c45b8b2874d
Feb 17 12:46:10.497238 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:10.497218 2562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ab42bf9151db011cb6080c20a9ad910.slice/crio-f766e9368c995670fe0076cda9c54e1e2a63a94bfc75477ac0c52a9d7c038ac3 WatchSource:0}: Error finding container f766e9368c995670fe0076cda9c54e1e2a63a94bfc75477ac0c52a9d7c038ac3: Status 404 returned error can't find the container with id f766e9368c995670fe0076cda9c54e1e2a63a94bfc75477ac0c52a9d7c038ac3
Feb 17 12:46:10.500519 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.500503 2562 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 12:46:10.506657 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.506635 2562 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-hh6mk"
Feb 17 12:46:10.515578 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.515561 2562 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-hh6mk"
Feb 17 12:46:10.538954 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.538917 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-113.ec2.internal" event={"ID":"6ab42bf9151db011cb6080c20a9ad910","Type":"ContainerStarted","Data":"f766e9368c995670fe0076cda9c54e1e2a63a94bfc75477ac0c52a9d7c038ac3"}
Feb 17 12:46:10.539908 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.539883 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-132-113.ec2.internal" event={"ID":"789ba31dc9b07674daaa230ccc26ae9e","Type":"ContainerStarted","Data":"d06fd425922cd8899fdea704f619eaa766a39de72e73317906303c45b8b2874d"}
Feb 17 12:46:10.575627 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.575607 2562 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Feb 17 12:46:10.582918 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.582897 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d9f14b23-72e9-4631-b0c5-d568aca52c29-tmp-dir\") pod \"node-resolver-h4hcc\" (UID: \"d9f14b23-72e9-4631-b0c5-d568aca52c29\") " pod="openshift-dns/node-resolver-h4hcc"
Feb 17 12:46:10.582981 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.582928 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/4d5674be-49b1-49d0-b07d-656b923994f0-etc-modprobe-d\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp"
Feb 17 12:46:10.582981 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.582943 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d5674be-49b1-49d0-b07d-656b923994f0-lib-modules\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp"
Feb 17 12:46:10.582981 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.582975 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4d5674be-49b1-49d0-b07d-656b923994f0-host\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp"
Feb 17 12:46:10.583080 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583025 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4d5674be-49b1-49d0-b07d-656b923994f0-host\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp"
Feb 17 12:46:10.583080 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583021 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-cnibin\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b"
Feb 17 12:46:10.583080 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583060 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/4d5674be-49b1-49d0-b07d-656b923994f0-etc-modprobe-d\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp"
Feb 17 12:46:10.583080 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583067 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume
\"kube-api-access-w9x7z\" (UniqueName: \"kubernetes.io/projected/89c079c8-3da1-4b99-a30a-3d749cf8f842-kube-api-access-w9x7z\") pod \"network-check-target-v7bfv\" (UID: \"89c079c8-3da1-4b99-a30a-3d749cf8f842\") " pod="openshift-network-diagnostics/network-check-target-v7bfv" Feb 17 12:46:10.583253 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583078 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-cnibin\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b" Feb 17 12:46:10.583253 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583098 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4d5674be-49b1-49d0-b07d-656b923994f0-tmp\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp" Feb 17 12:46:10.583253 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583123 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/47c476ed-1111-4f04-920f-dc8a70a378a0-registration-dir\") pod \"aws-ebs-csi-driver-node-6mljf\" (UID: \"47c476ed-1111-4f04-920f-dc8a70a378a0\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-6mljf" Feb 17 12:46:10.583253 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583156 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/26e5edd9-76b4-4050-a8bc-3e88e1993210-agent-certs\") pod \"konnectivity-agent-4cf5r\" (UID: \"26e5edd9-76b4-4050-a8bc-3e88e1993210\") " pod="kube-system/konnectivity-agent-4cf5r" Feb 17 12:46:10.583253 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583181 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9f08688a-aac4-4adb-b2bd-90ffe60387e3-serviceca\") pod \"node-ca-7rjvn\" (UID: \"9f08688a-aac4-4adb-b2bd-90ffe60387e3\") " pod="openshift-image-registry/node-ca-7rjvn" Feb 17 12:46:10.583253 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583204 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/4d5674be-49b1-49d0-b07d-656b923994f0-etc-sysctl-d\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp" Feb 17 12:46:10.583253 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583215 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/47c476ed-1111-4f04-920f-dc8a70a378a0-registration-dir\") pod \"aws-ebs-csi-driver-node-6mljf\" (UID: \"47c476ed-1111-4f04-920f-dc8a70a378a0\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-6mljf" Feb 17 12:46:10.583253 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583228 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4d5674be-49b1-49d0-b07d-656b923994f0-sys\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp" Feb 17 12:46:10.583253 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583229 2562 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d5674be-49b1-49d0-b07d-656b923994f0-lib-modules\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp" Feb 17 12:46:10.583253 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583255 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qmlnp\" (UniqueName: \"kubernetes.io/projected/b6d826a0-9201-49f4-8abb-ecf10f525a7e-kube-api-access-qmlnp\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b" Feb 17 12:46:10.583709 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583280 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7bb13ada-375a-497d-aced-02307525f449-metrics-certs\") pod \"network-metrics-daemon-j6r5z\" (UID: \"7bb13ada-375a-497d-aced-02307525f449\") " pod="openshift-multus/network-metrics-daemon-j6r5z" Feb 17 12:46:10.583709 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583305 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ea2f2f4d-70c4-4bc8-9640-5b18d2e41173-os-release\") pod \"multus-additional-cni-plugins-tsksj\" (UID: \"ea2f2f4d-70c4-4bc8-9640-5b18d2e41173\") " pod="openshift-multus/multus-additional-cni-plugins-tsksj" Feb 17 12:46:10.583709 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583316 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d9f14b23-72e9-4631-b0c5-d568aca52c29-tmp-dir\") pod \"node-resolver-h4hcc\" (UID: \"d9f14b23-72e9-4631-b0c5-d568aca52c29\") " pod="openshift-dns/node-resolver-h4hcc" Feb 17 12:46:10.583709 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583331 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ea2f2f4d-70c4-4bc8-9640-5b18d2e41173-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-tsksj\" (UID: \"ea2f2f4d-70c4-4bc8-9640-5b18d2e41173\") " pod="openshift-multus/multus-additional-cni-plugins-tsksj" Feb 17 12:46:10.583709 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583319 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4d5674be-49b1-49d0-b07d-656b923994f0-sys\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp" Feb 17 12:46:10.583709 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583353 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/4d5674be-49b1-49d0-b07d-656b923994f0-etc-sysctl-d\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp" Feb 17 12:46:10.583709 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583383 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/4d5674be-49b1-49d0-b07d-656b923994f0-etc-tuned\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " 
pod="openshift-cluster-node-tuning-operator/tuned-l2dwp" Feb 17 12:46:10.583709 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583412 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/47c476ed-1111-4f04-920f-dc8a70a378a0-kubelet-dir\") pod \"aws-ebs-csi-driver-node-6mljf\" (UID: \"47c476ed-1111-4f04-920f-dc8a70a378a0\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-6mljf" Feb 17 12:46:10.583709 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583440 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-multus-conf-dir\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b" Feb 17 12:46:10.583709 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583484 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b6d826a0-9201-49f4-8abb-ecf10f525a7e-multus-daemon-config\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b" Feb 17 12:46:10.583709 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583452 2562 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 17 12:46:10.583709 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583525 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-multus-conf-dir\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b" Feb 17 12:46:10.583709 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583557 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c8dq\" (UniqueName: \"kubernetes.io/projected/ea2f2f4d-70c4-4bc8-9640-5b18d2e41173-kube-api-access-4c8dq\") pod \"multus-additional-cni-plugins-tsksj\" (UID: \"ea2f2f4d-70c4-4bc8-9640-5b18d2e41173\") " pod="openshift-multus/multus-additional-cni-plugins-tsksj" Feb 17 12:46:10.583709 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583578 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/47c476ed-1111-4f04-920f-dc8a70a378a0-kubelet-dir\") pod \"aws-ebs-csi-driver-node-6mljf\" (UID: \"47c476ed-1111-4f04-920f-dc8a70a378a0\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-6mljf" Feb 17 12:46:10.583709 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583611 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4d5674be-49b1-49d0-b07d-656b923994f0-run\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp" Feb 17 12:46:10.583709 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583627 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-os-release\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " 
pod="openshift-multus/multus-g5h5b" Feb 17 12:46:10.583709 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583632 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9f08688a-aac4-4adb-b2bd-90ffe60387e3-serviceca\") pod \"node-ca-7rjvn\" (UID: \"9f08688a-aac4-4adb-b2bd-90ffe60387e3\") " pod="openshift-image-registry/node-ca-7rjvn" Feb 17 12:46:10.583709 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583668 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4d5674be-49b1-49d0-b07d-656b923994f0-run\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp" Feb 17 12:46:10.584541 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583678 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-host-var-lib-cni-bin\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b" Feb 17 12:46:10.584541 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583697 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-host-var-lib-cni-bin\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b" Feb 17 12:46:10.584541 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583704 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-os-release\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b" Feb 17 12:46:10.584541 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583716 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n7nb\" (UniqueName: \"kubernetes.io/projected/5c5191ba-7e5f-4e80-8ce5-eac97ea608dc-kube-api-access-5n7nb\") pod \"iptables-alerter-wkhkn\" (UID: \"5c5191ba-7e5f-4e80-8ce5-eac97ea608dc\") " pod="openshift-network-operator/iptables-alerter-wkhkn" Feb 17 12:46:10.584541 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583743 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ea2f2f4d-70c4-4bc8-9640-5b18d2e41173-cni-binary-copy\") pod \"multus-additional-cni-plugins-tsksj\" (UID: \"ea2f2f4d-70c4-4bc8-9640-5b18d2e41173\") " pod="openshift-multus/multus-additional-cni-plugins-tsksj" Feb 17 12:46:10.584541 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583767 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ea2f2f4d-70c4-4bc8-9640-5b18d2e41173-tuning-conf-dir\") pod \"multus-additional-cni-plugins-tsksj\" (UID: \"ea2f2f4d-70c4-4bc8-9640-5b18d2e41173\") " pod="openshift-multus/multus-additional-cni-plugins-tsksj" Feb 17 12:46:10.584541 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583792 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/4d5674be-49b1-49d0-b07d-656b923994f0-etc-sysconfig\") 
pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp" Feb 17 12:46:10.584541 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583818 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-multus-cni-dir\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b" Feb 17 12:46:10.584541 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583842 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5c5191ba-7e5f-4e80-8ce5-eac97ea608dc-host-slash\") pod \"iptables-alerter-wkhkn\" (UID: \"5c5191ba-7e5f-4e80-8ce5-eac97ea608dc\") " pod="openshift-network-operator/iptables-alerter-wkhkn" Feb 17 12:46:10.584541 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583855 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/4d5674be-49b1-49d0-b07d-656b923994f0-etc-sysconfig\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp" Feb 17 12:46:10.584541 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583903 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-system-cni-dir\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b" Feb 17 12:46:10.584541 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583919 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-multus-cni-dir\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b" Feb 17 12:46:10.584541 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583936 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-system-cni-dir\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b" Feb 17 12:46:10.584541 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583937 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/4d5674be-49b1-49d0-b07d-656b923994f0-etc-systemd\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp" Feb 17 12:46:10.584541 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583969 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-host-var-lib-kubelet\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b" Feb 17 12:46:10.584541 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583977 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b6d826a0-9201-49f4-8abb-ecf10f525a7e-multus-daemon-config\") pod \"multus-g5h5b\" 
(UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b" Feb 17 12:46:10.584541 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583987 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/4d5674be-49b1-49d0-b07d-656b923994f0-etc-systemd\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp" Feb 17 12:46:10.585313 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.583993 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-etc-openvswitch\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.585313 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584019 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7282992d-70a3-40ce-96b1-03529732a700-ovn-node-metrics-cert\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.585313 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584027 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-host-var-lib-kubelet\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b" Feb 17 12:46:10.585313 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584037 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn4wv\" (UniqueName: \"kubernetes.io/projected/7bb13ada-375a-497d-aced-02307525f449-kube-api-access-tn4wv\") pod \"network-metrics-daemon-j6r5z\" (UID: \"7bb13ada-375a-497d-aced-02307525f449\") " pod="openshift-multus/network-metrics-daemon-j6r5z" Feb 17 12:46:10.585313 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584056 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/4d5674be-49b1-49d0-b07d-656b923994f0-etc-sysctl-conf\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp" Feb 17 12:46:10.585313 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584065 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-etc-openvswitch\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.585313 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584089 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-etc-kubernetes\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b" Feb 17 12:46:10.585313 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584107 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-host-slash\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.585313 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584122 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-log-socket\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.585313 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584137 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-host-cni-netd\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.585313 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584156 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/4d5674be-49b1-49d0-b07d-656b923994f0-etc-sysctl-conf\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp" Feb 17 12:46:10.585313 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584175 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-log-socket\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.585313 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584173 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-host-slash\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.585313 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584192 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-etc-kubernetes\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b" Feb 17 12:46:10.585313 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584199 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7282992d-70a3-40ce-96b1-03529732a700-ovnkube-script-lib\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.585313 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584200 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-host-cni-netd\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.585313 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584220 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gw47k\" (UniqueName: 
\"kubernetes.io/projected/4d5674be-49b1-49d0-b07d-656b923994f0-kube-api-access-gw47k\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp" Feb 17 12:46:10.586087 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584258 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/47c476ed-1111-4f04-920f-dc8a70a378a0-device-dir\") pod \"aws-ebs-csi-driver-node-6mljf\" (UID: \"47c476ed-1111-4f04-920f-dc8a70a378a0\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-6mljf" Feb 17 12:46:10.586087 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584279 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/47c476ed-1111-4f04-920f-dc8a70a378a0-sys-fs\") pod \"aws-ebs-csi-driver-node-6mljf\" (UID: \"47c476ed-1111-4f04-920f-dc8a70a378a0\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-6mljf" Feb 17 12:46:10.586087 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584316 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-host-run-k8s-cni-cncf-io\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b" Feb 17 12:46:10.586087 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584343 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-host-run-netns\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b" Feb 17 12:46:10.586087 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584364 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-host-kubelet\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.586087 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584387 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-run-systemd\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.586087 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584406 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/47c476ed-1111-4f04-920f-dc8a70a378a0-device-dir\") pod \"aws-ebs-csi-driver-node-6mljf\" (UID: \"47c476ed-1111-4f04-920f-dc8a70a378a0\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-6mljf" Feb 17 12:46:10.586087 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584409 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7282992d-70a3-40ce-96b1-03529732a700-ovnkube-config\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.586087 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584448 2562 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kmct2\" (UniqueName: \"kubernetes.io/projected/d9f14b23-72e9-4631-b0c5-d568aca52c29-kube-api-access-kmct2\") pod \"node-resolver-h4hcc\" (UID: \"d9f14b23-72e9-4631-b0c5-d568aca52c29\") " pod="openshift-dns/node-resolver-h4hcc" Feb 17 12:46:10.586087 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584490 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/47c476ed-1111-4f04-920f-dc8a70a378a0-etc-selinux\") pod \"aws-ebs-csi-driver-node-6mljf\" (UID: \"47c476ed-1111-4f04-920f-dc8a70a378a0\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-6mljf" Feb 17 12:46:10.586087 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584513 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b6d826a0-9201-49f4-8abb-ecf10f525a7e-cni-binary-copy\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b" Feb 17 12:46:10.586087 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584533 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-host-run-multus-certs\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b" Feb 17 12:46:10.586087 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584561 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-host-run-ovn-kubernetes\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.586087 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584584 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wvzpp\" (UniqueName: \"kubernetes.io/projected/7282992d-70a3-40ce-96b1-03529732a700-kube-api-access-wvzpp\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.586087 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584607 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ea2f2f4d-70c4-4bc8-9640-5b18d2e41173-system-cni-dir\") pod \"multus-additional-cni-plugins-tsksj\" (UID: \"ea2f2f4d-70c4-4bc8-9640-5b18d2e41173\") " pod="openshift-multus/multus-additional-cni-plugins-tsksj" Feb 17 12:46:10.586087 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584631 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-var-lib-openvswitch\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.586087 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584653 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-host-cni-bin\") pod \"ovnkube-node-mhj7q\" (UID: 
\"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.586753 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584694 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ea2f2f4d-70c4-4bc8-9640-5b18d2e41173-cnibin\") pod \"multus-additional-cni-plugins-tsksj\" (UID: \"ea2f2f4d-70c4-4bc8-9640-5b18d2e41173\") " pod="openshift-multus/multus-additional-cni-plugins-tsksj" Feb 17 12:46:10.586753 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584741 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-host-run-netns\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b" Feb 17 12:46:10.586753 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584780 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-host-run-netns\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.586753 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584826 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-host-run-netns\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.586753 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584854 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/47c476ed-1111-4f04-920f-dc8a70a378a0-sys-fs\") pod \"aws-ebs-csi-driver-node-6mljf\" (UID: \"47c476ed-1111-4f04-920f-dc8a70a378a0\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-6mljf" Feb 17 12:46:10.586753 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584848 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7282992d-70a3-40ce-96b1-03529732a700-ovnkube-config\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.586753 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584876 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-host-run-k8s-cni-cncf-io\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b" Feb 17 12:46:10.586753 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584903 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4d5674be-49b1-49d0-b07d-656b923994f0-etc-kubernetes\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp" Feb 17 12:46:10.586753 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584917 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-host-run-ovn-kubernetes\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.586753 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584928 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-run-systemd\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.586753 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584938 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4d5674be-49b1-49d0-b07d-656b923994f0-var-lib-kubelet\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp" Feb 17 12:46:10.586753 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584970 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7sps5\" (UniqueName: \"kubernetes.io/projected/47c476ed-1111-4f04-920f-dc8a70a378a0-kube-api-access-7sps5\") pod \"aws-ebs-csi-driver-node-6mljf\" (UID: \"47c476ed-1111-4f04-920f-dc8a70a378a0\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-6mljf" Feb 17 12:46:10.586753 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584996 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/47c476ed-1111-4f04-920f-dc8a70a378a0-etc-selinux\") pod \"aws-ebs-csi-driver-node-6mljf\" (UID: \"47c476ed-1111-4f04-920f-dc8a70a378a0\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-6mljf" Feb 17 12:46:10.586753 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.585016 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-host-kubelet\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.586753 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584997 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-multus-socket-dir-parent\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b" Feb 17 12:46:10.586753 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.585054 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-hostroot\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b" Feb 17 12:46:10.586753 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.585061 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-host-cni-bin\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.587203 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.585061 2562 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-multus-socket-dir-parent\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b" Feb 17 12:46:10.587203 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.585096 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-run-ovn\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.587203 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.585118 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4d5674be-49b1-49d0-b07d-656b923994f0-var-lib-kubelet\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp" Feb 17 12:46:10.587203 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.585104 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-var-lib-openvswitch\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.587203 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.584970 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4d5674be-49b1-49d0-b07d-656b923994f0-etc-kubernetes\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp" Feb 17 12:46:10.587203 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.585138 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/26e5edd9-76b4-4050-a8bc-3e88e1993210-konnectivity-ca\") pod \"konnectivity-agent-4cf5r\" (UID: \"26e5edd9-76b4-4050-a8bc-3e88e1993210\") " pod="kube-system/konnectivity-agent-4cf5r" Feb 17 12:46:10.587203 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.585169 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-hostroot\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b" Feb 17 12:46:10.587203 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.585176 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-run-ovn\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.587203 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.585206 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9f08688a-aac4-4adb-b2bd-90ffe60387e3-host\") pod \"node-ca-7rjvn\" (UID: \"9f08688a-aac4-4adb-b2bd-90ffe60387e3\") " pod="openshift-image-registry/node-ca-7rjvn" Feb 17 12:46:10.587203 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.585221 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-host-run-multus-certs\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b" Feb 17 12:46:10.587203 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.585271 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nptms\" (UniqueName: \"kubernetes.io/projected/9f08688a-aac4-4adb-b2bd-90ffe60387e3-kube-api-access-nptms\") pod \"node-ca-7rjvn\" (UID: \"9f08688a-aac4-4adb-b2bd-90ffe60387e3\") " pod="openshift-image-registry/node-ca-7rjvn" Feb 17 12:46:10.587203 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.585274 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9f08688a-aac4-4adb-b2bd-90ffe60387e3-host\") pod \"node-ca-7rjvn\" (UID: \"9f08688a-aac4-4adb-b2bd-90ffe60387e3\") " pod="openshift-image-registry/node-ca-7rjvn" Feb 17 12:46:10.587203 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.585332 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/47c476ed-1111-4f04-920f-dc8a70a378a0-socket-dir\") pod \"aws-ebs-csi-driver-node-6mljf\" (UID: \"47c476ed-1111-4f04-920f-dc8a70a378a0\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-6mljf" Feb 17 12:46:10.587203 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.585360 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-run-openvswitch\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.587203 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.585400 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.587203 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.585427 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7282992d-70a3-40ce-96b1-03529732a700-env-overrides\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.587203 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.585428 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7282992d-70a3-40ce-96b1-03529732a700-ovnkube-script-lib\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.587688 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.585483 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/5c5191ba-7e5f-4e80-8ce5-eac97ea608dc-iptables-alerter-script\") pod \"iptables-alerter-wkhkn\" (UID: \"5c5191ba-7e5f-4e80-8ce5-eac97ea608dc\") " pod="openshift-network-operator/iptables-alerter-wkhkn" Feb 
17 12:46:10.587688 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.585510 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-run-openvswitch\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.587688 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.585515 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/ea2f2f4d-70c4-4bc8-9640-5b18d2e41173-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-tsksj\" (UID: \"ea2f2f4d-70c4-4bc8-9640-5b18d2e41173\") " pod="openshift-multus/multus-additional-cni-plugins-tsksj" Feb 17 12:46:10.587688 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.585544 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d9f14b23-72e9-4631-b0c5-d568aca52c29-hosts-file\") pod \"node-resolver-h4hcc\" (UID: \"d9f14b23-72e9-4631-b0c5-d568aca52c29\") " pod="openshift-dns/node-resolver-h4hcc" Feb 17 12:46:10.587688 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.585559 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b6d826a0-9201-49f4-8abb-ecf10f525a7e-cni-binary-copy\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b" Feb 17 12:46:10.587688 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.585567 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-host-var-lib-cni-multus\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b" Feb 17 12:46:10.587688 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.585590 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-systemd-units\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.587688 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.585617 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/47c476ed-1111-4f04-920f-dc8a70a378a0-socket-dir\") pod \"aws-ebs-csi-driver-node-6mljf\" (UID: \"47c476ed-1111-4f04-920f-dc8a70a378a0\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-6mljf" Feb 17 12:46:10.587688 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.585628 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b6d826a0-9201-49f4-8abb-ecf10f525a7e-host-var-lib-cni-multus\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b" Feb 17 12:46:10.587688 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.585638 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d9f14b23-72e9-4631-b0c5-d568aca52c29-hosts-file\") pod \"node-resolver-h4hcc\" (UID: 
\"d9f14b23-72e9-4631-b0c5-d568aca52c29\") " pod="openshift-dns/node-resolver-h4hcc" Feb 17 12:46:10.587688 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.585657 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-node-log\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.587688 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.585668 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.587688 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.585680 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-systemd-units\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.587688 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.585718 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7282992d-70a3-40ce-96b1-03529732a700-node-log\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.587688 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.585832 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7282992d-70a3-40ce-96b1-03529732a700-env-overrides\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.587688 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.586621 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4d5674be-49b1-49d0-b07d-656b923994f0-tmp\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp" Feb 17 12:46:10.587688 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.586676 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/4d5674be-49b1-49d0-b07d-656b923994f0-etc-tuned\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp" Feb 17 12:46:10.588179 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.586817 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7282992d-70a3-40ce-96b1-03529732a700-ovn-node-metrics-cert\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:10.589169 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:10.589154 2562 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 12:46:10.589214 ip-10-0-132-113 
Feb 17 12:46:10.589214 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:10.589181 2562 projected.go:194] Error preparing data for projected volume kube-api-access-w9x7z for pod openshift-network-diagnostics/network-check-target-v7bfv: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 12:46:10.589279 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:10.589245 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89c079c8-3da1-4b99-a30a-3d749cf8f842-kube-api-access-w9x7z podName:89c079c8-3da1-4b99-a30a-3d749cf8f842 nodeName:}" failed. No retries permitted until 2026-02-17 12:46:11.089211432 +0000 UTC m=+2.063974805 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-w9x7z" (UniqueName: "kubernetes.io/projected/89c079c8-3da1-4b99-a30a-3d749cf8f842-kube-api-access-w9x7z") pod "network-check-target-v7bfv" (UID: "89c079c8-3da1-4b99-a30a-3d749cf8f842") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 12:46:10.590887 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.590871 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmlnp\" (UniqueName: \"kubernetes.io/projected/b6d826a0-9201-49f4-8abb-ecf10f525a7e-kube-api-access-qmlnp\") pod \"multus-g5h5b\" (UID: \"b6d826a0-9201-49f4-8abb-ecf10f525a7e\") " pod="openshift-multus/multus-g5h5b"
Feb 17 12:46:10.592506 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.592485 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw47k\" (UniqueName: \"kubernetes.io/projected/4d5674be-49b1-49d0-b07d-656b923994f0-kube-api-access-gw47k\") pod \"tuned-l2dwp\" (UID: \"4d5674be-49b1-49d0-b07d-656b923994f0\") " pod="openshift-cluster-node-tuning-operator/tuned-l2dwp"
Feb 17 12:46:10.592804 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.592788 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvzpp\" (UniqueName: \"kubernetes.io/projected/7282992d-70a3-40ce-96b1-03529732a700-kube-api-access-wvzpp\") pod \"ovnkube-node-mhj7q\" (UID: \"7282992d-70a3-40ce-96b1-03529732a700\") " pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q"
Feb 17 12:46:10.592874 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.592860 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nptms\" (UniqueName: \"kubernetes.io/projected/9f08688a-aac4-4adb-b2bd-90ffe60387e3-kube-api-access-nptms\") pod \"node-ca-7rjvn\" (UID: \"9f08688a-aac4-4adb-b2bd-90ffe60387e3\") " pod="openshift-image-registry/node-ca-7rjvn"
Feb 17 12:46:10.593591 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.593565 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmct2\" (UniqueName: \"kubernetes.io/projected/d9f14b23-72e9-4631-b0c5-d568aca52c29-kube-api-access-kmct2\") pod \"node-resolver-h4hcc\" (UID: \"d9f14b23-72e9-4631-b0c5-d568aca52c29\") " pod="openshift-dns/node-resolver-h4hcc"
Feb 17 12:46:10.593665 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.593571 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sps5\" (UniqueName: \"kubernetes.io/projected/47c476ed-1111-4f04-920f-dc8a70a378a0-kube-api-access-7sps5\") pod \"aws-ebs-csi-driver-node-6mljf\" (UID: \"47c476ed-1111-4f04-920f-dc8a70a378a0\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-6mljf"
"MountVolume.SetUp succeeded for volume \"kube-api-access-7sps5\" (UniqueName: \"kubernetes.io/projected/47c476ed-1111-4f04-920f-dc8a70a378a0-kube-api-access-7sps5\") pod \"aws-ebs-csi-driver-node-6mljf\" (UID: \"47c476ed-1111-4f04-920f-dc8a70a378a0\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-6mljf" Feb 17 12:46:10.686796 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.686772 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7bb13ada-375a-497d-aced-02307525f449-metrics-certs\") pod \"network-metrics-daemon-j6r5z\" (UID: \"7bb13ada-375a-497d-aced-02307525f449\") " pod="openshift-multus/network-metrics-daemon-j6r5z" Feb 17 12:46:10.686796 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.686801 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ea2f2f4d-70c4-4bc8-9640-5b18d2e41173-os-release\") pod \"multus-additional-cni-plugins-tsksj\" (UID: \"ea2f2f4d-70c4-4bc8-9640-5b18d2e41173\") " pod="openshift-multus/multus-additional-cni-plugins-tsksj" Feb 17 12:46:10.686983 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.686818 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ea2f2f4d-70c4-4bc8-9640-5b18d2e41173-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-tsksj\" (UID: \"ea2f2f4d-70c4-4bc8-9640-5b18d2e41173\") " pod="openshift-multus/multus-additional-cni-plugins-tsksj" Feb 17 12:46:10.686983 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.686840 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4c8dq\" (UniqueName: \"kubernetes.io/projected/ea2f2f4d-70c4-4bc8-9640-5b18d2e41173-kube-api-access-4c8dq\") pod \"multus-additional-cni-plugins-tsksj\" (UID: \"ea2f2f4d-70c4-4bc8-9640-5b18d2e41173\") " pod="openshift-multus/multus-additional-cni-plugins-tsksj" Feb 17 12:46:10.686983 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.686866 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5n7nb\" (UniqueName: \"kubernetes.io/projected/5c5191ba-7e5f-4e80-8ce5-eac97ea608dc-kube-api-access-5n7nb\") pod \"iptables-alerter-wkhkn\" (UID: \"5c5191ba-7e5f-4e80-8ce5-eac97ea608dc\") " pod="openshift-network-operator/iptables-alerter-wkhkn" Feb 17 12:46:10.686983 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:10.686898 2562 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 12:46:10.686983 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.686914 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ea2f2f4d-70c4-4bc8-9640-5b18d2e41173-os-release\") pod \"multus-additional-cni-plugins-tsksj\" (UID: \"ea2f2f4d-70c4-4bc8-9640-5b18d2e41173\") " pod="openshift-multus/multus-additional-cni-plugins-tsksj" Feb 17 12:46:10.686983 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.686953 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ea2f2f4d-70c4-4bc8-9640-5b18d2e41173-cni-binary-copy\") pod \"multus-additional-cni-plugins-tsksj\" (UID: \"ea2f2f4d-70c4-4bc8-9640-5b18d2e41173\") " pod="openshift-multus/multus-additional-cni-plugins-tsksj" Feb 17 
Feb 17 12:46:10.687331 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.686988 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ea2f2f4d-70c4-4bc8-9640-5b18d2e41173-tuning-conf-dir\") pod \"multus-additional-cni-plugins-tsksj\" (UID: \"ea2f2f4d-70c4-4bc8-9640-5b18d2e41173\") " pod="openshift-multus/multus-additional-cni-plugins-tsksj"
Feb 17 12:46:10.687331 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.687014 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5c5191ba-7e5f-4e80-8ce5-eac97ea608dc-host-slash\") pod \"iptables-alerter-wkhkn\" (UID: \"5c5191ba-7e5f-4e80-8ce5-eac97ea608dc\") " pod="openshift-network-operator/iptables-alerter-wkhkn"
Feb 17 12:46:10.687331 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.687043 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tn4wv\" (UniqueName: \"kubernetes.io/projected/7bb13ada-375a-497d-aced-02307525f449-kube-api-access-tn4wv\") pod \"network-metrics-daemon-j6r5z\" (UID: \"7bb13ada-375a-497d-aced-02307525f449\") " pod="openshift-multus/network-metrics-daemon-j6r5z"
Feb 17 12:46:10.687331 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.687082 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ea2f2f4d-70c4-4bc8-9640-5b18d2e41173-system-cni-dir\") pod \"multus-additional-cni-plugins-tsksj\" (UID: \"ea2f2f4d-70c4-4bc8-9640-5b18d2e41173\") " pod="openshift-multus/multus-additional-cni-plugins-tsksj"
Feb 17 12:46:10.687331 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.687107 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ea2f2f4d-70c4-4bc8-9640-5b18d2e41173-cnibin\") pod \"multus-additional-cni-plugins-tsksj\" (UID: \"ea2f2f4d-70c4-4bc8-9640-5b18d2e41173\") " pod="openshift-multus/multus-additional-cni-plugins-tsksj"
Feb 17 12:46:10.687331 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.687148 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5c5191ba-7e5f-4e80-8ce5-eac97ea608dc-host-slash\") pod \"iptables-alerter-wkhkn\" (UID: \"5c5191ba-7e5f-4e80-8ce5-eac97ea608dc\") " pod="openshift-network-operator/iptables-alerter-wkhkn"
Feb 17 12:46:10.687331 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.687161 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ea2f2f4d-70c4-4bc8-9640-5b18d2e41173-system-cni-dir\") pod \"multus-additional-cni-plugins-tsksj\" (UID: \"ea2f2f4d-70c4-4bc8-9640-5b18d2e41173\") " pod="openshift-multus/multus-additional-cni-plugins-tsksj"
pod="openshift-multus/multus-additional-cni-plugins-tsksj" Feb 17 12:46:10.687331 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.687179 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/26e5edd9-76b4-4050-a8bc-3e88e1993210-konnectivity-ca\") pod \"konnectivity-agent-4cf5r\" (UID: \"26e5edd9-76b4-4050-a8bc-3e88e1993210\") " pod="kube-system/konnectivity-agent-4cf5r" Feb 17 12:46:10.687331 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.687214 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ea2f2f4d-70c4-4bc8-9640-5b18d2e41173-cnibin\") pod \"multus-additional-cni-plugins-tsksj\" (UID: \"ea2f2f4d-70c4-4bc8-9640-5b18d2e41173\") " pod="openshift-multus/multus-additional-cni-plugins-tsksj" Feb 17 12:46:10.687331 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.687224 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/5c5191ba-7e5f-4e80-8ce5-eac97ea608dc-iptables-alerter-script\") pod \"iptables-alerter-wkhkn\" (UID: \"5c5191ba-7e5f-4e80-8ce5-eac97ea608dc\") " pod="openshift-network-operator/iptables-alerter-wkhkn" Feb 17 12:46:10.687331 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.687257 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/ea2f2f4d-70c4-4bc8-9640-5b18d2e41173-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-tsksj\" (UID: \"ea2f2f4d-70c4-4bc8-9640-5b18d2e41173\") " pod="openshift-multus/multus-additional-cni-plugins-tsksj" Feb 17 12:46:10.687331 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.687307 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/26e5edd9-76b4-4050-a8bc-3e88e1993210-agent-certs\") pod \"konnectivity-agent-4cf5r\" (UID: \"26e5edd9-76b4-4050-a8bc-3e88e1993210\") " pod="kube-system/konnectivity-agent-4cf5r" Feb 17 12:46:10.687830 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.687513 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ea2f2f4d-70c4-4bc8-9640-5b18d2e41173-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-tsksj\" (UID: \"ea2f2f4d-70c4-4bc8-9640-5b18d2e41173\") " pod="openshift-multus/multus-additional-cni-plugins-tsksj" Feb 17 12:46:10.687830 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.687538 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ea2f2f4d-70c4-4bc8-9640-5b18d2e41173-cni-binary-copy\") pod \"multus-additional-cni-plugins-tsksj\" (UID: \"ea2f2f4d-70c4-4bc8-9640-5b18d2e41173\") " pod="openshift-multus/multus-additional-cni-plugins-tsksj" Feb 17 12:46:10.687830 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.687663 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ea2f2f4d-70c4-4bc8-9640-5b18d2e41173-tuning-conf-dir\") pod \"multus-additional-cni-plugins-tsksj\" (UID: \"ea2f2f4d-70c4-4bc8-9640-5b18d2e41173\") " pod="openshift-multus/multus-additional-cni-plugins-tsksj" Feb 17 12:46:10.687830 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.687723 2562 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/ea2f2f4d-70c4-4bc8-9640-5b18d2e41173-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-tsksj\" (UID: \"ea2f2f4d-70c4-4bc8-9640-5b18d2e41173\") " pod="openshift-multus/multus-additional-cni-plugins-tsksj" Feb 17 12:46:10.687830 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.687765 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/26e5edd9-76b4-4050-a8bc-3e88e1993210-konnectivity-ca\") pod \"konnectivity-agent-4cf5r\" (UID: \"26e5edd9-76b4-4050-a8bc-3e88e1993210\") " pod="kube-system/konnectivity-agent-4cf5r" Feb 17 12:46:10.687830 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.687765 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/5c5191ba-7e5f-4e80-8ce5-eac97ea608dc-iptables-alerter-script\") pod \"iptables-alerter-wkhkn\" (UID: \"5c5191ba-7e5f-4e80-8ce5-eac97ea608dc\") " pod="openshift-network-operator/iptables-alerter-wkhkn" Feb 17 12:46:10.689353 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.689336 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/26e5edd9-76b4-4050-a8bc-3e88e1993210-agent-certs\") pod \"konnectivity-agent-4cf5r\" (UID: \"26e5edd9-76b4-4050-a8bc-3e88e1993210\") " pod="kube-system/konnectivity-agent-4cf5r" Feb 17 12:46:10.694997 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.694978 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c8dq\" (UniqueName: \"kubernetes.io/projected/ea2f2f4d-70c4-4bc8-9640-5b18d2e41173-kube-api-access-4c8dq\") pod \"multus-additional-cni-plugins-tsksj\" (UID: \"ea2f2f4d-70c4-4bc8-9640-5b18d2e41173\") " pod="openshift-multus/multus-additional-cni-plugins-tsksj" Feb 17 12:46:10.694997 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.694984 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn4wv\" (UniqueName: \"kubernetes.io/projected/7bb13ada-375a-497d-aced-02307525f449-kube-api-access-tn4wv\") pod \"network-metrics-daemon-j6r5z\" (UID: \"7bb13ada-375a-497d-aced-02307525f449\") " pod="openshift-multus/network-metrics-daemon-j6r5z" Feb 17 12:46:10.695218 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.695203 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n7nb\" (UniqueName: \"kubernetes.io/projected/5c5191ba-7e5f-4e80-8ce5-eac97ea608dc-kube-api-access-5n7nb\") pod \"iptables-alerter-wkhkn\" (UID: \"5c5191ba-7e5f-4e80-8ce5-eac97ea608dc\") " pod="openshift-network-operator/iptables-alerter-wkhkn" Feb 17 12:46:10.768126 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.768102 2562 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Feb 17 12:46:10.783337 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.783289 2562 util.go:30] "No sandbox for pod can be found. 
Feb 17 12:46:10.790242 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:10.790217 2562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7282992d_70a3_40ce_96b1_03529732a700.slice/crio-5a7f9dfa7c30a1afe3617557f80b7c9bae2f34629dde6122928c64a2705a00de WatchSource:0}: Error finding container 5a7f9dfa7c30a1afe3617557f80b7c9bae2f34629dde6122928c64a2705a00de: Status 404 returned error can't find the container with id 5a7f9dfa7c30a1afe3617557f80b7c9bae2f34629dde6122928c64a2705a00de
Feb 17 12:46:10.807845 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.807826 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-l2dwp"
Feb 17 12:46:10.812395 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.812372 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-7rjvn"
Feb 17 12:46:10.813957 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:10.813935 2562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d5674be_49b1_49d0_b07d_656b923994f0.slice/crio-d0681b9f3ba609cee4e02d3c452932c4facdfbdeb6b98769e91a6060e3eee44e WatchSource:0}: Error finding container d0681b9f3ba609cee4e02d3c452932c4facdfbdeb6b98769e91a6060e3eee44e: Status 404 returned error can't find the container with id d0681b9f3ba609cee4e02d3c452932c4facdfbdeb6b98769e91a6060e3eee44e
Feb 17 12:46:10.816522 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.816500 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-6mljf"
Feb 17 12:46:10.820045 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:10.820018 2562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f08688a_aac4_4adb_b2bd_90ffe60387e3.slice/crio-7267dd178b1728dbd576dc2271b6a2adc7968d9ff833dbb85cfe968c07e5c428 WatchSource:0}: Error finding container 7267dd178b1728dbd576dc2271b6a2adc7968d9ff833dbb85cfe968c07e5c428: Status 404 returned error can't find the container with id 7267dd178b1728dbd576dc2271b6a2adc7968d9ff833dbb85cfe968c07e5c428
Feb 17 12:46:10.821596 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.821565 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-g5h5b"
Feb 17 12:46:10.827927 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.827765 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-h4hcc"
Feb 17 12:46:10.828553 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:10.828531 2562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47c476ed_1111_4f04_920f_dc8a70a378a0.slice/crio-39984778977c0a88279bc58db93bbc827035c17b568ed4090b2683a518c6d1f2 WatchSource:0}: Error finding container 39984778977c0a88279bc58db93bbc827035c17b568ed4090b2683a518c6d1f2: Status 404 returned error can't find the container with id 39984778977c0a88279bc58db93bbc827035c17b568ed4090b2683a518c6d1f2
Feb 17 12:46:10.830323 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:10.830265 2562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6d826a0_9201_49f4_8abb_ecf10f525a7e.slice/crio-ae5192027990ee43530d8fcb3e9b0f578f1c4e5f8ed3243c8133b0409a0df6e1 WatchSource:0}: Error finding container ae5192027990ee43530d8fcb3e9b0f578f1c4e5f8ed3243c8133b0409a0df6e1: Status 404 returned error can't find the container with id ae5192027990ee43530d8fcb3e9b0f578f1c4e5f8ed3243c8133b0409a0df6e1
Feb 17 12:46:10.832192 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.832173 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-tsksj"
Feb 17 12:46:10.835780 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:10.835653 2562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9f14b23_72e9_4631_b0c5_d568aca52c29.slice/crio-a54bc5afb0f269eb5b4c306d623c84d2de421f856c3e1814be41d8ca676661d0 WatchSource:0}: Error finding container a54bc5afb0f269eb5b4c306d623c84d2de421f856c3e1814be41d8ca676661d0: Status 404 returned error can't find the container with id a54bc5afb0f269eb5b4c306d623c84d2de421f856c3e1814be41d8ca676661d0
Feb 17 12:46:10.837557 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.837540 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-4cf5r"
Feb 17 12:46:10.839947 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:10.839911 2562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea2f2f4d_70c4_4bc8_9640_5b18d2e41173.slice/crio-e516c142188c6fc1d6eb86a88f488b7e206337fcbad373dfcfacf2f3fced6204 WatchSource:0}: Error finding container e516c142188c6fc1d6eb86a88f488b7e206337fcbad373dfcfacf2f3fced6204: Status 404 returned error can't find the container with id e516c142188c6fc1d6eb86a88f488b7e206337fcbad373dfcfacf2f3fced6204
Feb 17 12:46:10.843500 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.843398 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-wkhkn"
Feb 17 12:46:10.844339 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:10.844319 2562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26e5edd9_76b4_4050_a8bc_3e88e1993210.slice/crio-7dc367c90ff07dac75849f2123ead74aa0c1094166a1c8df83493e08e15a990d WatchSource:0}: Error finding container 7dc367c90ff07dac75849f2123ead74aa0c1094166a1c8df83493e08e15a990d: Status 404 returned error can't find the container with id 7dc367c90ff07dac75849f2123ead74aa0c1094166a1c8df83493e08e15a990d
Feb 17 12:46:10.849683 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:46:10.849659 2562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c5191ba_7e5f_4e80_8ce5_eac97ea608dc.slice/crio-0f84f961fa2d4592fed817aed4869903780ecf2283705b6836f47e7d3b000786 WatchSource:0}: Error finding container 0f84f961fa2d4592fed817aed4869903780ecf2283705b6836f47e7d3b000786: Status 404 returned error can't find the container with id 0f84f961fa2d4592fed817aed4869903780ecf2283705b6836f47e7d3b000786
Feb 17 12:46:10.996845 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:10.996819 2562 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 17 12:46:11.090298 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:11.090231 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w9x7z\" (UniqueName: \"kubernetes.io/projected/89c079c8-3da1-4b99-a30a-3d749cf8f842-kube-api-access-w9x7z\") pod \"network-check-target-v7bfv\" (UID: \"89c079c8-3da1-4b99-a30a-3d749cf8f842\") " pod="openshift-network-diagnostics/network-check-target-v7bfv"
Feb 17 12:46:11.090458 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:11.090376 2562 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 17 12:46:11.090458 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:11.090394 2562 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 17 12:46:11.090458 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:11.090403 2562 projected.go:194] Error preparing data for projected volume kube-api-access-w9x7z for pod openshift-network-diagnostics/network-check-target-v7bfv: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 12:46:11.090643 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:11.090460 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89c079c8-3da1-4b99-a30a-3d749cf8f842-kube-api-access-w9x7z podName:89c079c8-3da1-4b99-a30a-3d749cf8f842 nodeName:}" failed. No retries permitted until 2026-02-17 12:46:12.090443444 +0000 UTC m=+3.065206825 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-w9x7z" (UniqueName: "kubernetes.io/projected/89c079c8-3da1-4b99-a30a-3d749cf8f842-kube-api-access-w9x7z") pod "network-check-target-v7bfv" (UID: "89c079c8-3da1-4b99-a30a-3d749cf8f842") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 12:46:11.190835 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:11.190801 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7bb13ada-375a-497d-aced-02307525f449-metrics-certs\") pod \"network-metrics-daemon-j6r5z\" (UID: \"7bb13ada-375a-497d-aced-02307525f449\") " pod="openshift-multus/network-metrics-daemon-j6r5z"
Feb 17 12:46:11.190998 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:11.190962 2562 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 12:46:11.191058 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:11.191026 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7bb13ada-375a-497d-aced-02307525f449-metrics-certs podName:7bb13ada-375a-497d-aced-02307525f449 nodeName:}" failed. No retries permitted until 2026-02-17 12:46:12.191007319 +0000 UTC m=+3.165770707 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7bb13ada-375a-497d-aced-02307525f449-metrics-certs") pod "network-metrics-daemon-j6r5z" (UID: "7bb13ada-375a-497d-aced-02307525f449") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 12:46:11.195358 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:11.195113 2562 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 17 12:46:11.516654 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:11.516533 2562 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-02-17 12:41:10 +0000 UTC" deadline="2027-07-30 13:49:40.944970007 +0000 UTC"
Feb 17 12:46:11.516654 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:11.516641 2562 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="12673h3m29.428335054s"
Feb 17 12:46:11.539241 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:11.539215 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v7bfv"
Feb 17 12:46:11.539376 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:11.539335 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v7bfv" podUID="89c079c8-3da1-4b99-a30a-3d749cf8f842"
pod="openshift-network-diagnostics/network-check-target-v7bfv" podUID="89c079c8-3da1-4b99-a30a-3d749cf8f842" Feb 17 12:46:11.544640 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:11.544615 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" event={"ID":"7282992d-70a3-40ce-96b1-03529732a700","Type":"ContainerStarted","Data":"5a7f9dfa7c30a1afe3617557f80b7c9bae2f34629dde6122928c64a2705a00de"} Feb 17 12:46:11.548072 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:11.548046 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-4cf5r" event={"ID":"26e5edd9-76b4-4050-a8bc-3e88e1993210","Type":"ContainerStarted","Data":"7dc367c90ff07dac75849f2123ead74aa0c1094166a1c8df83493e08e15a990d"} Feb 17 12:46:11.551853 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:11.551792 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g5h5b" event={"ID":"b6d826a0-9201-49f4-8abb-ecf10f525a7e","Type":"ContainerStarted","Data":"ae5192027990ee43530d8fcb3e9b0f578f1c4e5f8ed3243c8133b0409a0df6e1"} Feb 17 12:46:11.554929 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:11.554904 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-6mljf" event={"ID":"47c476ed-1111-4f04-920f-dc8a70a378a0","Type":"ContainerStarted","Data":"39984778977c0a88279bc58db93bbc827035c17b568ed4090b2683a518c6d1f2"} Feb 17 12:46:11.561383 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:11.561359 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-l2dwp" event={"ID":"4d5674be-49b1-49d0-b07d-656b923994f0","Type":"ContainerStarted","Data":"d0681b9f3ba609cee4e02d3c452932c4facdfbdeb6b98769e91a6060e3eee44e"} Feb 17 12:46:11.564317 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:11.564294 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-wkhkn" event={"ID":"5c5191ba-7e5f-4e80-8ce5-eac97ea608dc","Type":"ContainerStarted","Data":"0f84f961fa2d4592fed817aed4869903780ecf2283705b6836f47e7d3b000786"} Feb 17 12:46:11.571084 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:11.571058 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tsksj" event={"ID":"ea2f2f4d-70c4-4bc8-9640-5b18d2e41173","Type":"ContainerStarted","Data":"e516c142188c6fc1d6eb86a88f488b7e206337fcbad373dfcfacf2f3fced6204"} Feb 17 12:46:11.572319 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:11.572292 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-h4hcc" event={"ID":"d9f14b23-72e9-4631-b0c5-d568aca52c29","Type":"ContainerStarted","Data":"a54bc5afb0f269eb5b4c306d623c84d2de421f856c3e1814be41d8ca676661d0"} Feb 17 12:46:11.574632 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:11.574609 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7rjvn" event={"ID":"9f08688a-aac4-4adb-b2bd-90ffe60387e3","Type":"ContainerStarted","Data":"7267dd178b1728dbd576dc2271b6a2adc7968d9ff833dbb85cfe968c07e5c428"} Feb 17 12:46:12.099695 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:12.099014 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w9x7z\" (UniqueName: \"kubernetes.io/projected/89c079c8-3da1-4b99-a30a-3d749cf8f842-kube-api-access-w9x7z\") pod \"network-check-target-v7bfv\" (UID: \"89c079c8-3da1-4b99-a30a-3d749cf8f842\") " 
pod="openshift-network-diagnostics/network-check-target-v7bfv" Feb 17 12:46:12.105086 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:12.105060 2562 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 12:46:12.105086 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:12.105088 2562 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 12:46:12.105295 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:12.105101 2562 projected.go:194] Error preparing data for projected volume kube-api-access-w9x7z for pod openshift-network-diagnostics/network-check-target-v7bfv: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 12:46:12.105295 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:12.105186 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89c079c8-3da1-4b99-a30a-3d749cf8f842-kube-api-access-w9x7z podName:89c079c8-3da1-4b99-a30a-3d749cf8f842 nodeName:}" failed. No retries permitted until 2026-02-17 12:46:14.10516667 +0000 UTC m=+5.079930042 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-w9x7z" (UniqueName: "kubernetes.io/projected/89c079c8-3da1-4b99-a30a-3d749cf8f842-kube-api-access-w9x7z") pod "network-check-target-v7bfv" (UID: "89c079c8-3da1-4b99-a30a-3d749cf8f842") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 12:46:12.200118 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:12.200077 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7bb13ada-375a-497d-aced-02307525f449-metrics-certs\") pod \"network-metrics-daemon-j6r5z\" (UID: \"7bb13ada-375a-497d-aced-02307525f449\") " pod="openshift-multus/network-metrics-daemon-j6r5z" Feb 17 12:46:12.200279 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:12.200223 2562 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 12:46:12.200334 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:12.200286 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7bb13ada-375a-497d-aced-02307525f449-metrics-certs podName:7bb13ada-375a-497d-aced-02307525f449 nodeName:}" failed. No retries permitted until 2026-02-17 12:46:14.200268294 +0000 UTC m=+5.175031683 (durationBeforeRetry 2s). 
Feb 17 12:46:12.517222 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:12.517119 2562 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-02-17 12:41:10 +0000 UTC" deadline="2027-08-31 08:51:02.096704233 +0000 UTC"
Feb 17 12:46:12.517222 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:12.517157 2562 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="13436h4m49.579551357s"
Feb 17 12:46:12.537613 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:12.537587 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j6r5z"
Feb 17 12:46:12.537762 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:12.537712 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j6r5z" podUID="7bb13ada-375a-497d-aced-02307525f449"
Feb 17 12:46:13.453060 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:13.452822 2562 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 17 12:46:13.539615 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:13.539124 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v7bfv"
Feb 17 12:46:13.539615 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:13.539247 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v7bfv" podUID="89c079c8-3da1-4b99-a30a-3d749cf8f842"
pod="openshift-network-diagnostics/network-check-target-v7bfv" podUID="89c079c8-3da1-4b99-a30a-3d749cf8f842" Feb 17 12:46:14.115275 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:14.115235 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w9x7z\" (UniqueName: \"kubernetes.io/projected/89c079c8-3da1-4b99-a30a-3d749cf8f842-kube-api-access-w9x7z\") pod \"network-check-target-v7bfv\" (UID: \"89c079c8-3da1-4b99-a30a-3d749cf8f842\") " pod="openshift-network-diagnostics/network-check-target-v7bfv" Feb 17 12:46:14.115424 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:14.115398 2562 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 12:46:14.115424 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:14.115421 2562 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 12:46:14.115576 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:14.115433 2562 projected.go:194] Error preparing data for projected volume kube-api-access-w9x7z for pod openshift-network-diagnostics/network-check-target-v7bfv: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 12:46:14.115576 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:14.115514 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89c079c8-3da1-4b99-a30a-3d749cf8f842-kube-api-access-w9x7z podName:89c079c8-3da1-4b99-a30a-3d749cf8f842 nodeName:}" failed. No retries permitted until 2026-02-17 12:46:18.115496061 +0000 UTC m=+9.090259443 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-w9x7z" (UniqueName: "kubernetes.io/projected/89c079c8-3da1-4b99-a30a-3d749cf8f842-kube-api-access-w9x7z") pod "network-check-target-v7bfv" (UID: "89c079c8-3da1-4b99-a30a-3d749cf8f842") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 12:46:14.216053 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:14.216016 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7bb13ada-375a-497d-aced-02307525f449-metrics-certs\") pod \"network-metrics-daemon-j6r5z\" (UID: \"7bb13ada-375a-497d-aced-02307525f449\") " pod="openshift-multus/network-metrics-daemon-j6r5z" Feb 17 12:46:14.216242 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:14.216180 2562 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 12:46:14.216242 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:14.216243 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7bb13ada-375a-497d-aced-02307525f449-metrics-certs podName:7bb13ada-375a-497d-aced-02307525f449 nodeName:}" failed. No retries permitted until 2026-02-17 12:46:18.216225571 +0000 UTC m=+9.190988946 (durationBeforeRetry 4s). 
Feb 17 12:46:14.537383 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:14.536735 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j6r5z"
Feb 17 12:46:14.537383 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:14.536875 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j6r5z" podUID="7bb13ada-375a-497d-aced-02307525f449"
Feb 17 12:46:14.598788 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:14.598747 2562 generic.go:358] "Generic (PLEG): container finished" podID="6ab42bf9151db011cb6080c20a9ad910" containerID="1ee0d1db2c6c2f6b0971191161d87355421b97eb58ea835642251c5bc55fdbae" exitCode=0
Feb 17 12:46:14.599210 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:14.598804 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-113.ec2.internal" event={"ID":"6ab42bf9151db011cb6080c20a9ad910","Type":"ContainerDied","Data":"1ee0d1db2c6c2f6b0971191161d87355421b97eb58ea835642251c5bc55fdbae"}
Feb 17 12:46:15.536820 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:15.536780 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v7bfv"
Feb 17 12:46:15.537006 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:15.536977 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v7bfv" podUID="89c079c8-3da1-4b99-a30a-3d749cf8f842"
Feb 17 12:46:16.536723 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:16.536689 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j6r5z"
Feb 17 12:46:16.537190 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:16.536864 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j6r5z" podUID="7bb13ada-375a-497d-aced-02307525f449"
Feb 17 12:46:17.172867 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:17.172376 2562 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kube-system/global-pull-secret-syncer-xk97p"]
Feb 17 12:46:17.176503 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:17.176435 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-xk97p"
Feb 17 12:46:17.176627 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:17.176530 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-xk97p" podUID="f84c60a8-1cec-4e96-8e96-b52be431e4ed"
Feb 17 12:46:17.242367 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:17.242312 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/f84c60a8-1cec-4e96-8e96-b52be431e4ed-kubelet-config\") pod \"global-pull-secret-syncer-xk97p\" (UID: \"f84c60a8-1cec-4e96-8e96-b52be431e4ed\") " pod="kube-system/global-pull-secret-syncer-xk97p"
Feb 17 12:46:17.242367 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:17.242363 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/f84c60a8-1cec-4e96-8e96-b52be431e4ed-dbus\") pod \"global-pull-secret-syncer-xk97p\" (UID: \"f84c60a8-1cec-4e96-8e96-b52be431e4ed\") " pod="kube-system/global-pull-secret-syncer-xk97p"
Feb 17 12:46:17.242639 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:17.242511 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/f84c60a8-1cec-4e96-8e96-b52be431e4ed-original-pull-secret\") pod \"global-pull-secret-syncer-xk97p\" (UID: \"f84c60a8-1cec-4e96-8e96-b52be431e4ed\") " pod="kube-system/global-pull-secret-syncer-xk97p"
Feb 17 12:46:17.342853 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:17.342819 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/f84c60a8-1cec-4e96-8e96-b52be431e4ed-kubelet-config\") pod \"global-pull-secret-syncer-xk97p\" (UID: \"f84c60a8-1cec-4e96-8e96-b52be431e4ed\") " pod="kube-system/global-pull-secret-syncer-xk97p"
Feb 17 12:46:17.343061 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:17.342867 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/f84c60a8-1cec-4e96-8e96-b52be431e4ed-dbus\") pod \"global-pull-secret-syncer-xk97p\" (UID: \"f84c60a8-1cec-4e96-8e96-b52be431e4ed\") " pod="kube-system/global-pull-secret-syncer-xk97p"
Feb 17 12:46:17.343061 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:17.342947 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/f84c60a8-1cec-4e96-8e96-b52be431e4ed-original-pull-secret\") pod \"global-pull-secret-syncer-xk97p\" (UID: \"f84c60a8-1cec-4e96-8e96-b52be431e4ed\") " pod="kube-system/global-pull-secret-syncer-xk97p"
Feb 17 12:46:17.343179 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:17.343086 2562 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Feb 17 12:46:17.343179 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:17.343139 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/f84c60a8-1cec-4e96-8e96-b52be431e4ed-kubelet-config\") pod \"global-pull-secret-syncer-xk97p\" (UID: \"f84c60a8-1cec-4e96-8e96-b52be431e4ed\") " pod="kube-system/global-pull-secret-syncer-xk97p"
\"kubernetes.io/host-path/f84c60a8-1cec-4e96-8e96-b52be431e4ed-kubelet-config\") pod \"global-pull-secret-syncer-xk97p\" (UID: \"f84c60a8-1cec-4e96-8e96-b52be431e4ed\") " pod="kube-system/global-pull-secret-syncer-xk97p" Feb 17 12:46:17.343282 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:17.343184 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f84c60a8-1cec-4e96-8e96-b52be431e4ed-original-pull-secret podName:f84c60a8-1cec-4e96-8e96-b52be431e4ed nodeName:}" failed. No retries permitted until 2026-02-17 12:46:17.84316619 +0000 UTC m=+8.817929566 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/f84c60a8-1cec-4e96-8e96-b52be431e4ed-original-pull-secret") pod "global-pull-secret-syncer-xk97p" (UID: "f84c60a8-1cec-4e96-8e96-b52be431e4ed") : object "kube-system"/"original-pull-secret" not registered Feb 17 12:46:17.343376 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:17.343294 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/f84c60a8-1cec-4e96-8e96-b52be431e4ed-dbus\") pod \"global-pull-secret-syncer-xk97p\" (UID: \"f84c60a8-1cec-4e96-8e96-b52be431e4ed\") " pod="kube-system/global-pull-secret-syncer-xk97p" Feb 17 12:46:17.536896 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:17.536817 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v7bfv" Feb 17 12:46:17.537375 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:17.536931 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v7bfv" podUID="89c079c8-3da1-4b99-a30a-3d749cf8f842" Feb 17 12:46:17.847063 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:17.846901 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/f84c60a8-1cec-4e96-8e96-b52be431e4ed-original-pull-secret\") pod \"global-pull-secret-syncer-xk97p\" (UID: \"f84c60a8-1cec-4e96-8e96-b52be431e4ed\") " pod="kube-system/global-pull-secret-syncer-xk97p" Feb 17 12:46:17.847063 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:17.847036 2562 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Feb 17 12:46:17.847276 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:17.847135 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f84c60a8-1cec-4e96-8e96-b52be431e4ed-original-pull-secret podName:f84c60a8-1cec-4e96-8e96-b52be431e4ed nodeName:}" failed. No retries permitted until 2026-02-17 12:46:18.847113131 +0000 UTC m=+9.821876527 (durationBeforeRetry 1s). 
Feb 17 12:46:18.149527 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:18.149443 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w9x7z\" (UniqueName: \"kubernetes.io/projected/89c079c8-3da1-4b99-a30a-3d749cf8f842-kube-api-access-w9x7z\") pod \"network-check-target-v7bfv\" (UID: \"89c079c8-3da1-4b99-a30a-3d749cf8f842\") " pod="openshift-network-diagnostics/network-check-target-v7bfv"
Feb 17 12:46:18.149652 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:18.149583 2562 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 17 12:46:18.149652 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:18.149603 2562 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 17 12:46:18.149652 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:18.149613 2562 projected.go:194] Error preparing data for projected volume kube-api-access-w9x7z for pod openshift-network-diagnostics/network-check-target-v7bfv: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 12:46:18.149762 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:18.149657 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89c079c8-3da1-4b99-a30a-3d749cf8f842-kube-api-access-w9x7z podName:89c079c8-3da1-4b99-a30a-3d749cf8f842 nodeName:}" failed. No retries permitted until 2026-02-17 12:46:26.149644085 +0000 UTC m=+17.124407458 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-w9x7z" (UniqueName: "kubernetes.io/projected/89c079c8-3da1-4b99-a30a-3d749cf8f842-kube-api-access-w9x7z") pod "network-check-target-v7bfv" (UID: "89c079c8-3da1-4b99-a30a-3d749cf8f842") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 12:46:18.250381 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:18.250319 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7bb13ada-375a-497d-aced-02307525f449-metrics-certs\") pod \"network-metrics-daemon-j6r5z\" (UID: \"7bb13ada-375a-497d-aced-02307525f449\") " pod="openshift-multus/network-metrics-daemon-j6r5z"
Feb 17 12:46:18.250589 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:18.250458 2562 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 12:46:18.250589 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:18.250577 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7bb13ada-375a-497d-aced-02307525f449-metrics-certs podName:7bb13ada-375a-497d-aced-02307525f449 nodeName:}" failed. No retries permitted until 2026-02-17 12:46:26.250553617 +0000 UTC m=+17.225317013 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7bb13ada-375a-497d-aced-02307525f449-metrics-certs") pod "network-metrics-daemon-j6r5z" (UID: "7bb13ada-375a-497d-aced-02307525f449") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 12:46:18.536943 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:18.536843 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j6r5z"
Feb 17 12:46:18.537423 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:18.536974 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j6r5z" podUID="7bb13ada-375a-497d-aced-02307525f449"
Feb 17 12:46:18.855308 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:18.855232 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/f84c60a8-1cec-4e96-8e96-b52be431e4ed-original-pull-secret\") pod \"global-pull-secret-syncer-xk97p\" (UID: \"f84c60a8-1cec-4e96-8e96-b52be431e4ed\") " pod="kube-system/global-pull-secret-syncer-xk97p"
Feb 17 12:46:18.855511 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:18.855364 2562 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Feb 17 12:46:18.855511 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:18.855436 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f84c60a8-1cec-4e96-8e96-b52be431e4ed-original-pull-secret podName:f84c60a8-1cec-4e96-8e96-b52be431e4ed nodeName:}" failed. No retries permitted until 2026-02-17 12:46:20.855415079 +0000 UTC m=+11.830178464 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/f84c60a8-1cec-4e96-8e96-b52be431e4ed-original-pull-secret") pod "global-pull-secret-syncer-xk97p" (UID: "f84c60a8-1cec-4e96-8e96-b52be431e4ed") : object "kube-system"/"original-pull-secret" not registered
Feb 17 12:46:19.537162 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:19.537127 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v7bfv"
Feb 17 12:46:19.537616 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:19.537228 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v7bfv" podUID="89c079c8-3da1-4b99-a30a-3d749cf8f842"
Feb 17 12:46:19.537616 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:19.537529 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-xk97p"
Need to start a new one" pod="kube-system/global-pull-secret-syncer-xk97p" Feb 17 12:46:19.537723 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:19.537613 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-xk97p" podUID="f84c60a8-1cec-4e96-8e96-b52be431e4ed" Feb 17 12:46:20.537186 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:20.537132 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j6r5z" Feb 17 12:46:20.537650 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:20.537321 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j6r5z" podUID="7bb13ada-375a-497d-aced-02307525f449" Feb 17 12:46:20.871218 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:20.871153 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/f84c60a8-1cec-4e96-8e96-b52be431e4ed-original-pull-secret\") pod \"global-pull-secret-syncer-xk97p\" (UID: \"f84c60a8-1cec-4e96-8e96-b52be431e4ed\") " pod="kube-system/global-pull-secret-syncer-xk97p" Feb 17 12:46:20.871345 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:20.871324 2562 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Feb 17 12:46:20.871413 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:20.871403 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f84c60a8-1cec-4e96-8e96-b52be431e4ed-original-pull-secret podName:f84c60a8-1cec-4e96-8e96-b52be431e4ed nodeName:}" failed. No retries permitted until 2026-02-17 12:46:24.871381749 +0000 UTC m=+15.846145232 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/f84c60a8-1cec-4e96-8e96-b52be431e4ed-original-pull-secret") pod "global-pull-secret-syncer-xk97p" (UID: "f84c60a8-1cec-4e96-8e96-b52be431e4ed") : object "kube-system"/"original-pull-secret" not registered Feb 17 12:46:21.539131 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:21.539104 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-xk97p" Feb 17 12:46:21.539543 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:21.539105 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v7bfv" Feb 17 12:46:21.539543 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:21.539225 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-xk97p" podUID="f84c60a8-1cec-4e96-8e96-b52be431e4ed" Feb 17 12:46:21.539543 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:21.539290 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v7bfv" podUID="89c079c8-3da1-4b99-a30a-3d749cf8f842" Feb 17 12:46:22.537144 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:22.537115 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j6r5z" Feb 17 12:46:22.537318 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:22.537225 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j6r5z" podUID="7bb13ada-375a-497d-aced-02307525f449" Feb 17 12:46:23.539780 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:23.539755 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v7bfv" Feb 17 12:46:23.540152 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:23.539755 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-xk97p" Feb 17 12:46:23.540152 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:23.539861 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v7bfv" podUID="89c079c8-3da1-4b99-a30a-3d749cf8f842" Feb 17 12:46:23.540152 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:23.539947 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-xk97p" podUID="f84c60a8-1cec-4e96-8e96-b52be431e4ed" Feb 17 12:46:24.536807 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:24.536761 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j6r5z" Feb 17 12:46:24.536887 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:24.536857 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j6r5z" podUID="7bb13ada-375a-497d-aced-02307525f449" Feb 17 12:46:24.905270 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:24.905189 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/f84c60a8-1cec-4e96-8e96-b52be431e4ed-original-pull-secret\") pod \"global-pull-secret-syncer-xk97p\" (UID: \"f84c60a8-1cec-4e96-8e96-b52be431e4ed\") " pod="kube-system/global-pull-secret-syncer-xk97p" Feb 17 12:46:24.905707 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:24.905334 2562 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Feb 17 12:46:24.905707 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:24.905408 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f84c60a8-1cec-4e96-8e96-b52be431e4ed-original-pull-secret podName:f84c60a8-1cec-4e96-8e96-b52be431e4ed nodeName:}" failed. No retries permitted until 2026-02-17 12:46:32.905387509 +0000 UTC m=+23.880150881 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/f84c60a8-1cec-4e96-8e96-b52be431e4ed-original-pull-secret") pod "global-pull-secret-syncer-xk97p" (UID: "f84c60a8-1cec-4e96-8e96-b52be431e4ed") : object "kube-system"/"original-pull-secret" not registered Feb 17 12:46:25.539295 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:25.539268 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-xk97p" Feb 17 12:46:25.539295 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:25.539295 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v7bfv" Feb 17 12:46:25.539546 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:25.539376 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-xk97p" podUID="f84c60a8-1cec-4e96-8e96-b52be431e4ed" Feb 17 12:46:25.539546 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:25.539512 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v7bfv" podUID="89c079c8-3da1-4b99-a30a-3d749cf8f842" Feb 17 12:46:26.214267 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:26.214232 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w9x7z\" (UniqueName: \"kubernetes.io/projected/89c079c8-3da1-4b99-a30a-3d749cf8f842-kube-api-access-w9x7z\") pod \"network-check-target-v7bfv\" (UID: \"89c079c8-3da1-4b99-a30a-3d749cf8f842\") " pod="openshift-network-diagnostics/network-check-target-v7bfv" Feb 17 12:46:26.214732 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:26.214369 2562 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 12:46:26.214732 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:26.214397 2562 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 12:46:26.214732 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:26.214413 2562 projected.go:194] Error preparing data for projected volume kube-api-access-w9x7z for pod openshift-network-diagnostics/network-check-target-v7bfv: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 12:46:26.214732 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:26.214498 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89c079c8-3da1-4b99-a30a-3d749cf8f842-kube-api-access-w9x7z podName:89c079c8-3da1-4b99-a30a-3d749cf8f842 nodeName:}" failed. No retries permitted until 2026-02-17 12:46:42.214462722 +0000 UTC m=+33.189226108 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-w9x7z" (UniqueName: "kubernetes.io/projected/89c079c8-3da1-4b99-a30a-3d749cf8f842-kube-api-access-w9x7z") pod "network-check-target-v7bfv" (UID: "89c079c8-3da1-4b99-a30a-3d749cf8f842") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 12:46:26.315409 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:26.315377 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7bb13ada-375a-497d-aced-02307525f449-metrics-certs\") pod \"network-metrics-daemon-j6r5z\" (UID: \"7bb13ada-375a-497d-aced-02307525f449\") " pod="openshift-multus/network-metrics-daemon-j6r5z" Feb 17 12:46:26.315587 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:26.315547 2562 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 12:46:26.315639 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:26.315614 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7bb13ada-375a-497d-aced-02307525f449-metrics-certs podName:7bb13ada-375a-497d-aced-02307525f449 nodeName:}" failed. No retries permitted until 2026-02-17 12:46:42.315592679 +0000 UTC m=+33.290356075 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7bb13ada-375a-497d-aced-02307525f449-metrics-certs") pod "network-metrics-daemon-j6r5z" (UID: "7bb13ada-375a-497d-aced-02307525f449") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 12:46:26.536711 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:26.536684 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j6r5z" Feb 17 12:46:26.536878 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:26.536802 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j6r5z" podUID="7bb13ada-375a-497d-aced-02307525f449" Feb 17 12:46:27.537223 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:27.537191 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-xk97p" Feb 17 12:46:27.537662 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:27.537313 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-xk97p" podUID="f84c60a8-1cec-4e96-8e96-b52be431e4ed" Feb 17 12:46:27.537662 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:27.537389 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v7bfv" Feb 17 12:46:27.537662 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:27.537500 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v7bfv" podUID="89c079c8-3da1-4b99-a30a-3d749cf8f842" Feb 17 12:46:28.537485 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:28.537440 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j6r5z" Feb 17 12:46:28.537947 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:28.537585 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j6r5z" podUID="7bb13ada-375a-497d-aced-02307525f449" Feb 17 12:46:29.537647 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:29.537434 2562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v7bfv" Feb 17 12:46:29.537962 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:29.537737 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v7bfv" podUID="89c079c8-3da1-4b99-a30a-3d749cf8f842" Feb 17 12:46:29.537962 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:29.537616 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-xk97p" Feb 17 12:46:29.537962 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:29.537826 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-xk97p" podUID="f84c60a8-1cec-4e96-8e96-b52be431e4ed" Feb 17 12:46:29.622327 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:29.622294 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-h4hcc" event={"ID":"d9f14b23-72e9-4631-b0c5-d568aca52c29","Type":"ContainerStarted","Data":"c460f6d3d4c83fb592198321ee44058150d9a1bf62b323c4136789e85a26c6da"} Feb 17 12:46:29.625632 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:29.625601 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7rjvn" event={"ID":"9f08688a-aac4-4adb-b2bd-90ffe60387e3","Type":"ContainerStarted","Data":"a21d3265e76bc733e7ea36e0c98e5840cde028938eef04bcf68b1d49d83490bc"} Feb 17 12:46:29.627748 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:29.627728 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-4cf5r" event={"ID":"26e5edd9-76b4-4050-a8bc-3e88e1993210","Type":"ContainerStarted","Data":"4be6b76a5479861fe0191a8cc94ab851bd35ff53e654af4460c42bfc35555d17"} Feb 17 12:46:29.629089 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:29.629063 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g5h5b" event={"ID":"b6d826a0-9201-49f4-8abb-ecf10f525a7e","Type":"ContainerStarted","Data":"f4055f9fb3739386ea3d2bb60db95f04c10ded4990732cf9d60ed8b928989382"} Feb 17 12:46:29.630173 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:29.630155 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-l2dwp" event={"ID":"4d5674be-49b1-49d0-b07d-656b923994f0","Type":"ContainerStarted","Data":"861d095af86581876f769f67466a0659fc971376a29b07b204fff679f2491721"} Feb 17 12:46:29.631659 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:29.631642 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-113.ec2.internal" event={"ID":"6ab42bf9151db011cb6080c20a9ad910","Type":"ContainerStarted","Data":"f991e0a25c5234bb80056249c6fac7fbcb23319ab12ed6d0e49d20930c63405a"} Feb 17 12:46:29.632837 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:29.632816 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-132-113.ec2.internal" 
event={"ID":"789ba31dc9b07674daaa230ccc26ae9e","Type":"ContainerStarted","Data":"de1e2a51658bf0b3d738295367620ea9ab7f8bf7b156e2c09c70b6cbd4381019"} Feb 17 12:46:29.633939 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:29.633919 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tsksj" event={"ID":"ea2f2f4d-70c4-4bc8-9640-5b18d2e41173","Type":"ContainerStarted","Data":"2c747388d4843eac8787fd99fdd38612917c4bb4a6ab90212b244641fd8af494"} Feb 17 12:46:29.634054 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:29.634016 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-h4hcc" podStartSLOduration=2.2678565920000002 podStartE2EDuration="20.634003379s" podCreationTimestamp="2026-02-17 12:46:09 +0000 UTC" firstStartedPulling="2026-02-17 12:46:10.837822205 +0000 UTC m=+1.812585578" lastFinishedPulling="2026-02-17 12:46:29.203968979 +0000 UTC m=+20.178732365" observedRunningTime="2026-02-17 12:46:29.633714141 +0000 UTC m=+20.608477537" watchObservedRunningTime="2026-02-17 12:46:29.634003379 +0000 UTC m=+20.608766775" Feb 17 12:46:29.657041 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:29.657014 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-proxy-ip-10-0-132-113.ec2.internal" podStartSLOduration=19.657004648 podStartE2EDuration="19.657004648s" podCreationTimestamp="2026-02-17 12:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 12:46:29.644266383 +0000 UTC m=+20.619029779" watchObservedRunningTime="2026-02-17 12:46:29.657004648 +0000 UTC m=+20.631768054" Feb 17 12:46:29.668724 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:29.668664 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-l2dwp" podStartSLOduration=2.278528298 podStartE2EDuration="20.668655906s" podCreationTimestamp="2026-02-17 12:46:09 +0000 UTC" firstStartedPulling="2026-02-17 12:46:10.816253423 +0000 UTC m=+1.791016797" lastFinishedPulling="2026-02-17 12:46:29.206381016 +0000 UTC m=+20.181144405" observedRunningTime="2026-02-17 12:46:29.657107212 +0000 UTC m=+20.631870585" watchObservedRunningTime="2026-02-17 12:46:29.668655906 +0000 UTC m=+20.643419300" Feb 17 12:46:29.682607 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:29.682578 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-7rjvn" podStartSLOduration=7.270224235 podStartE2EDuration="20.68256947s" podCreationTimestamp="2026-02-17 12:46:09 +0000 UTC" firstStartedPulling="2026-02-17 12:46:10.822338856 +0000 UTC m=+1.797102233" lastFinishedPulling="2026-02-17 12:46:24.234684075 +0000 UTC m=+15.209447468" observedRunningTime="2026-02-17 12:46:29.668221404 +0000 UTC m=+20.642984799" watchObservedRunningTime="2026-02-17 12:46:29.68256947 +0000 UTC m=+20.657332864" Feb 17 12:46:29.706568 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:29.706532 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/konnectivity-agent-4cf5r" podStartSLOduration=2.348926361 podStartE2EDuration="20.706523977s" podCreationTimestamp="2026-02-17 12:46:09 +0000 UTC" firstStartedPulling="2026-02-17 12:46:10.846461153 +0000 UTC m=+1.821224526" lastFinishedPulling="2026-02-17 12:46:29.204058756 +0000 UTC m=+20.178822142" observedRunningTime="2026-02-17 12:46:29.706275532 
+0000 UTC m=+20.681038927" watchObservedRunningTime="2026-02-17 12:46:29.706523977 +0000 UTC m=+20.681287372" Feb 17 12:46:29.706759 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:29.706740 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-g5h5b" podStartSLOduration=2.25853784 podStartE2EDuration="20.706735363s" podCreationTimestamp="2026-02-17 12:46:09 +0000 UTC" firstStartedPulling="2026-02-17 12:46:10.832088575 +0000 UTC m=+1.806851954" lastFinishedPulling="2026-02-17 12:46:29.280286086 +0000 UTC m=+20.255049477" observedRunningTime="2026-02-17 12:46:29.682141498 +0000 UTC m=+20.656904893" watchObservedRunningTime="2026-02-17 12:46:29.706735363 +0000 UTC m=+20.681498758" Feb 17 12:46:29.718326 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:29.718298 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-113.ec2.internal" podStartSLOduration=19.718288829 podStartE2EDuration="19.718288829s" podCreationTimestamp="2026-02-17 12:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 12:46:29.718078745 +0000 UTC m=+20.692842141" watchObservedRunningTime="2026-02-17 12:46:29.718288829 +0000 UTC m=+20.693052224" Feb 17 12:46:30.167693 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:30.167496 2562 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/konnectivity-agent-4cf5r" Feb 17 12:46:30.168152 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:30.168138 2562 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/konnectivity-agent-4cf5r" Feb 17 12:46:30.537017 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:30.536990 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j6r5z" Feb 17 12:46:30.537170 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:30.537106 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j6r5z" podUID="7bb13ada-375a-497d-aced-02307525f449" Feb 17 12:46:30.638864 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:30.638831 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-6mljf" event={"ID":"47c476ed-1111-4f04-920f-dc8a70a378a0","Type":"ContainerStarted","Data":"5486b10e52663d1b34d4cb64f1cdac5de6ff9785dd877e7db309780e5a5594c4"} Feb 17 12:46:30.640321 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:30.640293 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-wkhkn" event={"ID":"5c5191ba-7e5f-4e80-8ce5-eac97ea608dc","Type":"ContainerStarted","Data":"d3cf665f1aade56e5279fb5232c94f192a1ffe0348438747e05a2e9e448b13ba"} Feb 17 12:46:30.641760 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:30.641656 2562 generic.go:358] "Generic (PLEG): container finished" podID="ea2f2f4d-70c4-4bc8-9640-5b18d2e41173" containerID="2c747388d4843eac8787fd99fdd38612917c4bb4a6ab90212b244641fd8af494" exitCode=0 Feb 17 12:46:30.641760 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:30.641730 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tsksj" event={"ID":"ea2f2f4d-70c4-4bc8-9640-5b18d2e41173","Type":"ContainerDied","Data":"2c747388d4843eac8787fd99fdd38612917c4bb4a6ab90212b244641fd8af494"} Feb 17 12:46:30.645169 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:30.645076 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" event={"ID":"7282992d-70a3-40ce-96b1-03529732a700","Type":"ContainerStarted","Data":"179c4fb7bce81e81ba24ed09e9708de32ba53dfe225183aa4bb53f301c5808ef"} Feb 17 12:46:30.645169 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:30.645121 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" event={"ID":"7282992d-70a3-40ce-96b1-03529732a700","Type":"ContainerStarted","Data":"99144cdd150730a45ac56794874dbbd025ed199bd0b1943851e99bf36a6d6798"} Feb 17 12:46:30.645169 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:30.645136 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" event={"ID":"7282992d-70a3-40ce-96b1-03529732a700","Type":"ContainerStarted","Data":"1ffde7af01c08ff7ef6e2a10d2e54555656a44a5ab4bd1150154a38a0bfdbb24"} Feb 17 12:46:30.645169 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:30.645149 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" event={"ID":"7282992d-70a3-40ce-96b1-03529732a700","Type":"ContainerStarted","Data":"59497d4719fbc08bbda12deef8df6f647a551c2f220091c213f44621bd2c3002"} Feb 17 12:46:30.645169 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:30.645162 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" event={"ID":"7282992d-70a3-40ce-96b1-03529732a700","Type":"ContainerStarted","Data":"68522ae3c13687559f57d0272b5a4eb8d204c535ba32652c7c1a6117c8276d26"} Feb 17 12:46:30.645169 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:30.645173 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" event={"ID":"7282992d-70a3-40ce-96b1-03529732a700","Type":"ContainerStarted","Data":"7426c807e1981dae84e66573a58be02523fa405cbdc00363036c65d68c56f6f9"} Feb 17 12:46:30.645870 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:30.645849 2562 
kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kube-system/konnectivity-agent-4cf5r" Feb 17 12:46:30.646358 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:30.646330 2562 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/konnectivity-agent-4cf5r" Feb 17 12:46:30.653433 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:30.653388 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/iptables-alerter-wkhkn" podStartSLOduration=2.342157948 podStartE2EDuration="20.65337378s" podCreationTimestamp="2026-02-17 12:46:10 +0000 UTC" firstStartedPulling="2026-02-17 12:46:10.851575676 +0000 UTC m=+1.826339052" lastFinishedPulling="2026-02-17 12:46:29.162791508 +0000 UTC m=+20.137554884" observedRunningTime="2026-02-17 12:46:30.653176062 +0000 UTC m=+21.627939456" watchObservedRunningTime="2026-02-17 12:46:30.65337378 +0000 UTC m=+21.628137175" Feb 17 12:46:30.719423 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:30.719398 2562 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock" Feb 17 12:46:31.537339 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:31.536951 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-xk97p" Feb 17 12:46:31.537339 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:31.537068 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-xk97p" podUID="f84c60a8-1cec-4e96-8e96-b52be431e4ed" Feb 17 12:46:31.537567 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:31.536951 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v7bfv" Feb 17 12:46:31.537567 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:31.537455 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v7bfv" podUID="89c079c8-3da1-4b99-a30a-3d749cf8f842" Feb 17 12:46:31.549619 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:31.549529 2562 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock","Timestamp":"2026-02-17T12:46:30.719418034Z","UUID":"121038cb-7c84-40d2-ba15-910665285649","Handler":null,"Name":"","Endpoint":""} Feb 17 12:46:31.552759 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:31.552644 2562 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: ebs.csi.aws.com endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock versions: 1.0.0 Feb 17 12:46:31.552759 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:31.552672 2562 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: ebs.csi.aws.com at endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock Feb 17 12:46:31.649114 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:31.648952 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-6mljf" event={"ID":"47c476ed-1111-4f04-920f-dc8a70a378a0","Type":"ContainerStarted","Data":"1a53051cfe6cc82cd385788c0f1281de3ca9ed54dfd3e2935c37a7b61cf0af3e"} Feb 17 12:46:31.649505 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:31.649126 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-6mljf" event={"ID":"47c476ed-1111-4f04-920f-dc8a70a378a0","Type":"ContainerStarted","Data":"8768ba2d318432041e3ce9e32f1c8cd5a3e2f00132f2ae8015daa5a8d0099c5a"} Feb 17 12:46:31.669523 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:31.667173 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-6mljf" podStartSLOduration=1.9906709949999999 podStartE2EDuration="22.667158351s" podCreationTimestamp="2026-02-17 12:46:09 +0000 UTC" firstStartedPulling="2026-02-17 12:46:10.830051691 +0000 UTC m=+1.804815076" lastFinishedPulling="2026-02-17 12:46:31.506539055 +0000 UTC m=+22.481302432" observedRunningTime="2026-02-17 12:46:31.666541825 +0000 UTC m=+22.641305211" watchObservedRunningTime="2026-02-17 12:46:31.667158351 +0000 UTC m=+22.641921746" Feb 17 12:46:32.537215 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:32.537140 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j6r5z" Feb 17 12:46:32.537370 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:32.537263 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j6r5z" podUID="7bb13ada-375a-497d-aced-02307525f449" Feb 17 12:46:32.653694 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:32.653644 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" event={"ID":"7282992d-70a3-40ce-96b1-03529732a700","Type":"ContainerStarted","Data":"e744ea2afb637e6f1cbf4ee05f4536046d065a5cd69a24825b1564b08a06fba3"} Feb 17 12:46:32.965388 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:32.965315 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/f84c60a8-1cec-4e96-8e96-b52be431e4ed-original-pull-secret\") pod \"global-pull-secret-syncer-xk97p\" (UID: \"f84c60a8-1cec-4e96-8e96-b52be431e4ed\") " pod="kube-system/global-pull-secret-syncer-xk97p" Feb 17 12:46:32.965560 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:32.965433 2562 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Feb 17 12:46:32.965560 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:32.965517 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f84c60a8-1cec-4e96-8e96-b52be431e4ed-original-pull-secret podName:f84c60a8-1cec-4e96-8e96-b52be431e4ed nodeName:}" failed. No retries permitted until 2026-02-17 12:46:48.965498651 +0000 UTC m=+39.940262029 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/f84c60a8-1cec-4e96-8e96-b52be431e4ed-original-pull-secret") pod "global-pull-secret-syncer-xk97p" (UID: "f84c60a8-1cec-4e96-8e96-b52be431e4ed") : object "kube-system"/"original-pull-secret" not registered Feb 17 12:46:33.536652 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:33.536621 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v7bfv" Feb 17 12:46:33.536652 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:33.536645 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-xk97p" Feb 17 12:46:33.536866 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:33.536748 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v7bfv" podUID="89c079c8-3da1-4b99-a30a-3d749cf8f842" Feb 17 12:46:33.536916 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:33.536884 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-xk97p" podUID="f84c60a8-1cec-4e96-8e96-b52be431e4ed" Feb 17 12:46:34.536778 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:34.536757 2562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-j6r5z" Feb 17 12:46:34.537399 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:34.536880 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j6r5z" podUID="7bb13ada-375a-497d-aced-02307525f449" Feb 17 12:46:34.661337 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:34.661023 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" event={"ID":"7282992d-70a3-40ce-96b1-03529732a700","Type":"ContainerStarted","Data":"f9845f631a697a1b945944072ff1827ea8580e39e4757a43b312d12b2d779ea4"} Feb 17 12:46:34.661438 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:34.661361 2562 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:34.661438 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:34.661399 2562 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:34.678143 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:34.678107 2562 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:34.687489 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:34.687406 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" podStartSLOduration=6.732595442 podStartE2EDuration="25.687389226s" podCreationTimestamp="2026-02-17 12:46:09 +0000 UTC" firstStartedPulling="2026-02-17 12:46:10.792207358 +0000 UTC m=+1.766970735" lastFinishedPulling="2026-02-17 12:46:29.747001135 +0000 UTC m=+20.721764519" observedRunningTime="2026-02-17 12:46:34.685303993 +0000 UTC m=+25.660067426" watchObservedRunningTime="2026-02-17 12:46:34.687389226 +0000 UTC m=+25.662152625" Feb 17 12:46:35.536549 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:35.536521 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v7bfv" Feb 17 12:46:35.536694 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:35.536524 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-xk97p" Feb 17 12:46:35.536756 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:35.536616 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v7bfv" podUID="89c079c8-3da1-4b99-a30a-3d749cf8f842" Feb 17 12:46:35.536756 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:35.536708 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-xk97p" podUID="f84c60a8-1cec-4e96-8e96-b52be431e4ed" Feb 17 12:46:35.663670 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:35.663644 2562 generic.go:358] "Generic (PLEG): container finished" podID="ea2f2f4d-70c4-4bc8-9640-5b18d2e41173" containerID="4acdeaf8fbf8076466b7c3f47fd058813fa134a708bbe8b02eeaa52b0abbe12f" exitCode=0 Feb 17 12:46:35.663807 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:35.663723 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tsksj" event={"ID":"ea2f2f4d-70c4-4bc8-9640-5b18d2e41173","Type":"ContainerDied","Data":"4acdeaf8fbf8076466b7c3f47fd058813fa134a708bbe8b02eeaa52b0abbe12f"} Feb 17 12:46:35.664775 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:35.664253 2562 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:35.678333 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:35.678311 2562 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q" Feb 17 12:46:36.500171 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:36.500143 2562 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-v7bfv"] Feb 17 12:46:36.500372 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:36.500242 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v7bfv" Feb 17 12:46:36.500372 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:36.500342 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v7bfv" podUID="89c079c8-3da1-4b99-a30a-3d749cf8f842" Feb 17 12:46:36.503341 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:36.503314 2562 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-xk97p"] Feb 17 12:46:36.503491 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:36.503416 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-xk97p" Feb 17 12:46:36.503554 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:36.503520 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-xk97p" podUID="f84c60a8-1cec-4e96-8e96-b52be431e4ed" Feb 17 12:46:36.503958 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:36.503935 2562 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-j6r5z"] Feb 17 12:46:36.504052 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:36.504034 2562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-j6r5z" Feb 17 12:46:36.504163 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:36.504128 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j6r5z" podUID="7bb13ada-375a-497d-aced-02307525f449" Feb 17 12:46:37.669087 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:37.669054 2562 generic.go:358] "Generic (PLEG): container finished" podID="ea2f2f4d-70c4-4bc8-9640-5b18d2e41173" containerID="6c83b7f8d8623e487e421c68ff10bf61b5cd832571b4261e0eebce567302b9a6" exitCode=0 Feb 17 12:46:37.669558 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:37.669133 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tsksj" event={"ID":"ea2f2f4d-70c4-4bc8-9640-5b18d2e41173","Type":"ContainerDied","Data":"6c83b7f8d8623e487e421c68ff10bf61b5cd832571b4261e0eebce567302b9a6"} Feb 17 12:46:38.536825 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:38.536799 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j6r5z" Feb 17 12:46:38.536941 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:38.536799 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v7bfv" Feb 17 12:46:38.536941 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:38.536901 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j6r5z" podUID="7bb13ada-375a-497d-aced-02307525f449" Feb 17 12:46:38.537025 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:38.536799 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-xk97p" Feb 17 12:46:38.537025 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:38.536967 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v7bfv" podUID="89c079c8-3da1-4b99-a30a-3d749cf8f842" Feb 17 12:46:38.537096 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:38.537038 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-xk97p" podUID="f84c60a8-1cec-4e96-8e96-b52be431e4ed" Feb 17 12:46:38.672910 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:38.672889 2562 generic.go:358] "Generic (PLEG): container finished" podID="ea2f2f4d-70c4-4bc8-9640-5b18d2e41173" containerID="9854b45bdd4096ff0b4eff334a6141bbfa9a91e6a36a1d72da50b056ff2b95af" exitCode=0 Feb 17 12:46:38.673238 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:38.672921 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tsksj" event={"ID":"ea2f2f4d-70c4-4bc8-9640-5b18d2e41173","Type":"ContainerDied","Data":"9854b45bdd4096ff0b4eff334a6141bbfa9a91e6a36a1d72da50b056ff2b95af"} Feb 17 12:46:40.537517 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:40.537323 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j6r5z" Feb 17 12:46:40.538043 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:40.537323 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-xk97p" Feb 17 12:46:40.538043 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:40.537621 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j6r5z" podUID="7bb13ada-375a-497d-aced-02307525f449" Feb 17 12:46:40.538043 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:40.537688 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-xk97p" podUID="f84c60a8-1cec-4e96-8e96-b52be431e4ed" Feb 17 12:46:40.538043 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:40.537327 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v7bfv" Feb 17 12:46:40.538043 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:40.537773 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v7bfv" podUID="89c079c8-3da1-4b99-a30a-3d749cf8f842" Feb 17 12:46:42.241497 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:42.241442 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w9x7z\" (UniqueName: \"kubernetes.io/projected/89c079c8-3da1-4b99-a30a-3d749cf8f842-kube-api-access-w9x7z\") pod \"network-check-target-v7bfv\" (UID: \"89c079c8-3da1-4b99-a30a-3d749cf8f842\") " pod="openshift-network-diagnostics/network-check-target-v7bfv" Feb 17 12:46:42.241974 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:42.241637 2562 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 12:46:42.241974 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:42.241664 2562 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 12:46:42.241974 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:42.241677 2562 projected.go:194] Error preparing data for projected volume kube-api-access-w9x7z for pod openshift-network-diagnostics/network-check-target-v7bfv: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 12:46:42.241974 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:42.241744 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89c079c8-3da1-4b99-a30a-3d749cf8f842-kube-api-access-w9x7z podName:89c079c8-3da1-4b99-a30a-3d749cf8f842 nodeName:}" failed. No retries permitted until 2026-02-17 12:47:14.241722842 +0000 UTC m=+65.216486215 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-w9x7z" (UniqueName: "kubernetes.io/projected/89c079c8-3da1-4b99-a30a-3d749cf8f842-kube-api-access-w9x7z") pod "network-check-target-v7bfv" (UID: "89c079c8-3da1-4b99-a30a-3d749cf8f842") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 12:46:42.342758 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:42.342723 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7bb13ada-375a-497d-aced-02307525f449-metrics-certs\") pod \"network-metrics-daemon-j6r5z\" (UID: \"7bb13ada-375a-497d-aced-02307525f449\") " pod="openshift-multus/network-metrics-daemon-j6r5z" Feb 17 12:46:42.342934 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:42.342830 2562 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 12:46:42.342934 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:42.342894 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7bb13ada-375a-497d-aced-02307525f449-metrics-certs podName:7bb13ada-375a-497d-aced-02307525f449 nodeName:}" failed. No retries permitted until 2026-02-17 12:47:14.342875704 +0000 UTC m=+65.317639090 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7bb13ada-375a-497d-aced-02307525f449-metrics-certs") pod "network-metrics-daemon-j6r5z" (UID: "7bb13ada-375a-497d-aced-02307525f449") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 12:46:42.537299 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:42.537225 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j6r5z" Feb 17 12:46:42.537299 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:42.537254 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-xk97p" Feb 17 12:46:42.537534 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:42.537216 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v7bfv" Feb 17 12:46:42.537534 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:42.537342 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j6r5z" podUID="7bb13ada-375a-497d-aced-02307525f449" Feb 17 12:46:42.537534 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:42.537445 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v7bfv" podUID="89c079c8-3da1-4b99-a30a-3d749cf8f842" Feb 17 12:46:42.537666 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:42.537569 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-xk97p" podUID="f84c60a8-1cec-4e96-8e96-b52be431e4ed" Feb 17 12:46:44.536522 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:44.536489 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v7bfv" Feb 17 12:46:44.536522 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:44.536503 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j6r5z" Feb 17 12:46:44.537062 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:44.536586 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v7bfv" podUID="89c079c8-3da1-4b99-a30a-3d749cf8f842" Feb 17 12:46:44.537062 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:44.536489 2562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-xk97p" Feb 17 12:46:44.537062 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:44.536675 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j6r5z" podUID="7bb13ada-375a-497d-aced-02307525f449" Feb 17 12:46:44.537062 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:44.536751 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-xk97p" podUID="f84c60a8-1cec-4e96-8e96-b52be431e4ed" Feb 17 12:46:44.684511 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:44.684450 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tsksj" event={"ID":"ea2f2f4d-70c4-4bc8-9640-5b18d2e41173","Type":"ContainerStarted","Data":"84ae4d9a079815f73e003f3427be796e5126c0f000405bbc91ca74c258c43772"} Feb 17 12:46:45.688238 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:45.688200 2562 generic.go:358] "Generic (PLEG): container finished" podID="ea2f2f4d-70c4-4bc8-9640-5b18d2e41173" containerID="84ae4d9a079815f73e003f3427be796e5126c0f000405bbc91ca74c258c43772" exitCode=0 Feb 17 12:46:45.688772 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:45.688253 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tsksj" event={"ID":"ea2f2f4d-70c4-4bc8-9640-5b18d2e41173","Type":"ContainerDied","Data":"84ae4d9a079815f73e003f3427be796e5126c0f000405bbc91ca74c258c43772"} Feb 17 12:46:46.536776 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:46.536702 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j6r5z" Feb 17 12:46:46.536776 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:46.536736 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v7bfv" Feb 17 12:46:46.536933 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:46.536702 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-xk97p" Feb 17 12:46:46.536933 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:46.536793 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j6r5z" podUID="7bb13ada-375a-497d-aced-02307525f449" Feb 17 12:46:46.536933 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:46.536886 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-xk97p" podUID="f84c60a8-1cec-4e96-8e96-b52be431e4ed" Feb 17 12:46:46.537026 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:46.536959 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v7bfv" podUID="89c079c8-3da1-4b99-a30a-3d749cf8f842" Feb 17 12:46:46.692288 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:46.692264 2562 generic.go:358] "Generic (PLEG): container finished" podID="ea2f2f4d-70c4-4bc8-9640-5b18d2e41173" containerID="4a2ef332fca5c15433df3ae6c324a8975beab932def0cca4451a6523c3a9059c" exitCode=0 Feb 17 12:46:46.692652 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:46.692310 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tsksj" event={"ID":"ea2f2f4d-70c4-4bc8-9640-5b18d2e41173","Type":"ContainerDied","Data":"4a2ef332fca5c15433df3ae6c324a8975beab932def0cca4451a6523c3a9059c"} Feb 17 12:46:47.696391 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:47.696353 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tsksj" event={"ID":"ea2f2f4d-70c4-4bc8-9640-5b18d2e41173","Type":"ContainerStarted","Data":"d24fe259355e5210602e12cfbd4717a7825096359f426e5e9493b61dd78b1e41"} Feb 17 12:46:47.719417 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:47.719368 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-tsksj" podStartSLOduration=5.049935982 podStartE2EDuration="38.719356363s" podCreationTimestamp="2026-02-17 12:46:09 +0000 UTC" firstStartedPulling="2026-02-17 12:46:10.84191978 +0000 UTC m=+1.816683168" lastFinishedPulling="2026-02-17 12:46:44.511340173 +0000 UTC m=+35.486103549" observedRunningTime="2026-02-17 12:46:47.719002269 +0000 UTC m=+38.693765664" watchObservedRunningTime="2026-02-17 12:46:47.719356363 +0000 UTC m=+38.694119757" Feb 17 12:46:48.537099 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:48.537066 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-xk97p" Feb 17 12:46:48.537099 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:48.537094 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v7bfv" Feb 17 12:46:48.537332 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:48.537153 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-xk97p" podUID="f84c60a8-1cec-4e96-8e96-b52be431e4ed" Feb 17 12:46:48.537332 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:48.537186 2562 util.go:30] "No sandbox for pod can be found. 
Feb 17 12:46:48.537332 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:48.537285 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j6r5z" podUID="7bb13ada-375a-497d-aced-02307525f449"
Feb 17 12:46:48.537435 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:48.537333 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v7bfv" podUID="89c079c8-3da1-4b99-a30a-3d749cf8f842"
Feb 17 12:46:48.998022 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:48.997996 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/f84c60a8-1cec-4e96-8e96-b52be431e4ed-original-pull-secret\") pod \"global-pull-secret-syncer-xk97p\" (UID: \"f84c60a8-1cec-4e96-8e96-b52be431e4ed\") " pod="kube-system/global-pull-secret-syncer-xk97p"
Feb 17 12:46:48.998343 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:48.998100 2562 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Feb 17 12:46:48.998343 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:48.998150 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f84c60a8-1cec-4e96-8e96-b52be431e4ed-original-pull-secret podName:f84c60a8-1cec-4e96-8e96-b52be431e4ed nodeName:}" failed. No retries permitted until 2026-02-17 12:47:20.998137351 +0000 UTC m=+71.972900724 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/f84c60a8-1cec-4e96-8e96-b52be431e4ed-original-pull-secret") pod "global-pull-secret-syncer-xk97p" (UID: "f84c60a8-1cec-4e96-8e96-b52be431e4ed") : object "kube-system"/"original-pull-secret" not registered
Feb 17 12:46:50.537378 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:50.537342 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v7bfv"
Feb 17 12:46:50.537813 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:50.537344 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-xk97p"
Feb 17 12:46:50.537813 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:50.537453 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v7bfv" podUID="89c079c8-3da1-4b99-a30a-3d749cf8f842"
Feb 17 12:46:50.537813 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:50.537344 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j6r5z"
Feb 17 12:46:50.537813 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:50.537523 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-xk97p" podUID="f84c60a8-1cec-4e96-8e96-b52be431e4ed"
Feb 17 12:46:50.537813 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:50.537575 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j6r5z" podUID="7bb13ada-375a-497d-aced-02307525f449"
Feb 17 12:46:52.536741 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:52.536710 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j6r5z"
Feb 17 12:46:52.537375 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:52.536710 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v7bfv"
Feb 17 12:46:52.537375 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:52.536832 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j6r5z" podUID="7bb13ada-375a-497d-aced-02307525f449"
Feb 17 12:46:52.537375 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:52.536710 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-xk97p"
Feb 17 12:46:52.537375 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:52.536878 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v7bfv" podUID="89c079c8-3da1-4b99-a30a-3d749cf8f842"
Feb 17 12:46:52.537375 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:52.536948 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-xk97p" podUID="f84c60a8-1cec-4e96-8e96-b52be431e4ed"
Feb 17 12:46:54.536819 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:54.536789 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v7bfv"
Feb 17 12:46:54.536819 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:54.536812 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j6r5z"
Feb 17 12:46:54.537187 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:54.536888 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v7bfv" podUID="89c079c8-3da1-4b99-a30a-3d749cf8f842"
Feb 17 12:46:54.537187 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:54.536915 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-xk97p"
Feb 17 12:46:54.537187 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:54.536984 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-xk97p" podUID="f84c60a8-1cec-4e96-8e96-b52be431e4ed"
Feb 17 12:46:54.537187 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:54.537060 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j6r5z" podUID="7bb13ada-375a-497d-aced-02307525f449"
Feb 17 12:46:56.536690 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:56.536660 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v7bfv"
Feb 17 12:46:56.537147 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:56.536660 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-xk97p"
Feb 17 12:46:56.537147 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:56.536747 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v7bfv" podUID="89c079c8-3da1-4b99-a30a-3d749cf8f842"
Feb 17 12:46:56.537147 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:56.536826 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-xk97p" podUID="f84c60a8-1cec-4e96-8e96-b52be431e4ed"
Feb 17 12:46:56.537147 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:56.536660 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j6r5z"
Feb 17 12:46:56.537147 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:56.536900 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j6r5z" podUID="7bb13ada-375a-497d-aced-02307525f449"
Feb 17 12:46:58.536758 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:58.536729 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v7bfv"
Feb 17 12:46:58.537153 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:58.536732 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j6r5z"
Feb 17 12:46:58.537153 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:58.536825 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v7bfv" podUID="89c079c8-3da1-4b99-a30a-3d749cf8f842"
Feb 17 12:46:58.537153 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:58.536936 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j6r5z" podUID="7bb13ada-375a-497d-aced-02307525f449"
Feb 17 12:46:58.537153 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:46:58.536736 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-xk97p"
Feb 17 12:46:58.537153 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:46:58.537012 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-xk97p" podUID="f84c60a8-1cec-4e96-8e96-b52be431e4ed"
Feb 17 12:47:00.536793 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:00.536761 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-xk97p"
Feb 17 12:47:00.537296 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:00.536767 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j6r5z"
Feb 17 12:47:00.537296 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:00.536766 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v7bfv"
Feb 17 12:47:00.537296 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:00.536971 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j6r5z" podUID="7bb13ada-375a-497d-aced-02307525f449"
Feb 17 12:47:00.537296 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:00.536847 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-xk97p" podUID="f84c60a8-1cec-4e96-8e96-b52be431e4ed"
Feb 17 12:47:00.537296 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:00.537033 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v7bfv" podUID="89c079c8-3da1-4b99-a30a-3d749cf8f842"
Feb 17 12:47:02.537234 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:02.537059 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-xk97p"
Feb 17 12:47:02.537743 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:02.537063 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v7bfv"
Feb 17 12:47:02.537743 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:02.537294 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-xk97p" podUID="f84c60a8-1cec-4e96-8e96-b52be431e4ed"
Feb 17 12:47:02.537743 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:02.537394 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v7bfv" podUID="89c079c8-3da1-4b99-a30a-3d749cf8f842"
Feb 17 12:47:02.537743 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:02.537062 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j6r5z"
Feb 17 12:47:02.537743 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:02.537523 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j6r5z" podUID="7bb13ada-375a-497d-aced-02307525f449"
pod="openshift-multus/network-metrics-daemon-j6r5z" podUID="7bb13ada-375a-497d-aced-02307525f449" Feb 17 12:47:04.319247 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.319217 2562 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-113.ec2.internal" event="NodeReady" Feb 17 12:47:04.319567 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.319310 2562 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Feb 17 12:47:04.350230 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.350174 2562 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-5b897fd5fc-9ppd9"] Feb 17 12:47:04.353808 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.353108 2562 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-799fd54bdd-lm8dz"] Feb 17 12:47:04.353808 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.353215 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5b897fd5fc-9ppd9" Feb 17 12:47:04.355834 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.355817 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Feb 17 12:47:04.355994 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.355976 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Feb 17 12:47:04.356174 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.356160 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"default-dockercfg-95vn7\"" Feb 17 12:47:04.356412 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.356392 2562 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-5c97f697b9-dxl5x"] Feb 17 12:47:04.356569 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.356555 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz" Feb 17 12:47:04.358754 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.358737 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Feb 17 12:47:04.358844 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.358771 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-68g2c\"" Feb 17 12:47:04.358977 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.358954 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-private-configuration\"" Feb 17 12:47:04.358977 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.358960 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Feb 17 12:47:04.359283 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.359254 2562 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-6b9d458cff-gkddm"] Feb 17 12:47:04.359406 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.359388 2562 util.go:30] "No sandbox for pod can be found. 
Feb 17 12:47:04.361837 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.361817 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"open-cluster-management-image-pull-credentials\""
Feb 17 12:47:04.361955 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.361821 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"work-manager-hub-kubeconfig\""
Feb 17 12:47:04.361955 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.361847 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"open-cluster-management-agent-addon\"/\"kube-root-ca.crt\""
Feb 17 12:47:04.362094 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.362076 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"open-cluster-management-agent-addon\"/\"openshift-service-ca.crt\""
Feb 17 12:47:04.362184 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.362161 2562 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9"]
Feb 17 12:47:04.362311 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.362291 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-6b9d458cff-gkddm"
Feb 17 12:47:04.366319 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.366293 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Feb 17 12:47:04.366413 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.366321 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"managed-serviceaccount-dockercfg-swdhc\""
Feb 17 12:47:04.366740 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.366717 2562 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-5b897fd5fc-9ppd9"]
Feb 17 12:47:04.366931 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.366911 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"managed-serviceaccount-hub-kubeconfig\""
Feb 17 12:47:04.367112 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.366921 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9"
Feb 17 12:47:04.367786 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.367763 2562 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-799fd54bdd-lm8dz"]
Feb 17 12:47:04.369614 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.369595 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-open-cluster-management.io-proxy-agent-signer-client-cert\""
Feb 17 12:47:04.370003 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.369981 2562 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-5c97f697b9-dxl5x"]
Feb 17 12:47:04.370138 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.370119 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-hub-kubeconfig\""
Feb 17 12:47:04.370389 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.370370 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-ca\""
Feb 17 12:47:04.370750 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.370374 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-service-proxy-server-certificates\""
Feb 17 12:47:04.371353 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.371307 2562 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-6b9d458cff-gkddm"]
Feb 17 12:47:04.372061 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.372042 2562 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9"]
Feb 17 12:47:04.375262 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.375241 2562 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-6hddw"]
Feb 17 12:47:04.378364 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.378347 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6hddw"
Feb 17 12:47:04.380884 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.380868 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Feb 17 12:47:04.381005 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.380978 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Feb 17 12:47:04.381005 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.380988 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9qj8f\""
Feb 17 12:47:04.381119 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.380951 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Feb 17 12:47:04.385516 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.385496 2562 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-6hddw"]
Feb 17 12:47:04.413829 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.413808 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/c4816a20-7b75-4aa1-a0f3-b366e10dba38-hub-kubeconfig\") pod \"cluster-proxy-proxy-agent-784bf8c68f-6mtl9\" (UID: \"c4816a20-7b75-4aa1-a0f3-b366e10dba38\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9"
Feb 17 12:47:04.413942 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.413838 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-tls\") pod \"image-registry-799fd54bdd-lm8dz\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") " pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz"
Feb 17 12:47:04.413942 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.413859 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5e0da837-f776-4a61-a361-e3a2c5f4a750-ca-trust-extracted\") pod \"image-registry-799fd54bdd-lm8dz\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") " pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz"
Feb 17 12:47:04.413942 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.413884 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/51aee5ab-5849-470c-af18-77488ddcced7-tmp\") pod \"klusterlet-addon-workmgr-5c97f697b9-dxl5x\" (UID: \"51aee5ab-5849-470c-af18-77488ddcced7\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5c97f697b9-dxl5x"
Feb 17 12:47:04.413942 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.413901 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/5e0da837-f776-4a61-a361-e3a2c5f4a750-image-registry-private-configuration\") pod \"image-registry-799fd54bdd-lm8dz\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") " pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz"
Feb 17 12:47:04.414142 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.413954 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/51aee5ab-5849-470c-af18-77488ddcced7-klusterlet-config\") pod \"klusterlet-addon-workmgr-5c97f697b9-dxl5x\" (UID: \"51aee5ab-5849-470c-af18-77488ddcced7\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5c97f697b9-dxl5x"
Feb 17 12:47:04.414142 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.413998 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-certificates\") pod \"image-registry-799fd54bdd-lm8dz\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") " pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz"
Feb 17 12:47:04.414142 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.414037 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-proxy-server-cert\" (UniqueName: \"kubernetes.io/secret/c4816a20-7b75-4aa1-a0f3-b366e10dba38-service-proxy-server-cert\") pod \"cluster-proxy-proxy-agent-784bf8c68f-6mtl9\" (UID: \"c4816a20-7b75-4aa1-a0f3-b366e10dba38\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9"
Feb 17 12:47:04.414142 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.414077 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/c8f05b6f-e392-41e6-8041-5c973696dbb5-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-6b9d458cff-gkddm\" (UID: \"c8f05b6f-e392-41e6-8041-5c973696dbb5\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-6b9d458cff-gkddm"
Feb 17 12:47:04.414142 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.414104 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5e0da837-f776-4a61-a361-e3a2c5f4a750-trusted-ca\") pod \"image-registry-799fd54bdd-lm8dz\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") " pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz"
Feb 17 12:47:04.414376 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.414142 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ebe280b5-2859-4a11-b980-08a9489a08c6-nginx-conf\") pod \"networking-console-plugin-5b897fd5fc-9ppd9\" (UID: \"ebe280b5-2859-4a11-b980-08a9489a08c6\") " pod="openshift-network-console/networking-console-plugin-5b897fd5fc-9ppd9"
Feb 17 12:47:04.414376 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.414170 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/ebe280b5-2859-4a11-b980-08a9489a08c6-networking-console-plugin-cert\") pod \"networking-console-plugin-5b897fd5fc-9ppd9\" (UID: \"ebe280b5-2859-4a11-b980-08a9489a08c6\") " pod="openshift-network-console/networking-console-plugin-5b897fd5fc-9ppd9"
Feb 17 12:47:04.414376 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.414196 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5e0da837-f776-4a61-a361-e3a2c5f4a750-installation-pull-secrets\") pod \"image-registry-799fd54bdd-lm8dz\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") " pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz"
\"kubernetes.io/secret/5e0da837-f776-4a61-a361-e3a2c5f4a750-installation-pull-secrets\") pod \"image-registry-799fd54bdd-lm8dz\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") " pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz" Feb 17 12:47:04.414376 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.414218 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca\" (UniqueName: \"kubernetes.io/secret/c4816a20-7b75-4aa1-a0f3-b366e10dba38-ca\") pod \"cluster-proxy-proxy-agent-784bf8c68f-6mtl9\" (UID: \"c4816a20-7b75-4aa1-a0f3-b366e10dba38\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9" Feb 17 12:47:04.414376 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.414244 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ocpservice-ca\" (UniqueName: \"kubernetes.io/configmap/c4816a20-7b75-4aa1-a0f3-b366e10dba38-ocpservice-ca\") pod \"cluster-proxy-proxy-agent-784bf8c68f-6mtl9\" (UID: \"c4816a20-7b75-4aa1-a0f3-b366e10dba38\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9" Feb 17 12:47:04.414376 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.414268 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdb7b\" (UniqueName: \"kubernetes.io/projected/ed10247d-2496-4512-b5e3-24cf0aa60754-kube-api-access-vdb7b\") pod \"ingress-canary-6hddw\" (UID: \"ed10247d-2496-4512-b5e3-24cf0aa60754\") " pod="openshift-ingress-canary/ingress-canary-6hddw" Feb 17 12:47:04.414376 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.414312 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlvwv\" (UniqueName: \"kubernetes.io/projected/c8f05b6f-e392-41e6-8041-5c973696dbb5-kube-api-access-zlvwv\") pod \"managed-serviceaccount-addon-agent-6b9d458cff-gkddm\" (UID: \"c8f05b6f-e392-41e6-8041-5c973696dbb5\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-6b9d458cff-gkddm" Feb 17 12:47:04.414376 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.414336 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgz9n\" (UniqueName: \"kubernetes.io/projected/c4816a20-7b75-4aa1-a0f3-b366e10dba38-kube-api-access-fgz9n\") pod \"cluster-proxy-proxy-agent-784bf8c68f-6mtl9\" (UID: \"c4816a20-7b75-4aa1-a0f3-b366e10dba38\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9" Feb 17 12:47:04.414376 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.414360 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed10247d-2496-4512-b5e3-24cf0aa60754-cert\") pod \"ingress-canary-6hddw\" (UID: \"ed10247d-2496-4512-b5e3-24cf0aa60754\") " pod="openshift-ingress-canary/ingress-canary-6hddw" Feb 17 12:47:04.414768 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.414399 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77smq\" (UniqueName: \"kubernetes.io/projected/51aee5ab-5849-470c-af18-77488ddcced7-kube-api-access-77smq\") pod \"klusterlet-addon-workmgr-5c97f697b9-dxl5x\" (UID: \"51aee5ab-5849-470c-af18-77488ddcced7\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5c97f697b9-dxl5x" Feb 17 12:47:04.414768 
Feb 17 12:47:04.414768 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.414450 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hub\" (UniqueName: \"kubernetes.io/secret/c4816a20-7b75-4aa1-a0f3-b366e10dba38-hub\") pod \"cluster-proxy-proxy-agent-784bf8c68f-6mtl9\" (UID: \"c4816a20-7b75-4aa1-a0f3-b366e10dba38\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9"
Feb 17 12:47:04.414768 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.414510 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-bound-sa-token\") pod \"image-registry-799fd54bdd-lm8dz\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") " pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz"
Feb 17 12:47:04.467748 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.467721 2562 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-p2fjr"]
Feb 17 12:47:04.470927 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.470909 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-p2fjr"
Feb 17 12:47:04.473408 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.473392 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Feb 17 12:47:04.473408 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.473394 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-944wh\""
Feb 17 12:47:04.473568 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.473413 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Feb 17 12:47:04.478049 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.478031 2562 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-p2fjr"]
Feb 17 12:47:04.515737 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.515718 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed10247d-2496-4512-b5e3-24cf0aa60754-cert\") pod \"ingress-canary-6hddw\" (UID: \"ed10247d-2496-4512-b5e3-24cf0aa60754\") " pod="openshift-ingress-canary/ingress-canary-6hddw"
Feb 17 12:47:04.515828 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.515748 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/773c39a0-9562-40a2-a21c-4ae429c7b782-metrics-tls\") pod \"dns-default-p2fjr\" (UID: \"773c39a0-9562-40a2-a21c-4ae429c7b782\") " pod="openshift-dns/dns-default-p2fjr"
Feb 17 12:47:04.515828 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.515766 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-77smq\" (UniqueName: \"kubernetes.io/projected/51aee5ab-5849-470c-af18-77488ddcced7-kube-api-access-77smq\") pod \"klusterlet-addon-workmgr-5c97f697b9-dxl5x\" (UID: \"51aee5ab-5849-470c-af18-77488ddcced7\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5c97f697b9-dxl5x"
\"kubernetes.io/projected/51aee5ab-5849-470c-af18-77488ddcced7-kube-api-access-77smq\") pod \"klusterlet-addon-workmgr-5c97f697b9-dxl5x\" (UID: \"51aee5ab-5849-470c-af18-77488ddcced7\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5c97f697b9-dxl5x" Feb 17 12:47:04.515828 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.515785 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hub\" (UniqueName: \"kubernetes.io/secret/c4816a20-7b75-4aa1-a0f3-b366e10dba38-hub\") pod \"cluster-proxy-proxy-agent-784bf8c68f-6mtl9\" (UID: \"c4816a20-7b75-4aa1-a0f3-b366e10dba38\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9" Feb 17 12:47:04.515828 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.515803 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-bound-sa-token\") pod \"image-registry-799fd54bdd-lm8dz\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") " pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz" Feb 17 12:47:04.516022 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:04.515857 2562 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Feb 17 12:47:04.516022 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:04.515914 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed10247d-2496-4512-b5e3-24cf0aa60754-cert podName:ed10247d-2496-4512-b5e3-24cf0aa60754 nodeName:}" failed. No retries permitted until 2026-02-17 12:47:05.015895694 +0000 UTC m=+55.990659074 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed10247d-2496-4512-b5e3-24cf0aa60754-cert") pod "ingress-canary-6hddw" (UID: "ed10247d-2496-4512-b5e3-24cf0aa60754") : secret "canary-serving-cert" not found Feb 17 12:47:04.516022 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.515953 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/773c39a0-9562-40a2-a21c-4ae429c7b782-config-volume\") pod \"dns-default-p2fjr\" (UID: \"773c39a0-9562-40a2-a21c-4ae429c7b782\") " pod="openshift-dns/dns-default-p2fjr" Feb 17 12:47:04.516022 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.515987 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5e0da837-f776-4a61-a361-e3a2c5f4a750-ca-trust-extracted\") pod \"image-registry-799fd54bdd-lm8dz\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") " pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz" Feb 17 12:47:04.516022 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.516013 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/51aee5ab-5849-470c-af18-77488ddcced7-tmp\") pod \"klusterlet-addon-workmgr-5c97f697b9-dxl5x\" (UID: \"51aee5ab-5849-470c-af18-77488ddcced7\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5c97f697b9-dxl5x" Feb 17 12:47:04.516175 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.516043 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-private-configuration\" (UniqueName: 
\"kubernetes.io/secret/5e0da837-f776-4a61-a361-e3a2c5f4a750-image-registry-private-configuration\") pod \"image-registry-799fd54bdd-lm8dz\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") " pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz" Feb 17 12:47:04.516175 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.516117 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/51aee5ab-5849-470c-af18-77488ddcced7-klusterlet-config\") pod \"klusterlet-addon-workmgr-5c97f697b9-dxl5x\" (UID: \"51aee5ab-5849-470c-af18-77488ddcced7\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5c97f697b9-dxl5x" Feb 17 12:47:04.516175 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.516142 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-proxy-server-cert\" (UniqueName: \"kubernetes.io/secret/c4816a20-7b75-4aa1-a0f3-b366e10dba38-service-proxy-server-cert\") pod \"cluster-proxy-proxy-agent-784bf8c68f-6mtl9\" (UID: \"c4816a20-7b75-4aa1-a0f3-b366e10dba38\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9" Feb 17 12:47:04.516175 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.516164 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5e0da837-f776-4a61-a361-e3a2c5f4a750-trusted-ca\") pod \"image-registry-799fd54bdd-lm8dz\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") " pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz" Feb 17 12:47:04.516349 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.516180 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5e0da837-f776-4a61-a361-e3a2c5f4a750-installation-pull-secrets\") pod \"image-registry-799fd54bdd-lm8dz\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") " pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz" Feb 17 12:47:04.516349 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.516272 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ocpservice-ca\" (UniqueName: \"kubernetes.io/configmap/c4816a20-7b75-4aa1-a0f3-b366e10dba38-ocpservice-ca\") pod \"cluster-proxy-proxy-agent-784bf8c68f-6mtl9\" (UID: \"c4816a20-7b75-4aa1-a0f3-b366e10dba38\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9" Feb 17 12:47:04.516349 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.516299 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vdb7b\" (UniqueName: \"kubernetes.io/projected/ed10247d-2496-4512-b5e3-24cf0aa60754-kube-api-access-vdb7b\") pod \"ingress-canary-6hddw\" (UID: \"ed10247d-2496-4512-b5e3-24cf0aa60754\") " pod="openshift-ingress-canary/ingress-canary-6hddw" Feb 17 12:47:04.516349 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.516321 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggwfz\" (UniqueName: \"kubernetes.io/projected/773c39a0-9562-40a2-a21c-4ae429c7b782-kube-api-access-ggwfz\") pod \"dns-default-p2fjr\" (UID: \"773c39a0-9562-40a2-a21c-4ae429c7b782\") " pod="openshift-dns/dns-default-p2fjr" Feb 17 12:47:04.516349 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.516344 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" 
(UniqueName: \"kubernetes.io/configmap/ebe280b5-2859-4a11-b980-08a9489a08c6-nginx-conf\") pod \"networking-console-plugin-5b897fd5fc-9ppd9\" (UID: \"ebe280b5-2859-4a11-b980-08a9489a08c6\") " pod="openshift-network-console/networking-console-plugin-5b897fd5fc-9ppd9" Feb 17 12:47:04.516611 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.516369 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/ebe280b5-2859-4a11-b980-08a9489a08c6-networking-console-plugin-cert\") pod \"networking-console-plugin-5b897fd5fc-9ppd9\" (UID: \"ebe280b5-2859-4a11-b980-08a9489a08c6\") " pod="openshift-network-console/networking-console-plugin-5b897fd5fc-9ppd9" Feb 17 12:47:04.516611 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.516396 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca\" (UniqueName: \"kubernetes.io/secret/c4816a20-7b75-4aa1-a0f3-b366e10dba38-ca\") pod \"cluster-proxy-proxy-agent-784bf8c68f-6mtl9\" (UID: \"c4816a20-7b75-4aa1-a0f3-b366e10dba38\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9" Feb 17 12:47:04.516611 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.516444 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/c4816a20-7b75-4aa1-a0f3-b366e10dba38-hub-kubeconfig\") pod \"cluster-proxy-proxy-agent-784bf8c68f-6mtl9\" (UID: \"c4816a20-7b75-4aa1-a0f3-b366e10dba38\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9" Feb 17 12:47:04.516611 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.516497 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zlvwv\" (UniqueName: \"kubernetes.io/projected/c8f05b6f-e392-41e6-8041-5c973696dbb5-kube-api-access-zlvwv\") pod \"managed-serviceaccount-addon-agent-6b9d458cff-gkddm\" (UID: \"c8f05b6f-e392-41e6-8041-5c973696dbb5\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-6b9d458cff-gkddm" Feb 17 12:47:04.516611 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.516526 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fgz9n\" (UniqueName: \"kubernetes.io/projected/c4816a20-7b75-4aa1-a0f3-b366e10dba38-kube-api-access-fgz9n\") pod \"cluster-proxy-proxy-agent-784bf8c68f-6mtl9\" (UID: \"c4816a20-7b75-4aa1-a0f3-b366e10dba38\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9" Feb 17 12:47:04.516611 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.516563 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/51aee5ab-5849-470c-af18-77488ddcced7-tmp\") pod \"klusterlet-addon-workmgr-5c97f697b9-dxl5x\" (UID: \"51aee5ab-5849-470c-af18-77488ddcced7\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5c97f697b9-dxl5x" Feb 17 12:47:04.516611 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.516572 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-66czm\" (UniqueName: \"kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-kube-api-access-66czm\") pod \"image-registry-799fd54bdd-lm8dz\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") " pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz" Feb 17 12:47:04.516611 ip-10-0-132-113 kubenswrapper[2562]: E0217 
Feb 17 12:47:04.516984 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.516615 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/773c39a0-9562-40a2-a21c-4ae429c7b782-tmp-dir\") pod \"dns-default-p2fjr\" (UID: \"773c39a0-9562-40a2-a21c-4ae429c7b782\") " pod="openshift-dns/dns-default-p2fjr"
Feb 17 12:47:04.516984 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:04.516714 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebe280b5-2859-4a11-b980-08a9489a08c6-networking-console-plugin-cert podName:ebe280b5-2859-4a11-b980-08a9489a08c6 nodeName:}" failed. No retries permitted until 2026-02-17 12:47:05.016623373 +0000 UTC m=+55.991386748 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/ebe280b5-2859-4a11-b980-08a9489a08c6-networking-console-plugin-cert") pod "networking-console-plugin-5b897fd5fc-9ppd9" (UID: "ebe280b5-2859-4a11-b980-08a9489a08c6") : secret "networking-console-plugin-cert" not found
Feb 17 12:47:04.517108 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.517049 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ocpservice-ca\" (UniqueName: \"kubernetes.io/configmap/c4816a20-7b75-4aa1-a0f3-b366e10dba38-ocpservice-ca\") pod \"cluster-proxy-proxy-agent-784bf8c68f-6mtl9\" (UID: \"c4816a20-7b75-4aa1-a0f3-b366e10dba38\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9"
Feb 17 12:47:04.517108 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.517061 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-tls\") pod \"image-registry-799fd54bdd-lm8dz\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") " pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz"
Feb 17 12:47:04.517198 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.517111 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-certificates\") pod \"image-registry-799fd54bdd-lm8dz\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") " pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz"
Feb 17 12:47:04.517198 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.517129 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5e0da837-f776-4a61-a361-e3a2c5f4a750-trusted-ca\") pod \"image-registry-799fd54bdd-lm8dz\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") " pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz"
Feb 17 12:47:04.517198 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:04.517150 2562 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found
Feb 17 12:47:04.517198 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:04.517165 2562 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-799fd54bdd-lm8dz: secret "image-registry-tls" not found
Feb 17 12:47:04.517393 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:04.517224 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-tls podName:5e0da837-f776-4a61-a361-e3a2c5f4a750 nodeName:}" failed. No retries permitted until 2026-02-17 12:47:05.017209362 +0000 UTC m=+55.991972736 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-tls") pod "image-registry-799fd54bdd-lm8dz" (UID: "5e0da837-f776-4a61-a361-e3a2c5f4a750") : secret "image-registry-tls" not found
Feb 17 12:47:04.517393 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.517149 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/c8f05b6f-e392-41e6-8041-5c973696dbb5-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-6b9d458cff-gkddm\" (UID: \"c8f05b6f-e392-41e6-8041-5c973696dbb5\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-6b9d458cff-gkddm"
Feb 17 12:47:04.517495 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.516369 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5e0da837-f776-4a61-a361-e3a2c5f4a750-ca-trust-extracted\") pod \"image-registry-799fd54bdd-lm8dz\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") " pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz"
Feb 17 12:47:04.518049 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.517953 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ebe280b5-2859-4a11-b980-08a9489a08c6-nginx-conf\") pod \"networking-console-plugin-5b897fd5fc-9ppd9\" (UID: \"ebe280b5-2859-4a11-b980-08a9489a08c6\") " pod="openshift-network-console/networking-console-plugin-5b897fd5fc-9ppd9"
Feb 17 12:47:04.518873 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.518851 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-certificates\") pod \"image-registry-799fd54bdd-lm8dz\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") " pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz"
Feb 17 12:47:04.521606 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.521580 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5e0da837-f776-4a61-a361-e3a2c5f4a750-installation-pull-secrets\") pod \"image-registry-799fd54bdd-lm8dz\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") " pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz"
Feb 17 12:47:04.521692 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.521605 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/5e0da837-f776-4a61-a361-e3a2c5f4a750-image-registry-private-configuration\") pod \"image-registry-799fd54bdd-lm8dz\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") " pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz"
Feb 17 12:47:04.521692 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.521608 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-proxy-server-cert\" (UniqueName: \"kubernetes.io/secret/c4816a20-7b75-4aa1-a0f3-b366e10dba38-service-proxy-server-cert\") pod \"cluster-proxy-proxy-agent-784bf8c68f-6mtl9\" (UID: \"c4816a20-7b75-4aa1-a0f3-b366e10dba38\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9"
\"kubernetes.io/secret/c4816a20-7b75-4aa1-a0f3-b366e10dba38-service-proxy-server-cert\") pod \"cluster-proxy-proxy-agent-784bf8c68f-6mtl9\" (UID: \"c4816a20-7b75-4aa1-a0f3-b366e10dba38\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9" Feb 17 12:47:04.521692 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.521639 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hub\" (UniqueName: \"kubernetes.io/secret/c4816a20-7b75-4aa1-a0f3-b366e10dba38-hub\") pod \"cluster-proxy-proxy-agent-784bf8c68f-6mtl9\" (UID: \"c4816a20-7b75-4aa1-a0f3-b366e10dba38\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9" Feb 17 12:47:04.521692 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.521669 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca\" (UniqueName: \"kubernetes.io/secret/c4816a20-7b75-4aa1-a0f3-b366e10dba38-ca\") pod \"cluster-proxy-proxy-agent-784bf8c68f-6mtl9\" (UID: \"c4816a20-7b75-4aa1-a0f3-b366e10dba38\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9" Feb 17 12:47:04.521912 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.521822 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/51aee5ab-5849-470c-af18-77488ddcced7-klusterlet-config\") pod \"klusterlet-addon-workmgr-5c97f697b9-dxl5x\" (UID: \"51aee5ab-5849-470c-af18-77488ddcced7\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5c97f697b9-dxl5x" Feb 17 12:47:04.522080 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.522059 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/c8f05b6f-e392-41e6-8041-5c973696dbb5-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-6b9d458cff-gkddm\" (UID: \"c8f05b6f-e392-41e6-8041-5c973696dbb5\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-6b9d458cff-gkddm" Feb 17 12:47:04.522353 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.522334 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/c4816a20-7b75-4aa1-a0f3-b366e10dba38-hub-kubeconfig\") pod \"cluster-proxy-proxy-agent-784bf8c68f-6mtl9\" (UID: \"c4816a20-7b75-4aa1-a0f3-b366e10dba38\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9" Feb 17 12:47:04.525138 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.525046 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-77smq\" (UniqueName: \"kubernetes.io/projected/51aee5ab-5849-470c-af18-77488ddcced7-kube-api-access-77smq\") pod \"klusterlet-addon-workmgr-5c97f697b9-dxl5x\" (UID: \"51aee5ab-5849-470c-af18-77488ddcced7\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5c97f697b9-dxl5x" Feb 17 12:47:04.525138 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.525092 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlvwv\" (UniqueName: \"kubernetes.io/projected/c8f05b6f-e392-41e6-8041-5c973696dbb5-kube-api-access-zlvwv\") pod \"managed-serviceaccount-addon-agent-6b9d458cff-gkddm\" (UID: \"c8f05b6f-e392-41e6-8041-5c973696dbb5\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-6b9d458cff-gkddm" Feb 17 12:47:04.525298 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.525145 2562 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-bound-sa-token\") pod \"image-registry-799fd54bdd-lm8dz\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") " pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz" Feb 17 12:47:04.525963 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.525938 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgz9n\" (UniqueName: \"kubernetes.io/projected/c4816a20-7b75-4aa1-a0f3-b366e10dba38-kube-api-access-fgz9n\") pod \"cluster-proxy-proxy-agent-784bf8c68f-6mtl9\" (UID: \"c4816a20-7b75-4aa1-a0f3-b366e10dba38\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9" Feb 17 12:47:04.526036 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.525966 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-66czm\" (UniqueName: \"kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-kube-api-access-66czm\") pod \"image-registry-799fd54bdd-lm8dz\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") " pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz" Feb 17 12:47:04.526461 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.526438 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdb7b\" (UniqueName: \"kubernetes.io/projected/ed10247d-2496-4512-b5e3-24cf0aa60754-kube-api-access-vdb7b\") pod \"ingress-canary-6hddw\" (UID: \"ed10247d-2496-4512-b5e3-24cf0aa60754\") " pod="openshift-ingress-canary/ingress-canary-6hddw" Feb 17 12:47:04.537507 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.537491 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j6r5z" Feb 17 12:47:04.537607 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.537511 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-xk97p" Feb 17 12:47:04.537607 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.537491 2562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v7bfv" Feb 17 12:47:04.540311 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.540162 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-b7dtp\"" Feb 17 12:47:04.540311 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.540179 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Feb 17 12:47:04.540311 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.540213 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-cgwh6\"" Feb 17 12:47:04.540311 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.540215 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"original-pull-secret\"" Feb 17 12:47:04.540311 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.540215 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Feb 17 12:47:04.540311 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.540259 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Feb 17 12:47:04.618290 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.618232 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/773c39a0-9562-40a2-a21c-4ae429c7b782-metrics-tls\") pod \"dns-default-p2fjr\" (UID: \"773c39a0-9562-40a2-a21c-4ae429c7b782\") " pod="openshift-dns/dns-default-p2fjr" Feb 17 12:47:04.618379 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.618296 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/773c39a0-9562-40a2-a21c-4ae429c7b782-config-volume\") pod \"dns-default-p2fjr\" (UID: \"773c39a0-9562-40a2-a21c-4ae429c7b782\") " pod="openshift-dns/dns-default-p2fjr" Feb 17 12:47:04.618379 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.618350 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ggwfz\" (UniqueName: \"kubernetes.io/projected/773c39a0-9562-40a2-a21c-4ae429c7b782-kube-api-access-ggwfz\") pod \"dns-default-p2fjr\" (UID: \"773c39a0-9562-40a2-a21c-4ae429c7b782\") " pod="openshift-dns/dns-default-p2fjr" Feb 17 12:47:04.618502 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.618443 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/773c39a0-9562-40a2-a21c-4ae429c7b782-tmp-dir\") pod \"dns-default-p2fjr\" (UID: \"773c39a0-9562-40a2-a21c-4ae429c7b782\") " pod="openshift-dns/dns-default-p2fjr" Feb 17 12:47:04.618968 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:04.618951 2562 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Feb 17 12:47:04.619046 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:04.619014 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/773c39a0-9562-40a2-a21c-4ae429c7b782-metrics-tls podName:773c39a0-9562-40a2-a21c-4ae429c7b782 nodeName:}" failed. No retries permitted until 2026-02-17 12:47:05.118992227 +0000 UTC m=+56.093755615 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/773c39a0-9562-40a2-a21c-4ae429c7b782-metrics-tls") pod "dns-default-p2fjr" (UID: "773c39a0-9562-40a2-a21c-4ae429c7b782") : secret "dns-default-metrics-tls" not found Feb 17 12:47:04.619721 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.619700 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/773c39a0-9562-40a2-a21c-4ae429c7b782-config-volume\") pod \"dns-default-p2fjr\" (UID: \"773c39a0-9562-40a2-a21c-4ae429c7b782\") " pod="openshift-dns/dns-default-p2fjr" Feb 17 12:47:04.620198 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.620172 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/773c39a0-9562-40a2-a21c-4ae429c7b782-tmp-dir\") pod \"dns-default-p2fjr\" (UID: \"773c39a0-9562-40a2-a21c-4ae429c7b782\") " pod="openshift-dns/dns-default-p2fjr" Feb 17 12:47:04.626806 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.626788 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggwfz\" (UniqueName: \"kubernetes.io/projected/773c39a0-9562-40a2-a21c-4ae429c7b782-kube-api-access-ggwfz\") pod \"dns-default-p2fjr\" (UID: \"773c39a0-9562-40a2-a21c-4ae429c7b782\") " pod="openshift-dns/dns-default-p2fjr" Feb 17 12:47:04.684824 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.684800 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5c97f697b9-dxl5x" Feb 17 12:47:04.699100 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.699082 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-6b9d458cff-gkddm" Feb 17 12:47:04.704673 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.704646 2562 util.go:30] "No sandbox for pod can be found. 
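The MountVolume.SetUp failures above all have the same shape: the pod spec references a Secret (dns-default-metrics-tls, networking-console-plugin-cert, image-registry-tls, canary-serving-cert) that does not exist in the API yet, typically because the controller that populates it has not caught up after the node restart. A quick way to confirm a given secret is genuinely absent is to query the same API the kubelet does; a minimal client-go sketch, with the kubeconfig path as a placeholder:

package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is a placeholder; error handling kept minimal.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// One of the secrets the kubelet reports as missing above.
	_, err = cs.CoreV1().Secrets("openshift-dns").Get(
		context.TODO(), "dns-default-metrics-tls", metav1.GetOptions{})
	switch {
	case apierrors.IsNotFound(err):
		fmt.Println("secret is genuinely absent; the kubelet will keep retrying")
	case err != nil:
		panic(err)
	default:
		fmt.Println("secret exists; the mount should succeed on the next retry")
	}
}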
Feb 17 12:47:04.704673 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.704646 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9"
Feb 17 12:47:04.881003 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:04.880936 2562 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-6b9d458cff-gkddm"]
Feb 17 12:47:04.884326 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:47:04.884295 2562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8f05b6f_e392_41e6_8041_5c973696dbb5.slice/crio-29481bd9d9357237221ec4ff1aa6deeb75d93de6cf31ff41bcdbd09811d47b4a WatchSource:0}: Error finding container 29481bd9d9357237221ec4ff1aa6deeb75d93de6cf31ff41bcdbd09811d47b4a: Status 404 returned error can't find the container with id 29481bd9d9357237221ec4ff1aa6deeb75d93de6cf31ff41bcdbd09811d47b4a
Feb 17 12:47:05.021955 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:05.021923 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/ebe280b5-2859-4a11-b980-08a9489a08c6-networking-console-plugin-cert\") pod \"networking-console-plugin-5b897fd5fc-9ppd9\" (UID: \"ebe280b5-2859-4a11-b980-08a9489a08c6\") " pod="openshift-network-console/networking-console-plugin-5b897fd5fc-9ppd9"
Feb 17 12:47:05.022072 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:05.021997 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-tls\") pod \"image-registry-799fd54bdd-lm8dz\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") " pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz"
Feb 17 12:47:05.022072 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:05.022045 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed10247d-2496-4512-b5e3-24cf0aa60754-cert\") pod \"ingress-canary-6hddw\" (UID: \"ed10247d-2496-4512-b5e3-24cf0aa60754\") " pod="openshift-ingress-canary/ingress-canary-6hddw"
Feb 17 12:47:05.022186 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:05.022066 2562 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Feb 17 12:47:05.022186 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:05.022122 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebe280b5-2859-4a11-b980-08a9489a08c6-networking-console-plugin-cert podName:ebe280b5-2859-4a11-b980-08a9489a08c6 nodeName:}" failed. No retries permitted until 2026-02-17 12:47:06.022106325 +0000 UTC m=+56.996869702 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/ebe280b5-2859-4a11-b980-08a9489a08c6-networking-console-plugin-cert") pod "networking-console-plugin-5b897fd5fc-9ppd9" (UID: "ebe280b5-2859-4a11-b980-08a9489a08c6") : secret "networking-console-plugin-cert" not found
Feb 17 12:47:05.022186 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:05.022156 2562 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found
Feb 17 12:47:05.022186 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:05.022178 2562 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-799fd54bdd-lm8dz: secret "image-registry-tls" not found
Feb 17 12:47:05.022336 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:05.022230 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-tls podName:5e0da837-f776-4a61-a361-e3a2c5f4a750 nodeName:}" failed. No retries permitted until 2026-02-17 12:47:06.022217611 +0000 UTC m=+56.996980985 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-tls") pod "image-registry-799fd54bdd-lm8dz" (UID: "5e0da837-f776-4a61-a361-e3a2c5f4a750") : secret "image-registry-tls" not found
Feb 17 12:47:05.022336 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:05.022159 2562 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Feb 17 12:47:05.022336 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:05.022258 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed10247d-2496-4512-b5e3-24cf0aa60754-cert podName:ed10247d-2496-4512-b5e3-24cf0aa60754 nodeName:}" failed. No retries permitted until 2026-02-17 12:47:06.022251371 +0000 UTC m=+56.997014743 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed10247d-2496-4512-b5e3-24cf0aa60754-cert") pod "ingress-canary-6hddw" (UID: "ed10247d-2496-4512-b5e3-24cf0aa60754") : secret "canary-serving-cert" not found
Feb 17 12:47:05.088760 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:05.088736 2562 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-5c97f697b9-dxl5x"]
Feb 17 12:47:05.091420 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:05.091398 2562 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9"]
Feb 17 12:47:05.091563 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:47:05.091544 2562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51aee5ab_5849_470c_af18_77488ddcced7.slice/crio-bc2ea99839d973042b16850ee9079226fad0cf9dde22ceba3728411dc1aa651b WatchSource:0}: Error finding container bc2ea99839d973042b16850ee9079226fad0cf9dde22ceba3728411dc1aa651b: Status 404 returned error can't find the container with id bc2ea99839d973042b16850ee9079226fad0cf9dde22ceba3728411dc1aa651b
Feb 17 12:47:05.106153 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:47:05.106130 2562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4816a20_7b75_4aa1_a0f3_b366e10dba38.slice/crio-9bcb2e684793512a1ea20d4d697432fff0f8a728c43471d01dc34082437252a4 WatchSource:0}: Error finding container 9bcb2e684793512a1ea20d4d697432fff0f8a728c43471d01dc34082437252a4: Status 404 returned error can't find the container with id 9bcb2e684793512a1ea20d4d697432fff0f8a728c43471d01dc34082437252a4
Feb 17 12:47:05.123251 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:05.123233 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/773c39a0-9562-40a2-a21c-4ae429c7b782-metrics-tls\") pod \"dns-default-p2fjr\" (UID: \"773c39a0-9562-40a2-a21c-4ae429c7b782\") " pod="openshift-dns/dns-default-p2fjr"
Feb 17 12:47:05.123377 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:05.123361 2562 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found
Feb 17 12:47:05.123419 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:05.123413 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/773c39a0-9562-40a2-a21c-4ae429c7b782-metrics-tls podName:773c39a0-9562-40a2-a21c-4ae429c7b782 nodeName:}" failed. No retries permitted until 2026-02-17 12:47:06.123398787 +0000 UTC m=+57.098162161 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/773c39a0-9562-40a2-a21c-4ae429c7b782-metrics-tls") pod "dns-default-p2fjr" (UID: "773c39a0-9562-40a2-a21c-4ae429c7b782") : secret "dns-default-metrics-tls" not found
Feb 17 12:47:05.725563 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:05.725503 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9" event={"ID":"c4816a20-7b75-4aa1-a0f3-b366e10dba38","Type":"ContainerStarted","Data":"9bcb2e684793512a1ea20d4d697432fff0f8a728c43471d01dc34082437252a4"}
Feb 17 12:47:05.726989 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:05.726958 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-6b9d458cff-gkddm" event={"ID":"c8f05b6f-e392-41e6-8041-5c973696dbb5","Type":"ContainerStarted","Data":"29481bd9d9357237221ec4ff1aa6deeb75d93de6cf31ff41bcdbd09811d47b4a"}
Feb 17 12:47:05.729235 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:05.728995 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5c97f697b9-dxl5x" event={"ID":"51aee5ab-5849-470c-af18-77488ddcced7","Type":"ContainerStarted","Data":"bc2ea99839d973042b16850ee9079226fad0cf9dde22ceba3728411dc1aa651b"}
Feb 17 12:47:06.032564 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:06.031495 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-tls\") pod \"image-registry-799fd54bdd-lm8dz\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") " pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz"
Feb 17 12:47:06.032564 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:06.031561 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed10247d-2496-4512-b5e3-24cf0aa60754-cert\") pod \"ingress-canary-6hddw\" (UID: \"ed10247d-2496-4512-b5e3-24cf0aa60754\") " pod="openshift-ingress-canary/ingress-canary-6hddw"
Feb 17 12:47:06.032564 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:06.031638 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/ebe280b5-2859-4a11-b980-08a9489a08c6-networking-console-plugin-cert\") pod \"networking-console-plugin-5b897fd5fc-9ppd9\" (UID: \"ebe280b5-2859-4a11-b980-08a9489a08c6\") " pod="openshift-network-console/networking-console-plugin-5b897fd5fc-9ppd9"
Feb 17 12:47:06.032564 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:06.031787 2562 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Feb 17 12:47:06.032564 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:06.031853 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebe280b5-2859-4a11-b980-08a9489a08c6-networking-console-plugin-cert podName:ebe280b5-2859-4a11-b980-08a9489a08c6 nodeName:}" failed. No retries permitted until 2026-02-17 12:47:08.031832411 +0000 UTC m=+59.006595790 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/ebe280b5-2859-4a11-b980-08a9489a08c6-networking-console-plugin-cert") pod "networking-console-plugin-5b897fd5fc-9ppd9" (UID: "ebe280b5-2859-4a11-b980-08a9489a08c6") : secret "networking-console-plugin-cert" not found
Feb 17 12:47:06.032564 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:06.032246 2562 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found
Feb 17 12:47:06.032564 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:06.032261 2562 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-799fd54bdd-lm8dz: secret "image-registry-tls" not found
Feb 17 12:47:06.032564 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:06.032305 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-tls podName:5e0da837-f776-4a61-a361-e3a2c5f4a750 nodeName:}" failed. No retries permitted until 2026-02-17 12:47:08.032290011 +0000 UTC m=+59.007053399 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-tls") pod "image-registry-799fd54bdd-lm8dz" (UID: "5e0da837-f776-4a61-a361-e3a2c5f4a750") : secret "image-registry-tls" not found
Feb 17 12:47:06.032564 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:06.032368 2562 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Feb 17 12:47:06.032564 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:06.032430 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed10247d-2496-4512-b5e3-24cf0aa60754-cert podName:ed10247d-2496-4512-b5e3-24cf0aa60754 nodeName:}" failed. No retries permitted until 2026-02-17 12:47:08.03238859 +0000 UTC m=+59.007151972 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed10247d-2496-4512-b5e3-24cf0aa60754-cert") pod "ingress-canary-6hddw" (UID: "ed10247d-2496-4512-b5e3-24cf0aa60754") : secret "canary-serving-cert" not found
Feb 17 12:47:06.132930 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:06.132896 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/773c39a0-9562-40a2-a21c-4ae429c7b782-metrics-tls\") pod \"dns-default-p2fjr\" (UID: \"773c39a0-9562-40a2-a21c-4ae429c7b782\") " pod="openshift-dns/dns-default-p2fjr"
Feb 17 12:47:06.133133 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:06.133087 2562 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found
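The m=+59.107898619-style suffix on the retry deadlines is not part of the wall-clock timestamp: it is Go's monotonic clock reading, which time.Time's String method appends whenever the value still carries one, and in practice it counts seconds since the logging process started. A small demonstration of where the format comes from:

package main

import (
	"fmt"
	"time"
)

func main() {
	// time.Now() carries a monotonic reading, so String() appends "m=+...".
	t := time.Now()
	fmt.Println(t) // e.g. "2026-02-17 12:47:08.031 +0000 UTC m=+0.000000042"

	// Round(0) strips the monotonic reading, and the suffix disappears.
	fmt.Println(t.Round(0))
}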
Feb 17 12:47:06.133209 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:06.133154 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/773c39a0-9562-40a2-a21c-4ae429c7b782-metrics-tls podName:773c39a0-9562-40a2-a21c-4ae429c7b782 nodeName:}" failed. No retries permitted until 2026-02-17 12:47:08.133135231 +0000 UTC m=+59.107898619 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/773c39a0-9562-40a2-a21c-4ae429c7b782-metrics-tls") pod "dns-default-p2fjr" (UID: "773c39a0-9562-40a2-a21c-4ae429c7b782") : secret "dns-default-metrics-tls" not found
Feb 17 12:47:07.685204 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:07.685176 2562 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mhj7q"
Feb 17 12:47:08.051933 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:08.051892 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed10247d-2496-4512-b5e3-24cf0aa60754-cert\") pod \"ingress-canary-6hddw\" (UID: \"ed10247d-2496-4512-b5e3-24cf0aa60754\") " pod="openshift-ingress-canary/ingress-canary-6hddw"
Feb 17 12:47:08.052204 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:08.052012 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/ebe280b5-2859-4a11-b980-08a9489a08c6-networking-console-plugin-cert\") pod \"networking-console-plugin-5b897fd5fc-9ppd9\" (UID: \"ebe280b5-2859-4a11-b980-08a9489a08c6\") " pod="openshift-network-console/networking-console-plugin-5b897fd5fc-9ppd9"
Feb 17 12:47:08.052204 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:08.052075 2562 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Feb 17 12:47:08.052204 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:08.052090 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-tls\") pod \"image-registry-799fd54bdd-lm8dz\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") " pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz"
Feb 17 12:47:08.052204 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:08.052146 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed10247d-2496-4512-b5e3-24cf0aa60754-cert podName:ed10247d-2496-4512-b5e3-24cf0aa60754 nodeName:}" failed. No retries permitted until 2026-02-17 12:47:12.052124814 +0000 UTC m=+63.026888191 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed10247d-2496-4512-b5e3-24cf0aa60754-cert") pod "ingress-canary-6hddw" (UID: "ed10247d-2496-4512-b5e3-24cf0aa60754") : secret "canary-serving-cert" not found
Feb 17 12:47:08.052204 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:08.052191 2562 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Feb 17 12:47:08.052448 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:08.052254 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebe280b5-2859-4a11-b980-08a9489a08c6-networking-console-plugin-cert podName:ebe280b5-2859-4a11-b980-08a9489a08c6 nodeName:}" failed. No retries permitted until 2026-02-17 12:47:12.05223557 +0000 UTC m=+63.026998956 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/ebe280b5-2859-4a11-b980-08a9489a08c6-networking-console-plugin-cert") pod "networking-console-plugin-5b897fd5fc-9ppd9" (UID: "ebe280b5-2859-4a11-b980-08a9489a08c6") : secret "networking-console-plugin-cert" not found
Feb 17 12:47:08.052448 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:08.052196 2562 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found
Feb 17 12:47:08.052448 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:08.052279 2562 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-799fd54bdd-lm8dz: secret "image-registry-tls" not found
Feb 17 12:47:08.052448 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:08.052315 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-tls podName:5e0da837-f776-4a61-a361-e3a2c5f4a750 nodeName:}" failed. No retries permitted until 2026-02-17 12:47:12.052300864 +0000 UTC m=+63.027064270 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-tls") pod "image-registry-799fd54bdd-lm8dz" (UID: "5e0da837-f776-4a61-a361-e3a2c5f4a750") : secret "image-registry-tls" not found
Feb 17 12:47:08.153931 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:08.153495 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/773c39a0-9562-40a2-a21c-4ae429c7b782-metrics-tls\") pod \"dns-default-p2fjr\" (UID: \"773c39a0-9562-40a2-a21c-4ae429c7b782\") " pod="openshift-dns/dns-default-p2fjr"
Feb 17 12:47:08.153931 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:08.153800 2562 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found
Feb 17 12:47:08.153931 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:08.153896 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/773c39a0-9562-40a2-a21c-4ae429c7b782-metrics-tls podName:773c39a0-9562-40a2-a21c-4ae429c7b782 nodeName:}" failed. No retries permitted until 2026-02-17 12:47:12.153875028 +0000 UTC m=+63.128638414 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/773c39a0-9562-40a2-a21c-4ae429c7b782-metrics-tls") pod "dns-default-p2fjr" (UID: "773c39a0-9562-40a2-a21c-4ae429c7b782") : secret "dns-default-metrics-tls" not found
Feb 17 12:47:10.740630 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:10.740595 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9" event={"ID":"c4816a20-7b75-4aa1-a0f3-b366e10dba38","Type":"ContainerStarted","Data":"afe55253333614bbbcb2b09dacb8d4c662cae04495d8750d46a9be52976985d7"}
Feb 17 12:47:10.741938 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:10.741912 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-6b9d458cff-gkddm" event={"ID":"c8f05b6f-e392-41e6-8041-5c973696dbb5","Type":"ContainerStarted","Data":"bada9a0727b5ebcd978500164a18f162442f833c0cc929ea69908f4ce1f9ea68"}
Feb 17 12:47:10.743390 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:10.743363 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5c97f697b9-dxl5x" event={"ID":"51aee5ab-5849-470c-af18-77488ddcced7","Type":"ContainerStarted","Data":"a17c1705c6513cc8d872c514943c321770266fa4414bb85630855e91be0c934d"}
Feb 17 12:47:10.743583 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:10.743565 2562 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5c97f697b9-dxl5x"
Feb 17 12:47:10.745445 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:10.745427 2562 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5c97f697b9-dxl5x"
Feb 17 12:47:10.757487 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:10.757433 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-6b9d458cff-gkddm" podStartSLOduration=25.824149654 podStartE2EDuration="30.75742425s" podCreationTimestamp="2026-02-17 12:46:40 +0000 UTC" firstStartedPulling="2026-02-17 12:47:04.886054992 +0000 UTC m=+55.860818368" lastFinishedPulling="2026-02-17 12:47:09.81932958 +0000 UTC m=+60.794092964" observedRunningTime="2026-02-17 12:47:10.757198522 +0000 UTC m=+61.731961919" watchObservedRunningTime="2026-02-17 12:47:10.75742425 +0000 UTC m=+61.732187644"
Feb 17 12:47:10.775148 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:10.775107 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5c97f697b9-dxl5x" podStartSLOduration=26.028811654 podStartE2EDuration="30.775093204s" podCreationTimestamp="2026-02-17 12:46:40 +0000 UTC" firstStartedPulling="2026-02-17 12:47:05.093569476 +0000 UTC m=+56.068332849" lastFinishedPulling="2026-02-17 12:47:09.839851011 +0000 UTC m=+60.814614399" observedRunningTime="2026-02-17 12:47:10.773875295 +0000 UTC m=+61.748638685" watchObservedRunningTime="2026-02-17 12:47:10.775093204 +0000 UTC m=+61.749856600"
Feb 17 12:47:12.081090 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:12.081057 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed10247d-2496-4512-b5e3-24cf0aa60754-cert\") pod \"ingress-canary-6hddw\" (UID: \"ed10247d-2496-4512-b5e3-24cf0aa60754\") " pod="openshift-ingress-canary/ingress-canary-6hddw"
Feb 17 12:47:12.081409 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:12.081203 2562 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Feb 17 12:47:12.081409 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:12.081220 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/ebe280b5-2859-4a11-b980-08a9489a08c6-networking-console-plugin-cert\") pod \"networking-console-plugin-5b897fd5fc-9ppd9\" (UID: \"ebe280b5-2859-4a11-b980-08a9489a08c6\") " pod="openshift-network-console/networking-console-plugin-5b897fd5fc-9ppd9"
Feb 17 12:47:12.081409 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:12.081279 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed10247d-2496-4512-b5e3-24cf0aa60754-cert podName:ed10247d-2496-4512-b5e3-24cf0aa60754 nodeName:}" failed. No retries permitted until 2026-02-17 12:47:20.081264604 +0000 UTC m=+71.056027981 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed10247d-2496-4512-b5e3-24cf0aa60754-cert") pod "ingress-canary-6hddw" (UID: "ed10247d-2496-4512-b5e3-24cf0aa60754") : secret "canary-serving-cert" not found
Feb 17 12:47:12.081409 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:12.081313 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-tls\") pod \"image-registry-799fd54bdd-lm8dz\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") " pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz"
Feb 17 12:47:12.081409 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:12.081323 2562 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Feb 17 12:47:12.081409 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:12.081379 2562 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found
Feb 17 12:47:12.081409 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:12.081387 2562 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-799fd54bdd-lm8dz: secret "image-registry-tls" not found
Feb 17 12:47:12.081409 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:12.081396 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebe280b5-2859-4a11-b980-08a9489a08c6-networking-console-plugin-cert podName:ebe280b5-2859-4a11-b980-08a9489a08c6 nodeName:}" failed. No retries permitted until 2026-02-17 12:47:20.081382492 +0000 UTC m=+71.056145867 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/ebe280b5-2859-4a11-b980-08a9489a08c6-networking-console-plugin-cert") pod "networking-console-plugin-5b897fd5fc-9ppd9" (UID: "ebe280b5-2859-4a11-b980-08a9489a08c6") : secret "networking-console-plugin-cert" not found
Feb 17 12:47:12.081409 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:12.081410 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-tls podName:5e0da837-f776-4a61-a361-e3a2c5f4a750 nodeName:}" failed. No retries permitted until 2026-02-17 12:47:20.08140224 +0000 UTC m=+71.056165613 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-tls") pod "image-registry-799fd54bdd-lm8dz" (UID: "5e0da837-f776-4a61-a361-e3a2c5f4a750") : secret "image-registry-tls" not found
Feb 17 12:47:12.181722 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:12.181656 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/773c39a0-9562-40a2-a21c-4ae429c7b782-metrics-tls\") pod \"dns-default-p2fjr\" (UID: \"773c39a0-9562-40a2-a21c-4ae429c7b782\") " pod="openshift-dns/dns-default-p2fjr"
Feb 17 12:47:12.181841 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:12.181779 2562 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found
Feb 17 12:47:12.181917 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:12.181845 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/773c39a0-9562-40a2-a21c-4ae429c7b782-metrics-tls podName:773c39a0-9562-40a2-a21c-4ae429c7b782 nodeName:}" failed. No retries permitted until 2026-02-17 12:47:20.181829388 +0000 UTC m=+71.156592766 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/773c39a0-9562-40a2-a21c-4ae429c7b782-metrics-tls") pod "dns-default-p2fjr" (UID: "773c39a0-9562-40a2-a21c-4ae429c7b782") : secret "dns-default-metrics-tls" not found
Feb 17 12:47:12.748769 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:12.748735 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9" event={"ID":"c4816a20-7b75-4aa1-a0f3-b366e10dba38","Type":"ContainerStarted","Data":"8479c3180aadb3e50114be0b10672eee9cd1a909be54f131f57227e54efaaee8"}
Feb 17 12:47:12.748769 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:12.748771 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9" event={"ID":"c4816a20-7b75-4aa1-a0f3-b366e10dba38","Type":"ContainerStarted","Data":"031662104daca5f078088aee0fb32aa68597287334067aca1860ce7ee4481967"}
Feb 17 12:47:12.766836 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:12.766795 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9" podStartSLOduration=26.020849295 podStartE2EDuration="32.766783616s" podCreationTimestamp="2026-02-17 12:46:40 +0000 UTC" firstStartedPulling="2026-02-17 12:47:05.107579261 +0000 UTC m=+56.082342634" lastFinishedPulling="2026-02-17 12:47:11.853513578 +0000 UTC m=+62.828276955" observedRunningTime="2026-02-17 12:47:12.764864837 +0000 UTC m=+63.739628232" watchObservedRunningTime="2026-02-17 12:47:12.766783616 +0000 UTC m=+63.741547011"
Feb 17 12:47:14.296084 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:14.296045 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w9x7z\" (UniqueName: \"kubernetes.io/projected/89c079c8-3da1-4b99-a30a-3d749cf8f842-kube-api-access-w9x7z\") pod \"network-check-target-v7bfv\" (UID: \"89c079c8-3da1-4b99-a30a-3d749cf8f842\") " pod="openshift-network-diagnostics/network-check-target-v7bfv"
Feb 17 12:47:14.299018 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:14.299000 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Feb 17 12:47:14.309925 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:14.309908 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Feb 17 12:47:14.319681 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:14.319654 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9x7z\" (UniqueName: \"kubernetes.io/projected/89c079c8-3da1-4b99-a30a-3d749cf8f842-kube-api-access-w9x7z\") pod \"network-check-target-v7bfv\" (UID: \"89c079c8-3da1-4b99-a30a-3d749cf8f842\") " pod="openshift-network-diagnostics/network-check-target-v7bfv"
Feb 17 12:47:14.396996 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:14.396974 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7bb13ada-375a-497d-aced-02307525f449-metrics-certs\") pod \"network-metrics-daemon-j6r5z\" (UID: \"7bb13ada-375a-497d-aced-02307525f449\") " pod="openshift-multus/network-metrics-daemon-j6r5z"
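The pod_startup_latency_tracker entries above report two figures per pod, and the numbers are consistent with podStartSLOduration being the end-to-end startup time minus the time spent pulling images. That can be checked directly against the monotonic (m=+) readings in the cluster-proxy-proxy-agent entry; this is an inference from the logged values, not a statement about the tracker's implementation:

package main

import "fmt"

func main() {
	// Values from the cluster-proxy-proxy-agent-784bf8c68f-6mtl9 entry above.
	e2e := 32.766783616                 // podStartE2EDuration, in seconds
	pull := 62.828276955 - 56.082342634 // lastFinishedPulling - firstStartedPulling (m=+ readings)

	// Assumed relation: SLO duration excludes image-pull time.
	fmt.Printf("%.9f\n", e2e-pull) // 26.020849295, the logged podStartSLOduration
}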
reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Feb 17 12:47:14.407415 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:14.407401 2562 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 17 12:47:14.407478 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:14.407457 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7bb13ada-375a-497d-aced-02307525f449-metrics-certs podName:7bb13ada-375a-497d-aced-02307525f449 nodeName:}" failed. No retries permitted until 2026-02-17 12:48:18.407442324 +0000 UTC m=+129.382205697 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7bb13ada-375a-497d-aced-02307525f449-metrics-certs") pod "network-metrics-daemon-j6r5z" (UID: "7bb13ada-375a-497d-aced-02307525f449") : secret "metrics-daemon-secret" not found Feb 17 12:47:14.456951 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:14.456930 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-cgwh6\"" Feb 17 12:47:14.464872 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:14.464855 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v7bfv" Feb 17 12:47:14.573561 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:14.573498 2562 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-v7bfv"] Feb 17 12:47:14.577920 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:47:14.577891 2562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89c079c8_3da1_4b99_a30a_3d749cf8f842.slice/crio-a2ffdf05624a80e8db90b4360b0c663dfff66e5a0c4e67b7852736526a54a77a WatchSource:0}: Error finding container a2ffdf05624a80e8db90b4360b0c663dfff66e5a0c4e67b7852736526a54a77a: Status 404 returned error can't find the container with id a2ffdf05624a80e8db90b4360b0c663dfff66e5a0c4e67b7852736526a54a77a Feb 17 12:47:14.753507 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:14.753462 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-v7bfv" event={"ID":"89c079c8-3da1-4b99-a30a-3d749cf8f842","Type":"ContainerStarted","Data":"a2ffdf05624a80e8db90b4360b0c663dfff66e5a0c4e67b7852736526a54a77a"} Feb 17 12:47:17.760169 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:17.760138 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-v7bfv" event={"ID":"89c079c8-3da1-4b99-a30a-3d749cf8f842","Type":"ContainerStarted","Data":"3fd3f6853456a4486f76a416d3ef12da03c58c48df5c668b954108d3b183cc7d"} Feb 17 12:47:17.760493 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:17.760252 2562 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-v7bfv" Feb 17 12:47:17.775104 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:17.775058 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-target-v7bfv" podStartSLOduration=65.801144201 podStartE2EDuration="1m8.775044836s" podCreationTimestamp="2026-02-17 12:46:09 +0000 UTC" firstStartedPulling="2026-02-17 12:47:14.57955045 +0000 UTC m=+65.554313835" lastFinishedPulling="2026-02-17 12:47:17.553451098 +0000 UTC 
m=+68.528214470" observedRunningTime="2026-02-17 12:47:17.774069174 +0000 UTC m=+68.748832571" watchObservedRunningTime="2026-02-17 12:47:17.775044836 +0000 UTC m=+68.749808232" Feb 17 12:47:20.147962 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:20.147907 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/ebe280b5-2859-4a11-b980-08a9489a08c6-networking-console-plugin-cert\") pod \"networking-console-plugin-5b897fd5fc-9ppd9\" (UID: \"ebe280b5-2859-4a11-b980-08a9489a08c6\") " pod="openshift-network-console/networking-console-plugin-5b897fd5fc-9ppd9" Feb 17 12:47:20.148396 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:20.147997 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-tls\") pod \"image-registry-799fd54bdd-lm8dz\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") " pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz" Feb 17 12:47:20.148396 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:20.148034 2562 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Feb 17 12:47:20.148396 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:20.148105 2562 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Feb 17 12:47:20.148396 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:20.148114 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebe280b5-2859-4a11-b980-08a9489a08c6-networking-console-plugin-cert podName:ebe280b5-2859-4a11-b980-08a9489a08c6 nodeName:}" failed. No retries permitted until 2026-02-17 12:47:36.148094981 +0000 UTC m=+87.122858369 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/ebe280b5-2859-4a11-b980-08a9489a08c6-networking-console-plugin-cert") pod "networking-console-plugin-5b897fd5fc-9ppd9" (UID: "ebe280b5-2859-4a11-b980-08a9489a08c6") : secret "networking-console-plugin-cert" not found Feb 17 12:47:20.148396 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:20.148039 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed10247d-2496-4512-b5e3-24cf0aa60754-cert\") pod \"ingress-canary-6hddw\" (UID: \"ed10247d-2496-4512-b5e3-24cf0aa60754\") " pod="openshift-ingress-canary/ingress-canary-6hddw" Feb 17 12:47:20.148396 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:20.148126 2562 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Feb 17 12:47:20.148396 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:20.148144 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed10247d-2496-4512-b5e3-24cf0aa60754-cert podName:ed10247d-2496-4512-b5e3-24cf0aa60754 nodeName:}" failed. No retries permitted until 2026-02-17 12:47:36.148135055 +0000 UTC m=+87.122898442 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed10247d-2496-4512-b5e3-24cf0aa60754-cert") pod "ingress-canary-6hddw" (UID: "ed10247d-2496-4512-b5e3-24cf0aa60754") : secret "canary-serving-cert" not found Feb 17 12:47:20.148396 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:20.148159 2562 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-799fd54bdd-lm8dz: secret "image-registry-tls" not found Feb 17 12:47:20.148396 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:20.148227 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-tls podName:5e0da837-f776-4a61-a361-e3a2c5f4a750 nodeName:}" failed. No retries permitted until 2026-02-17 12:47:36.148211283 +0000 UTC m=+87.122974679 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-tls") pod "image-registry-799fd54bdd-lm8dz" (UID: "5e0da837-f776-4a61-a361-e3a2c5f4a750") : secret "image-registry-tls" not found Feb 17 12:47:20.249111 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:20.249086 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/773c39a0-9562-40a2-a21c-4ae429c7b782-metrics-tls\") pod \"dns-default-p2fjr\" (UID: \"773c39a0-9562-40a2-a21c-4ae429c7b782\") " pod="openshift-dns/dns-default-p2fjr" Feb 17 12:47:20.249239 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:20.249222 2562 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Feb 17 12:47:20.249293 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:20.249284 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/773c39a0-9562-40a2-a21c-4ae429c7b782-metrics-tls podName:773c39a0-9562-40a2-a21c-4ae429c7b782 nodeName:}" failed. No retries permitted until 2026-02-17 12:47:36.249267729 +0000 UTC m=+87.224031120 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/773c39a0-9562-40a2-a21c-4ae429c7b782-metrics-tls") pod "dns-default-p2fjr" (UID: "773c39a0-9562-40a2-a21c-4ae429c7b782") : secret "dns-default-metrics-tls" not found Feb 17 12:47:21.054721 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:21.054668 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/f84c60a8-1cec-4e96-8e96-b52be431e4ed-original-pull-secret\") pod \"global-pull-secret-syncer-xk97p\" (UID: \"f84c60a8-1cec-4e96-8e96-b52be431e4ed\") " pod="kube-system/global-pull-secret-syncer-xk97p" Feb 17 12:47:21.057345 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:21.057328 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"original-pull-secret\"" Feb 17 12:47:21.067497 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:21.067453 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/f84c60a8-1cec-4e96-8e96-b52be431e4ed-original-pull-secret\") pod \"global-pull-secret-syncer-xk97p\" (UID: \"f84c60a8-1cec-4e96-8e96-b52be431e4ed\") " pod="kube-system/global-pull-secret-syncer-xk97p" Feb 17 12:47:21.359080 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:21.359003 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-xk97p" Feb 17 12:47:21.468986 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:21.468956 2562 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-xk97p"] Feb 17 12:47:21.471792 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:47:21.471764 2562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf84c60a8_1cec_4e96_8e96_b52be431e4ed.slice/crio-12ef7cdbce6e8e2801b290eb0242967fc6573a2207be8666f4961060574f26e1 WatchSource:0}: Error finding container 12ef7cdbce6e8e2801b290eb0242967fc6573a2207be8666f4961060574f26e1: Status 404 returned error can't find the container with id 12ef7cdbce6e8e2801b290eb0242967fc6573a2207be8666f4961060574f26e1 Feb 17 12:47:21.771424 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:21.771396 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-xk97p" event={"ID":"f84c60a8-1cec-4e96-8e96-b52be431e4ed","Type":"ContainerStarted","Data":"12ef7cdbce6e8e2801b290eb0242967fc6573a2207be8666f4961060574f26e1"} Feb 17 12:47:25.784616 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:25.784582 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-xk97p" event={"ID":"f84c60a8-1cec-4e96-8e96-b52be431e4ed","Type":"ContainerStarted","Data":"40e332607aa33847cc41b0fa012bf35ad0bce5e55670c26a91162a89293c7f8f"} Feb 17 12:47:25.799444 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:25.799399 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/global-pull-secret-syncer-xk97p" podStartSLOduration=65.437360251 podStartE2EDuration="1m8.79938839s" podCreationTimestamp="2026-02-17 12:46:17 +0000 UTC" firstStartedPulling="2026-02-17 12:47:21.473541987 +0000 UTC m=+72.448305365" lastFinishedPulling="2026-02-17 12:47:24.835570128 +0000 UTC m=+75.810333504" observedRunningTime="2026-02-17 12:47:25.798834503 +0000 UTC m=+76.773597898" watchObservedRunningTime="2026-02-17 12:47:25.79938839 
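The reflector.go "Caches populated" lines mark per-object informer caches finishing their initial list/watch sync; note that original-pull-secret mounts successfully right after its cache populates, while the missing secrets keep failing even with synced caches, because the object simply is not there. A rough client-go equivalent of that populate-then-read sequence; the namespace and secret name are taken from the log, the kubeconfig path is a placeholder, and this is only an analogy to the kubelet's internal per-object reflectors:

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Watch Secrets in kube-system and wait for the initial sync --
	// the moment the kubelet logs as "Caches populated".
	factory := informers.NewSharedInformerFactoryWithOptions(
		cs, 10*time.Minute, informers.WithNamespace("kube-system"))
	secrets := factory.Core().V1().Secrets()
	secrets.Informer() // register the informer before starting the factory

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)

	// Read from the local cache, not the API server.
	s, err := secrets.Lister().Secrets("kube-system").Get("original-pull-secret")
	if err != nil {
		panic(err) // a not-found here mirrors the failing mounts above
	}
	fmt.Println("cached secret:", s.Name)
}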
Feb 17 12:47:25.799444 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:25.799399 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/global-pull-secret-syncer-xk97p" podStartSLOduration=65.437360251 podStartE2EDuration="1m8.79938839s" podCreationTimestamp="2026-02-17 12:46:17 +0000 UTC" firstStartedPulling="2026-02-17 12:47:21.473541987 +0000 UTC m=+72.448305365" lastFinishedPulling="2026-02-17 12:47:24.835570128 +0000 UTC m=+75.810333504" observedRunningTime="2026-02-17 12:47:25.798834503 +0000 UTC m=+76.773597898" watchObservedRunningTime="2026-02-17 12:47:25.79938839 +0000 UTC m=+76.774151820"
Feb 17 12:47:36.170052 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:36.169903 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/ebe280b5-2859-4a11-b980-08a9489a08c6-networking-console-plugin-cert\") pod \"networking-console-plugin-5b897fd5fc-9ppd9\" (UID: \"ebe280b5-2859-4a11-b980-08a9489a08c6\") " pod="openshift-network-console/networking-console-plugin-5b897fd5fc-9ppd9"
Feb 17 12:47:36.170052 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:36.169956 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-tls\") pod \"image-registry-799fd54bdd-lm8dz\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") " pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz"
Feb 17 12:47:36.170052 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:36.169990 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed10247d-2496-4512-b5e3-24cf0aa60754-cert\") pod \"ingress-canary-6hddw\" (UID: \"ed10247d-2496-4512-b5e3-24cf0aa60754\") " pod="openshift-ingress-canary/ingress-canary-6hddw"
Feb 17 12:47:36.170568 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:36.170056 2562 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Feb 17 12:47:36.170568 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:36.170093 2562 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Feb 17 12:47:36.170568 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:36.170097 2562 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found
Feb 17 12:47:36.170568 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:36.170115 2562 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-799fd54bdd-lm8dz: secret "image-registry-tls" not found
Feb 17 12:47:36.170568 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:36.170142 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebe280b5-2859-4a11-b980-08a9489a08c6-networking-console-plugin-cert podName:ebe280b5-2859-4a11-b980-08a9489a08c6 nodeName:}" failed. No retries permitted until 2026-02-17 12:48:08.170119189 +0000 UTC m=+119.144882563 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/ebe280b5-2859-4a11-b980-08a9489a08c6-networking-console-plugin-cert") pod "networking-console-plugin-5b897fd5fc-9ppd9" (UID: "ebe280b5-2859-4a11-b980-08a9489a08c6") : secret "networking-console-plugin-cert" not found
Feb 17 12:47:36.170568 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:36.170163 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed10247d-2496-4512-b5e3-24cf0aa60754-cert podName:ed10247d-2496-4512-b5e3-24cf0aa60754 nodeName:}" failed. No retries permitted until 2026-02-17 12:48:08.170154479 +0000 UTC m=+119.144917858 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed10247d-2496-4512-b5e3-24cf0aa60754-cert") pod "ingress-canary-6hddw" (UID: "ed10247d-2496-4512-b5e3-24cf0aa60754") : secret "canary-serving-cert" not found
Feb 17 12:47:36.170568 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:36.170181 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-tls podName:5e0da837-f776-4a61-a361-e3a2c5f4a750 nodeName:}" failed. No retries permitted until 2026-02-17 12:48:08.170170826 +0000 UTC m=+119.144934200 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-tls") pod "image-registry-799fd54bdd-lm8dz" (UID: "5e0da837-f776-4a61-a361-e3a2c5f4a750") : secret "image-registry-tls" not found
Feb 17 12:47:36.270804 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:36.270771 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/773c39a0-9562-40a2-a21c-4ae429c7b782-metrics-tls\") pod \"dns-default-p2fjr\" (UID: \"773c39a0-9562-40a2-a21c-4ae429c7b782\") " pod="openshift-dns/dns-default-p2fjr"
Feb 17 12:47:36.270958 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:36.270907 2562 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found
Feb 17 12:47:36.271003 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:47:36.270972 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/773c39a0-9562-40a2-a21c-4ae429c7b782-metrics-tls podName:773c39a0-9562-40a2-a21c-4ae429c7b782 nodeName:}" failed. No retries permitted until 2026-02-17 12:48:08.270957658 +0000 UTC m=+119.245721032 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/773c39a0-9562-40a2-a21c-4ae429c7b782-metrics-tls") pod "dns-default-p2fjr" (UID: "773c39a0-9562-40a2-a21c-4ae429c7b782") : secret "dns-default-metrics-tls" not found
Feb 17 12:47:48.766191 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:47:48.766159 2562 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-v7bfv"
Feb 17 12:48:08.190512 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:48:08.190456 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-tls\") pod \"image-registry-799fd54bdd-lm8dz\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") " pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz"
Feb 17 12:48:08.190903 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:48:08.190530 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed10247d-2496-4512-b5e3-24cf0aa60754-cert\") pod \"ingress-canary-6hddw\" (UID: \"ed10247d-2496-4512-b5e3-24cf0aa60754\") " pod="openshift-ingress-canary/ingress-canary-6hddw"
Feb 17 12:48:08.190903 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:48:08.190567 2562 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found
Feb 17 12:48:08.190903 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:48:08.190578 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/ebe280b5-2859-4a11-b980-08a9489a08c6-networking-console-plugin-cert\") pod \"networking-console-plugin-5b897fd5fc-9ppd9\" (UID: \"ebe280b5-2859-4a11-b980-08a9489a08c6\") " pod="openshift-network-console/networking-console-plugin-5b897fd5fc-9ppd9"
Feb 17 12:48:08.190903 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:48:08.190584 2562 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-799fd54bdd-lm8dz: secret "image-registry-tls" not found
Feb 17 12:48:08.190903 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:48:08.190642 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-tls podName:5e0da837-f776-4a61-a361-e3a2c5f4a750 nodeName:}" failed. No retries permitted until 2026-02-17 12:49:12.190627033 +0000 UTC m=+183.165390407 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-tls") pod "image-registry-799fd54bdd-lm8dz" (UID: "5e0da837-f776-4a61-a361-e3a2c5f4a750") : secret "image-registry-tls" not found
Feb 17 12:48:08.190903 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:48:08.190678 2562 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Feb 17 12:48:08.190903 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:48:08.190730 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebe280b5-2859-4a11-b980-08a9489a08c6-networking-console-plugin-cert podName:ebe280b5-2859-4a11-b980-08a9489a08c6 nodeName:}" failed. No retries permitted until 2026-02-17 12:49:12.190717849 +0000 UTC m=+183.165481221 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/ebe280b5-2859-4a11-b980-08a9489a08c6-networking-console-plugin-cert") pod "networking-console-plugin-5b897fd5fc-9ppd9" (UID: "ebe280b5-2859-4a11-b980-08a9489a08c6") : secret "networking-console-plugin-cert" not found Feb 17 12:48:08.190903 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:48:08.190678 2562 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Feb 17 12:48:08.190903 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:48:08.190756 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed10247d-2496-4512-b5e3-24cf0aa60754-cert podName:ed10247d-2496-4512-b5e3-24cf0aa60754 nodeName:}" failed. No retries permitted until 2026-02-17 12:49:12.190750479 +0000 UTC m=+183.165513852 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed10247d-2496-4512-b5e3-24cf0aa60754-cert") pod "ingress-canary-6hddw" (UID: "ed10247d-2496-4512-b5e3-24cf0aa60754") : secret "canary-serving-cert" not found Feb 17 12:48:08.291455 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:48:08.291429 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/773c39a0-9562-40a2-a21c-4ae429c7b782-metrics-tls\") pod \"dns-default-p2fjr\" (UID: \"773c39a0-9562-40a2-a21c-4ae429c7b782\") " pod="openshift-dns/dns-default-p2fjr" Feb 17 12:48:08.291600 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:48:08.291550 2562 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Feb 17 12:48:08.291600 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:48:08.291597 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/773c39a0-9562-40a2-a21c-4ae429c7b782-metrics-tls podName:773c39a0-9562-40a2-a21c-4ae429c7b782 nodeName:}" failed. No retries permitted until 2026-02-17 12:49:12.291584307 +0000 UTC m=+183.266347685 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/773c39a0-9562-40a2-a21c-4ae429c7b782-metrics-tls") pod "dns-default-p2fjr" (UID: "773c39a0-9562-40a2-a21c-4ae429c7b782") : secret "dns-default-metrics-tls" not found Feb 17 12:48:18.462561 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:48:18.462525 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7bb13ada-375a-497d-aced-02307525f449-metrics-certs\") pod \"network-metrics-daemon-j6r5z\" (UID: \"7bb13ada-375a-497d-aced-02307525f449\") " pod="openshift-multus/network-metrics-daemon-j6r5z" Feb 17 12:48:18.462987 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:48:18.462663 2562 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 17 12:48:18.462987 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:48:18.462719 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7bb13ada-375a-497d-aced-02307525f449-metrics-certs podName:7bb13ada-375a-497d-aced-02307525f449 nodeName:}" failed. No retries permitted until 2026-02-17 12:50:20.46270658 +0000 UTC m=+251.437469953 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7bb13ada-375a-497d-aced-02307525f449-metrics-certs") pod "network-metrics-daemon-j6r5z" (UID: "7bb13ada-375a-497d-aced-02307525f449") : secret "metrics-daemon-secret" not found Feb 17 12:48:58.770719 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:48:58.770646 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-h4hcc_d9f14b23-72e9-4631-b0c5-d568aca52c29/dns-node-resolver/0.log" Feb 17 12:48:59.572746 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:48:59.572717 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-7rjvn_9f08688a-aac4-4adb-b2bd-90ffe60387e3/node-ca/0.log" Feb 17 12:49:07.367586 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:49:07.367543 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[networking-console-plugin-cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-console/networking-console-plugin-5b897fd5fc-9ppd9" podUID="ebe280b5-2859-4a11-b980-08a9489a08c6" Feb 17 12:49:07.376701 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:49:07.376672 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[registry-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz" podUID="5e0da837-f776-4a61-a361-e3a2c5f4a750" Feb 17 12:49:07.420068 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:49:07.420044 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-ingress-canary/ingress-canary-6hddw" podUID="ed10247d-2496-4512-b5e3-24cf0aa60754" Feb 17 12:49:07.479291 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:49:07.479249 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[metrics-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-dns/dns-default-p2fjr" podUID="773c39a0-9562-40a2-a21c-4ae429c7b782" Feb 17 12:49:07.548998 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:49:07.548970 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[metrics-certs], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-multus/network-metrics-daemon-j6r5z" podUID="7bb13ada-375a-497d-aced-02307525f449" Feb 17 12:49:08.017004 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:08.016978 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz" Feb 17 12:49:08.017004 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:08.016989 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6hddw" Feb 17 12:49:08.017228 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:08.016988 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-p2fjr" Feb 17 12:49:08.017228 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:08.016992 2562 util.go:30] "No sandbox for pod can be found. 
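Note: the escalating durationBeforeRetry values in the mount failures above (32s at 12:47:36, 1m4s at 12:48:08, 2m2s at 12:48:18) trace the kubelet's per-volume exponential backoff: each failed MountVolume.SetUp roughly doubles the wait before the next attempt, up to a cap, until the missing secret finally appears. A minimal Go sketch of that retry shape follows; the 500ms floor and 2m2s cap match the delays visible in this log (a 500ms retry appears later, for node-exporter-tls), but they are illustrative assumptions here, not the kubelet's authoritative constants.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed parameters: 500ms initial delay, doubling on each failure,
        // capped at 2m2s. The 500ms, 32s, 1m4s, and 2m2s waits seen in this
        // log all fall on this curve.
        delay := 500 * time.Millisecond
        maxDelay := 122 * time.Second // 2m2s
        for attempt := 1; attempt <= 9; attempt++ {
            fmt.Printf("attempt %d: durationBeforeRetry %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

Because the retry interval has reached minutes by 12:49:07, the pod workers time out first ("context deadline exceeded" above) and the pods are re-synced from scratch once the secrets exist.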
Feb 17 12:49:10.022637 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:10.022610 2562 generic.go:358] "Generic (PLEG): container finished" podID="c8f05b6f-e392-41e6-8041-5c973696dbb5" containerID="bada9a0727b5ebcd978500164a18f162442f833c0cc929ea69908f4ce1f9ea68" exitCode=255
Feb 17 12:49:10.022962 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:10.022680 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-6b9d458cff-gkddm" event={"ID":"c8f05b6f-e392-41e6-8041-5c973696dbb5","Type":"ContainerDied","Data":"bada9a0727b5ebcd978500164a18f162442f833c0cc929ea69908f4ce1f9ea68"}
Feb 17 12:49:10.023039 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:10.023022 2562 scope.go:117] "RemoveContainer" containerID="bada9a0727b5ebcd978500164a18f162442f833c0cc929ea69908f4ce1f9ea68"
Feb 17 12:49:10.023871 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:10.023847 2562 generic.go:358] "Generic (PLEG): container finished" podID="51aee5ab-5849-470c-af18-77488ddcced7" containerID="a17c1705c6513cc8d872c514943c321770266fa4414bb85630855e91be0c934d" exitCode=1
Feb 17 12:49:10.023937 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:10.023889 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5c97f697b9-dxl5x" event={"ID":"51aee5ab-5849-470c-af18-77488ddcced7","Type":"ContainerDied","Data":"a17c1705c6513cc8d872c514943c321770266fa4414bb85630855e91be0c934d"}
Feb 17 12:49:10.024156 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:10.024144 2562 scope.go:117] "RemoveContainer" containerID="a17c1705c6513cc8d872c514943c321770266fa4414bb85630855e91be0c934d"
Feb 17 12:49:10.743819 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:10.743787 2562 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5c97f697b9-dxl5x"
Feb 17 12:49:11.027598 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:11.027507 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-6b9d458cff-gkddm" event={"ID":"c8f05b6f-e392-41e6-8041-5c973696dbb5","Type":"ContainerStarted","Data":"9757fbfb94488a63dad395e756a8ef061a187283c51e9ecdba49d0959c2024c9"}
Feb 17 12:49:11.028993 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:11.028969 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5c97f697b9-dxl5x" event={"ID":"51aee5ab-5849-470c-af18-77488ddcced7","Type":"ContainerStarted","Data":"48c97a89c20875676bb53660f4a7492a634a1a692ea2636c80127b65902a14a4"}
Feb 17 12:49:11.029203 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:11.029185 2562 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5c97f697b9-dxl5x"
Feb 17 12:49:11.029767 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:11.029748 2562 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5c97f697b9-dxl5x"
Feb 17 12:49:12.251759 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:12.251722 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed10247d-2496-4512-b5e3-24cf0aa60754-cert\") pod \"ingress-canary-6hddw\" (UID: \"ed10247d-2496-4512-b5e3-24cf0aa60754\") " pod="openshift-ingress-canary/ingress-canary-6hddw"
Feb 17 12:49:12.252125 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:12.251787 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/ebe280b5-2859-4a11-b980-08a9489a08c6-networking-console-plugin-cert\") pod \"networking-console-plugin-5b897fd5fc-9ppd9\" (UID: \"ebe280b5-2859-4a11-b980-08a9489a08c6\") " pod="openshift-network-console/networking-console-plugin-5b897fd5fc-9ppd9"
Feb 17 12:49:12.252125 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:12.251818 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-tls\") pod \"image-registry-799fd54bdd-lm8dz\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") " pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz"
Feb 17 12:49:12.254137 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:12.254111 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/ebe280b5-2859-4a11-b980-08a9489a08c6-networking-console-plugin-cert\") pod \"networking-console-plugin-5b897fd5fc-9ppd9\" (UID: \"ebe280b5-2859-4a11-b980-08a9489a08c6\") " pod="openshift-network-console/networking-console-plugin-5b897fd5fc-9ppd9"
Feb 17 12:49:12.254248 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:12.254156 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-tls\") pod \"image-registry-799fd54bdd-lm8dz\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") " pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz"
Feb 17 12:49:12.254688 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:12.254666 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed10247d-2496-4512-b5e3-24cf0aa60754-cert\") pod \"ingress-canary-6hddw\" (UID: \"ed10247d-2496-4512-b5e3-24cf0aa60754\") " pod="openshift-ingress-canary/ingress-canary-6hddw"
Feb 17 12:49:12.352336 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:12.352302 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/773c39a0-9562-40a2-a21c-4ae429c7b782-metrics-tls\") pod \"dns-default-p2fjr\" (UID: \"773c39a0-9562-40a2-a21c-4ae429c7b782\") " pod="openshift-dns/dns-default-p2fjr"
Feb 17 12:49:12.354688 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:12.354672 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/773c39a0-9562-40a2-a21c-4ae429c7b782-metrics-tls\") pod \"dns-default-p2fjr\" (UID: \"773c39a0-9562-40a2-a21c-4ae429c7b782\") " pod="openshift-dns/dns-default-p2fjr"
Feb 17 12:49:12.522884 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:12.522817 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-68g2c\""
Feb 17 12:49:12.522884 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:12.522834 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9qj8f\""
Feb 17 12:49:12.522884 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:12.522861 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"default-dockercfg-95vn7\""
Feb 17 12:49:12.523109 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:12.522930 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-944wh\""
Feb 17 12:49:12.528531 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:12.528512 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-p2fjr"
Feb 17 12:49:12.528628 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:12.528578 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5b897fd5fc-9ppd9"
Feb 17 12:49:12.528628 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:12.528584 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6hddw"
Feb 17 12:49:12.528628 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:12.528608 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz"
Feb 17 12:49:12.685776 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:12.685744 2562 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-799fd54bdd-lm8dz"]
Feb 17 12:49:12.688641 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:49:12.688611 2562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e0da837_f776_4a61_a361_e3a2c5f4a750.slice/crio-6768f664e678d9c7bac3f3c312a29cec2e1411e100974103f657803a0ec191cb WatchSource:0}: Error finding container 6768f664e678d9c7bac3f3c312a29cec2e1411e100974103f657803a0ec191cb: Status 404 returned error can't find the container with id 6768f664e678d9c7bac3f3c312a29cec2e1411e100974103f657803a0ec191cb
Feb 17 12:49:12.906359 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:12.906290 2562 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-6hddw"]
Feb 17 12:49:12.910099 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:12.910048 2562 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-5b897fd5fc-9ppd9"]
Feb 17 12:49:12.910398 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:49:12.910356 2562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded10247d_2496_4512_b5e3_24cf0aa60754.slice/crio-e822e859268a5e2ec2403201994b003093c6b4b40db7e02ca0fa38ea40f4f6ae WatchSource:0}: Error finding container e822e859268a5e2ec2403201994b003093c6b4b40db7e02ca0fa38ea40f4f6ae: Status 404 returned error can't find the container with id e822e859268a5e2ec2403201994b003093c6b4b40db7e02ca0fa38ea40f4f6ae
Feb 17 12:49:12.911114 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:12.911067 2562 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-p2fjr"]
Feb 17 12:49:12.913712 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:49:12.913691 2562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod773c39a0_9562_40a2_a21c_4ae429c7b782.slice/crio-be02c20ea7e44a13f126b9f8446fc583ea23336303a5c1ce6536be54e5f5b2f9 WatchSource:0}: Error finding container be02c20ea7e44a13f126b9f8446fc583ea23336303a5c1ce6536be54e5f5b2f9: Status 404 returned error can't find the container with id be02c20ea7e44a13f126b9f8446fc583ea23336303a5c1ce6536be54e5f5b2f9
Feb 17 12:49:12.914767 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:49:12.914716 2562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebe280b5_2859_4a11_b980_08a9489a08c6.slice/crio-3c0b54b1d74a227aa119bae7011f0c193025d2e7ea1028d706ad2d97915ba491 WatchSource:0}: Error finding container 3c0b54b1d74a227aa119bae7011f0c193025d2e7ea1028d706ad2d97915ba491: Status 404 returned error can't find the container with id 3c0b54b1d74a227aa119bae7011f0c193025d2e7ea1028d706ad2d97915ba491
Feb 17 12:49:13.034726 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:13.034695 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-p2fjr" event={"ID":"773c39a0-9562-40a2-a21c-4ae429c7b782","Type":"ContainerStarted","Data":"be02c20ea7e44a13f126b9f8446fc583ea23336303a5c1ce6536be54e5f5b2f9"}
Feb 17 12:49:13.035685 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:13.035651 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-6hddw" event={"ID":"ed10247d-2496-4512-b5e3-24cf0aa60754","Type":"ContainerStarted","Data":"e822e859268a5e2ec2403201994b003093c6b4b40db7e02ca0fa38ea40f4f6ae"}
Feb 17 12:49:13.036924 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:13.036900 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz" event={"ID":"5e0da837-f776-4a61-a361-e3a2c5f4a750","Type":"ContainerStarted","Data":"b2add7d1eb2fc44034a7299ca88209450c18f71b9960367d235e38096e11a16a"}
Feb 17 12:49:13.037021 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:13.036928 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz" event={"ID":"5e0da837-f776-4a61-a361-e3a2c5f4a750","Type":"ContainerStarted","Data":"6768f664e678d9c7bac3f3c312a29cec2e1411e100974103f657803a0ec191cb"}
Feb 17 12:49:13.037021 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:13.037012 2562 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz"
Feb 17 12:49:13.037985 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:13.037960 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5b897fd5fc-9ppd9" event={"ID":"ebe280b5-2859-4a11-b980-08a9489a08c6","Type":"ContainerStarted","Data":"3c0b54b1d74a227aa119bae7011f0c193025d2e7ea1028d706ad2d97915ba491"}
Feb 17 12:49:13.055528 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:13.055489 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz" podStartSLOduration=183.055463391 podStartE2EDuration="3m3.055463391s" podCreationTimestamp="2026-02-17 12:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 12:49:13.054562529 +0000 UTC m=+184.029325914" watchObservedRunningTime="2026-02-17 12:49:13.055463391 +0000 UTC m=+184.030226785"
Feb 17 12:49:15.045675 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:15.045643 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-6hddw" event={"ID":"ed10247d-2496-4512-b5e3-24cf0aa60754","Type":"ContainerStarted","Data":"dd00bc2b4558f2ac2ceccf86c53be45accd3dec7e26c0b2b3838235f463ee372"}
Feb 17 12:49:15.048623 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:15.048318 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5b897fd5fc-9ppd9" event={"ID":"ebe280b5-2859-4a11-b980-08a9489a08c6","Type":"ContainerStarted","Data":"b9494b54631d65b3c0f45a67a04572cc3c0d0aa0e600f3a3a4c8745af284cd80"}
Feb 17 12:49:15.051430 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:15.051401 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-p2fjr" event={"ID":"773c39a0-9562-40a2-a21c-4ae429c7b782","Type":"ContainerStarted","Data":"a93facbe1f4fa4c8d68283ecb0f99d94446a1574a1b92afece130e5bb023d6fe"}
Feb 17 12:49:15.063764 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:15.063563 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-6hddw" podStartSLOduration=129.079600608 podStartE2EDuration="2m11.063547016s" podCreationTimestamp="2026-02-17 12:47:04 +0000 UTC" firstStartedPulling="2026-02-17 12:49:12.912399212 +0000 UTC m=+183.887162585" lastFinishedPulling="2026-02-17 12:49:14.896345609 +0000 UTC m=+185.871108993" observedRunningTime="2026-02-17 12:49:15.062665406 +0000 UTC m=+186.037428802" watchObservedRunningTime="2026-02-17 12:49:15.063547016 +0000 UTC m=+186.038310412"
Feb 17 12:49:15.091084 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:15.091003 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-5b897fd5fc-9ppd9" podStartSLOduration=183.113859158 podStartE2EDuration="3m5.090971096s" podCreationTimestamp="2026-02-17 12:46:10 +0000 UTC" firstStartedPulling="2026-02-17 12:49:12.916713646 +0000 UTC m=+183.891477019" lastFinishedPulling="2026-02-17 12:49:14.893825585 +0000 UTC m=+185.868588957" observedRunningTime="2026-02-17 12:49:15.090836405 +0000 UTC m=+186.065599801" watchObservedRunningTime="2026-02-17 12:49:15.090971096 +0000 UTC m=+186.065734512"
Feb 17 12:49:16.057941 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:16.057896 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-p2fjr" event={"ID":"773c39a0-9562-40a2-a21c-4ae429c7b782","Type":"ContainerStarted","Data":"635155b51db108ecdb51ef6a3ebc8309b9c6c3b9a3b161630a968ff5d3793458"}
Feb 17 12:49:16.074412 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:16.074366 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-p2fjr" podStartSLOduration=130.092490396 podStartE2EDuration="2m12.074352338s" podCreationTimestamp="2026-02-17 12:47:04 +0000 UTC" firstStartedPulling="2026-02-17 12:49:12.915659313 +0000 UTC m=+183.890422693" lastFinishedPulling="2026-02-17 12:49:14.897521256 +0000 UTC m=+185.872284635" observedRunningTime="2026-02-17 12:49:16.074019757 +0000 UTC m=+187.048783153" watchObservedRunningTime="2026-02-17 12:49:16.074352338 +0000 UTC m=+187.049115732"
Feb 17 12:49:17.061307 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:17.061274 2562 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-p2fjr"
Feb 17 12:49:17.414287 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:17.414206 2562 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-runtime-extractor-5nsdw"]
Feb 17 12:49:17.417403 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:17.417388 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-runtime-extractor-5nsdw"
Feb 17 12:49:17.420029 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:17.420007 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-rbac-proxy\""
Feb 17 12:49:17.420029 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:17.420011 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"openshift-service-ca.crt\""
Feb 17 12:49:17.421536 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:17.421519 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-root-ca.crt\""
Feb 17 12:49:17.421626 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:17.421550 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-tls\""
Feb 17 12:49:17.421626 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:17.421553 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-sa-dockercfg-hv7tp\""
Feb 17 12:49:17.430263 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:17.430236 2562 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-5nsdw"]
Feb 17 12:49:17.589105 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:17.589074 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/404bc608-9e88-47b5-a591-0fea2d2e6db0-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-5nsdw\" (UID: \"404bc608-9e88-47b5-a591-0fea2d2e6db0\") " pod="openshift-insights/insights-runtime-extractor-5nsdw"
Feb 17 12:49:17.589252 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:17.589115 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d9vq\" (UniqueName: \"kubernetes.io/projected/404bc608-9e88-47b5-a591-0fea2d2e6db0-kube-api-access-6d9vq\") pod \"insights-runtime-extractor-5nsdw\" (UID: \"404bc608-9e88-47b5-a591-0fea2d2e6db0\") " pod="openshift-insights/insights-runtime-extractor-5nsdw"
Feb 17 12:49:17.589252 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:17.589134 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/404bc608-9e88-47b5-a591-0fea2d2e6db0-data-volume\") pod \"insights-runtime-extractor-5nsdw\" (UID: \"404bc608-9e88-47b5-a591-0fea2d2e6db0\") " pod="openshift-insights/insights-runtime-extractor-5nsdw"
Feb 17 12:49:17.589252 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:17.589190 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/404bc608-9e88-47b5-a591-0fea2d2e6db0-crio-socket\") pod \"insights-runtime-extractor-5nsdw\" (UID: \"404bc608-9e88-47b5-a591-0fea2d2e6db0\") " pod="openshift-insights/insights-runtime-extractor-5nsdw"
Feb 17 12:49:17.589364 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:17.589254 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/404bc608-9e88-47b5-a591-0fea2d2e6db0-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-5nsdw\" (UID: \"404bc608-9e88-47b5-a591-0fea2d2e6db0\") " pod="openshift-insights/insights-runtime-extractor-5nsdw"
Feb 17 12:49:17.689841 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:17.689781 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/404bc608-9e88-47b5-a591-0fea2d2e6db0-crio-socket\") pod \"insights-runtime-extractor-5nsdw\" (UID: \"404bc608-9e88-47b5-a591-0fea2d2e6db0\") " pod="openshift-insights/insights-runtime-extractor-5nsdw"
Feb 17 12:49:17.689841 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:17.689812 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/404bc608-9e88-47b5-a591-0fea2d2e6db0-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-5nsdw\" (UID: \"404bc608-9e88-47b5-a591-0fea2d2e6db0\") " pod="openshift-insights/insights-runtime-extractor-5nsdw"
Feb 17 12:49:17.690017 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:17.689851 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/404bc608-9e88-47b5-a591-0fea2d2e6db0-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-5nsdw\" (UID: \"404bc608-9e88-47b5-a591-0fea2d2e6db0\") " pod="openshift-insights/insights-runtime-extractor-5nsdw"
Feb 17 12:49:17.690017 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:17.689880 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6d9vq\" (UniqueName: \"kubernetes.io/projected/404bc608-9e88-47b5-a591-0fea2d2e6db0-kube-api-access-6d9vq\") pod \"insights-runtime-extractor-5nsdw\" (UID: \"404bc608-9e88-47b5-a591-0fea2d2e6db0\") " pod="openshift-insights/insights-runtime-extractor-5nsdw"
Feb 17 12:49:17.690017 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:17.689892 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/404bc608-9e88-47b5-a591-0fea2d2e6db0-crio-socket\") pod \"insights-runtime-extractor-5nsdw\" (UID: \"404bc608-9e88-47b5-a591-0fea2d2e6db0\") " pod="openshift-insights/insights-runtime-extractor-5nsdw"
Feb 17 12:49:17.690017 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:17.689896 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/404bc608-9e88-47b5-a591-0fea2d2e6db0-data-volume\") pod \"insights-runtime-extractor-5nsdw\" (UID: \"404bc608-9e88-47b5-a591-0fea2d2e6db0\") " pod="openshift-insights/insights-runtime-extractor-5nsdw"
Feb 17 12:49:17.690272 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:17.690248 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/404bc608-9e88-47b5-a591-0fea2d2e6db0-data-volume\") pod \"insights-runtime-extractor-5nsdw\" (UID: \"404bc608-9e88-47b5-a591-0fea2d2e6db0\") " pod="openshift-insights/insights-runtime-extractor-5nsdw"
Feb 17 12:49:17.690386 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:17.690339 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/404bc608-9e88-47b5-a591-0fea2d2e6db0-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-5nsdw\" (UID: \"404bc608-9e88-47b5-a591-0fea2d2e6db0\") " pod="openshift-insights/insights-runtime-extractor-5nsdw"
Feb 17 12:49:17.691992 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:17.691972 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/404bc608-9e88-47b5-a591-0fea2d2e6db0-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-5nsdw\" (UID: \"404bc608-9e88-47b5-a591-0fea2d2e6db0\") " pod="openshift-insights/insights-runtime-extractor-5nsdw"
Feb 17 12:49:17.698857 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:17.698836 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d9vq\" (UniqueName: \"kubernetes.io/projected/404bc608-9e88-47b5-a591-0fea2d2e6db0-kube-api-access-6d9vq\") pod \"insights-runtime-extractor-5nsdw\" (UID: \"404bc608-9e88-47b5-a591-0fea2d2e6db0\") " pod="openshift-insights/insights-runtime-extractor-5nsdw"
Feb 17 12:49:17.727419 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:17.727390 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-runtime-extractor-5nsdw"
Feb 17 12:49:17.838049 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:17.838020 2562 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-5nsdw"]
Feb 17 12:49:17.840878 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:49:17.840841 2562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod404bc608_9e88_47b5_a591_0fea2d2e6db0.slice/crio-a1b62b3b37593288a7a5107d8bd7b3667030a6165f9e0aa80684efff0f27a3ac WatchSource:0}: Error finding container a1b62b3b37593288a7a5107d8bd7b3667030a6165f9e0aa80684efff0f27a3ac: Status 404 returned error can't find the container with id a1b62b3b37593288a7a5107d8bd7b3667030a6165f9e0aa80684efff0f27a3ac
Feb 17 12:49:18.064718 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:18.064689 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-5nsdw" event={"ID":"404bc608-9e88-47b5-a591-0fea2d2e6db0","Type":"ContainerStarted","Data":"273aba6265f465343356d907b0b0c3adb9881c1b3fb235267883e0e2d74264c7"}
Feb 17 12:49:18.065041 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:18.064728 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-5nsdw" event={"ID":"404bc608-9e88-47b5-a591-0fea2d2e6db0","Type":"ContainerStarted","Data":"a1b62b3b37593288a7a5107d8bd7b3667030a6165f9e0aa80684efff0f27a3ac"}
Feb 17 12:49:19.068104 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:19.068071 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-5nsdw" event={"ID":"404bc608-9e88-47b5-a591-0fea2d2e6db0","Type":"ContainerStarted","Data":"ca6b3cad3b721bc0e39dbc4117fb480f73a7e6f8877772ffb8eea6948f15875b"}
Feb 17 12:49:20.536577 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:20.536540 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j6r5z"
Feb 17 12:49:21.074398 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:21.074367 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-5nsdw" event={"ID":"404bc608-9e88-47b5-a591-0fea2d2e6db0","Type":"ContainerStarted","Data":"7551553e21e5af2e65e36bd3129d2985c22faa6e777ec61f3b3dca28e095e99f"}
Feb 17 12:49:21.091419 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:21.091375 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-runtime-extractor-5nsdw" podStartSLOduration=1.950628398 podStartE2EDuration="4.091363696s" podCreationTimestamp="2026-02-17 12:49:17 +0000 UTC" firstStartedPulling="2026-02-17 12:49:17.893622111 +0000 UTC m=+188.868385484" lastFinishedPulling="2026-02-17 12:49:20.034357403 +0000 UTC m=+191.009120782" observedRunningTime="2026-02-17 12:49:21.090575363 +0000 UTC m=+192.065338758" watchObservedRunningTime="2026-02-17 12:49:21.091363696 +0000 UTC m=+192.066127091"
Feb 17 12:49:27.067028 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:27.066999 2562 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-p2fjr"
Feb 17 12:49:32.532688 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:32.532653 2562 patch_prober.go:28] interesting pod/image-registry-799fd54bdd-lm8dz container/registry namespace/openshift-image-registry: Liveness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]}
Feb 17 12:49:32.533044 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:32.532706 2562 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz" podUID="5e0da837-f776-4a61-a361-e3a2c5f4a750" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 17 12:49:32.910951 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:32.910865 2562 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-sxsff"]
Feb 17 12:49:32.913906 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:32.913885 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-sxsff"
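Note: a single liveness failure like the 503 logged above does not by itself restart the container; the kubelet kills a container only after failureThreshold consecutive liveness failures, and the registry indeed reports ready again at 12:49:34 below. For illustration only, a probe of the shape these prober.go messages imply, written against the k8s.io/api types; the actual image-registry Deployment is not shown in this log, so the path, port, and thresholds here are assumptions (FailureThreshold 3 and PeriodSeconds 10 are the Kubernetes defaults):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // A hypothetical HTTPS liveness probe: the kubelet restarts the
    // container only after three consecutive failed GETs.
    var registryLiveness = corev1.Probe{
        ProbeHandler: corev1.ProbeHandler{
            HTTPGet: &corev1.HTTPGetAction{
                Path:   "/healthz",          // assumed path
                Port:   intstr.FromInt(5000), // assumed port
                Scheme: corev1.URISchemeHTTPS,
            },
        },
        PeriodSeconds:    10,
        FailureThreshold: 3,
    }

    func main() {
        fmt.Printf("%+v\n", registryLiveness)
    }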
Feb 17 12:49:32.916526 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:32.916500 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-kube-rbac-proxy-config\""
Feb 17 12:49:32.917000 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:32.916979 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-dockercfg-4ggjd\""
Feb 17 12:49:32.917246 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:32.917228 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-tls\""
Feb 17 12:49:32.917398 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:32.917383 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"node-exporter-accelerators-collector-config\""
Feb 17 12:49:32.918460 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:32.918438 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"openshift-service-ca.crt\""
Feb 17 12:49:32.918609 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:32.918585 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"metrics-client-ca\""
Feb 17 12:49:32.919538 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:32.919518 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kube-root-ca.crt\""
Feb 17 12:49:33.093162 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:33.093134 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/30b1e3ce-3b31-4b13-aeb2-3d6818e33206-node-exporter-textfile\") pod \"node-exporter-sxsff\" (UID: \"30b1e3ce-3b31-4b13-aeb2-3d6818e33206\") " pod="openshift-monitoring/node-exporter-sxsff"
Feb 17 12:49:33.093288 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:33.093168 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/30b1e3ce-3b31-4b13-aeb2-3d6818e33206-node-exporter-wtmp\") pod \"node-exporter-sxsff\" (UID: \"30b1e3ce-3b31-4b13-aeb2-3d6818e33206\") " pod="openshift-monitoring/node-exporter-sxsff"
Feb 17 12:49:33.093288 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:33.093190 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/30b1e3ce-3b31-4b13-aeb2-3d6818e33206-node-exporter-accelerators-collector-config\") pod \"node-exporter-sxsff\" (UID: \"30b1e3ce-3b31-4b13-aeb2-3d6818e33206\") " pod="openshift-monitoring/node-exporter-sxsff"
Feb 17 12:49:33.093288 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:33.093267 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/30b1e3ce-3b31-4b13-aeb2-3d6818e33206-metrics-client-ca\") pod \"node-exporter-sxsff\" (UID: \"30b1e3ce-3b31-4b13-aeb2-3d6818e33206\") " pod="openshift-monitoring/node-exporter-sxsff"
Feb 17 12:49:33.093386 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:33.093301 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/30b1e3ce-3b31-4b13-aeb2-3d6818e33206-node-exporter-tls\") pod \"node-exporter-sxsff\" (UID: \"30b1e3ce-3b31-4b13-aeb2-3d6818e33206\") " pod="openshift-monitoring/node-exporter-sxsff"
Feb 17 12:49:33.093386 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:33.093321 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/30b1e3ce-3b31-4b13-aeb2-3d6818e33206-sys\") pod \"node-exporter-sxsff\" (UID: \"30b1e3ce-3b31-4b13-aeb2-3d6818e33206\") " pod="openshift-monitoring/node-exporter-sxsff"
Feb 17 12:49:33.093386 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:33.093375 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz6c5\" (UniqueName: \"kubernetes.io/projected/30b1e3ce-3b31-4b13-aeb2-3d6818e33206-kube-api-access-rz6c5\") pod \"node-exporter-sxsff\" (UID: \"30b1e3ce-3b31-4b13-aeb2-3d6818e33206\") " pod="openshift-monitoring/node-exporter-sxsff"
Feb 17 12:49:33.093496 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:33.093405 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/30b1e3ce-3b31-4b13-aeb2-3d6818e33206-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-sxsff\" (UID: \"30b1e3ce-3b31-4b13-aeb2-3d6818e33206\") " pod="openshift-monitoring/node-exporter-sxsff"
Feb 17 12:49:33.093496 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:33.093433 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/30b1e3ce-3b31-4b13-aeb2-3d6818e33206-root\") pod \"node-exporter-sxsff\" (UID: \"30b1e3ce-3b31-4b13-aeb2-3d6818e33206\") " pod="openshift-monitoring/node-exporter-sxsff"
Feb 17 12:49:33.194488 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:33.194403 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/30b1e3ce-3b31-4b13-aeb2-3d6818e33206-node-exporter-wtmp\") pod \"node-exporter-sxsff\" (UID: \"30b1e3ce-3b31-4b13-aeb2-3d6818e33206\") " pod="openshift-monitoring/node-exporter-sxsff"
Feb 17 12:49:33.194488 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:33.194458 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/30b1e3ce-3b31-4b13-aeb2-3d6818e33206-node-exporter-accelerators-collector-config\") pod \"node-exporter-sxsff\" (UID: \"30b1e3ce-3b31-4b13-aeb2-3d6818e33206\") " pod="openshift-monitoring/node-exporter-sxsff"
Feb 17 12:49:33.194679 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:33.194500 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/30b1e3ce-3b31-4b13-aeb2-3d6818e33206-metrics-client-ca\") pod \"node-exporter-sxsff\" (UID: \"30b1e3ce-3b31-4b13-aeb2-3d6818e33206\") " pod="openshift-monitoring/node-exporter-sxsff"
Feb 17 12:49:33.194679 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:33.194520 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/30b1e3ce-3b31-4b13-aeb2-3d6818e33206-node-exporter-tls\") pod \"node-exporter-sxsff\" (UID: \"30b1e3ce-3b31-4b13-aeb2-3d6818e33206\") " pod="openshift-monitoring/node-exporter-sxsff"
Feb 17 12:49:33.194679 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:33.194537 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/30b1e3ce-3b31-4b13-aeb2-3d6818e33206-sys\") pod \"node-exporter-sxsff\" (UID: \"30b1e3ce-3b31-4b13-aeb2-3d6818e33206\") " pod="openshift-monitoring/node-exporter-sxsff"
Feb 17 12:49:33.194679 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:33.194567 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rz6c5\" (UniqueName: \"kubernetes.io/projected/30b1e3ce-3b31-4b13-aeb2-3d6818e33206-kube-api-access-rz6c5\") pod \"node-exporter-sxsff\" (UID: \"30b1e3ce-3b31-4b13-aeb2-3d6818e33206\") " pod="openshift-monitoring/node-exporter-sxsff"
Feb 17 12:49:33.194679 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:33.194571 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/30b1e3ce-3b31-4b13-aeb2-3d6818e33206-node-exporter-wtmp\") pod \"node-exporter-sxsff\" (UID: \"30b1e3ce-3b31-4b13-aeb2-3d6818e33206\") " pod="openshift-monitoring/node-exporter-sxsff"
Feb 17 12:49:33.194679 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:33.194595 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/30b1e3ce-3b31-4b13-aeb2-3d6818e33206-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-sxsff\" (UID: \"30b1e3ce-3b31-4b13-aeb2-3d6818e33206\") " pod="openshift-monitoring/node-exporter-sxsff"
Feb 17 12:49:33.194679 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:33.194621 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/30b1e3ce-3b31-4b13-aeb2-3d6818e33206-root\") pod \"node-exporter-sxsff\" (UID: \"30b1e3ce-3b31-4b13-aeb2-3d6818e33206\") " pod="openshift-monitoring/node-exporter-sxsff"
Feb 17 12:49:33.194679 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:33.194663 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/30b1e3ce-3b31-4b13-aeb2-3d6818e33206-sys\") pod \"node-exporter-sxsff\" (UID: \"30b1e3ce-3b31-4b13-aeb2-3d6818e33206\") " pod="openshift-monitoring/node-exporter-sxsff"
Feb 17 12:49:33.194982 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:33.194673 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/30b1e3ce-3b31-4b13-aeb2-3d6818e33206-node-exporter-textfile\") pod \"node-exporter-sxsff\" (UID: \"30b1e3ce-3b31-4b13-aeb2-3d6818e33206\") " pod="openshift-monitoring/node-exporter-sxsff"
Feb 17 12:49:33.194982 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:49:33.194695 2562 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found
Feb 17 12:49:33.194982 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:49:33.194772 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30b1e3ce-3b31-4b13-aeb2-3d6818e33206-node-exporter-tls podName:30b1e3ce-3b31-4b13-aeb2-3d6818e33206 nodeName:}" failed. No retries permitted until 2026-02-17 12:49:33.694750901 +0000 UTC m=+204.669514273 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/30b1e3ce-3b31-4b13-aeb2-3d6818e33206-node-exporter-tls") pod "node-exporter-sxsff" (UID: "30b1e3ce-3b31-4b13-aeb2-3d6818e33206") : secret "node-exporter-tls" not found
Feb 17 12:49:33.194982 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:33.194771 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/30b1e3ce-3b31-4b13-aeb2-3d6818e33206-root\") pod \"node-exporter-sxsff\" (UID: \"30b1e3ce-3b31-4b13-aeb2-3d6818e33206\") " pod="openshift-monitoring/node-exporter-sxsff"
Feb 17 12:49:33.194982 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:33.194915 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/30b1e3ce-3b31-4b13-aeb2-3d6818e33206-node-exporter-textfile\") pod \"node-exporter-sxsff\" (UID: \"30b1e3ce-3b31-4b13-aeb2-3d6818e33206\") " pod="openshift-monitoring/node-exporter-sxsff"
Feb 17 12:49:33.195147 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:33.195122 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/30b1e3ce-3b31-4b13-aeb2-3d6818e33206-node-exporter-accelerators-collector-config\") pod \"node-exporter-sxsff\" (UID: \"30b1e3ce-3b31-4b13-aeb2-3d6818e33206\") " pod="openshift-monitoring/node-exporter-sxsff"
Feb 17 12:49:33.195180 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:33.195156 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/30b1e3ce-3b31-4b13-aeb2-3d6818e33206-metrics-client-ca\") pod \"node-exporter-sxsff\" (UID: \"30b1e3ce-3b31-4b13-aeb2-3d6818e33206\") " pod="openshift-monitoring/node-exporter-sxsff"
Feb 17 12:49:33.196890 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:33.196870 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/30b1e3ce-3b31-4b13-aeb2-3d6818e33206-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-sxsff\" (UID: \"30b1e3ce-3b31-4b13-aeb2-3d6818e33206\") " pod="openshift-monitoring/node-exporter-sxsff"
Feb 17 12:49:33.203439 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:33.203420 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz6c5\" (UniqueName: \"kubernetes.io/projected/30b1e3ce-3b31-4b13-aeb2-3d6818e33206-kube-api-access-rz6c5\") pod \"node-exporter-sxsff\" (UID: \"30b1e3ce-3b31-4b13-aeb2-3d6818e33206\") " pod="openshift-monitoring/node-exporter-sxsff"
Feb 17 12:49:33.700005 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:33.699977 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/30b1e3ce-3b31-4b13-aeb2-3d6818e33206-node-exporter-tls\") pod \"node-exporter-sxsff\" (UID: \"30b1e3ce-3b31-4b13-aeb2-3d6818e33206\") " pod="openshift-monitoring/node-exporter-sxsff"
Feb 17 12:49:33.702142 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:33.702113 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/30b1e3ce-3b31-4b13-aeb2-3d6818e33206-node-exporter-tls\") pod \"node-exporter-sxsff\" (UID: \"30b1e3ce-3b31-4b13-aeb2-3d6818e33206\") " pod="openshift-monitoring/node-exporter-sxsff"
Feb 17 12:49:33.827372 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:33.827349 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-sxsff"
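Note: node-exporter-tls fails exactly once and is retried after only 500ms, succeeding at 12:49:33.702. The backoff is tracked per operation key in nestedpendingoperations, so a freshly failing volume starts at the minimum delay rather than inheriting the 2m2s other volumes reached earlier in this boot. When a mount is stuck like the earlier ones were, the thing to watch is the secret itself; a small client-go diagnostic sketch follows (the kubeconfig location and the 2s poll interval are assumptions for illustration):

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the default kubeconfig (~/.kube/config).
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll for the secret the kubelet was retrying on; once it exists,
        // the next MountVolume.SetUp attempt should succeed.
        for {
            _, err := client.CoreV1().Secrets("openshift-monitoring").
                Get(context.TODO(), "node-exporter-tls", metav1.GetOptions{})
            if err == nil {
                fmt.Println("secret present")
                return
            }
            fmt.Println("still waiting:", err)
            time.Sleep(2 * time.Second)
        }
    }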
Feb 17 12:49:33.836649 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:49:33.836622 2562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30b1e3ce_3b31_4b13_aeb2_3d6818e33206.slice/crio-bc72699f9282be273db87eaabcf7b478521816a6c43849359b475b1f6376a1d0 WatchSource:0}: Error finding container bc72699f9282be273db87eaabcf7b478521816a6c43849359b475b1f6376a1d0: Status 404 returned error can't find the container with id bc72699f9282be273db87eaabcf7b478521816a6c43849359b475b1f6376a1d0
Feb 17 12:49:34.045222 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:34.045196 2562 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz"
Feb 17 12:49:34.107063 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:34.107035 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-sxsff" event={"ID":"30b1e3ce-3b31-4b13-aeb2-3d6818e33206","Type":"ContainerStarted","Data":"bc72699f9282be273db87eaabcf7b478521816a6c43849359b475b1f6376a1d0"}
Feb 17 12:49:35.110486 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:35.110443 2562 generic.go:358] "Generic (PLEG): container finished" podID="30b1e3ce-3b31-4b13-aeb2-3d6818e33206" containerID="700eda4f194813cc0481fc23069189a72c176322274d3f2e16230f29103ae417" exitCode=0
Feb 17 12:49:35.110839 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:35.110500 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-sxsff" event={"ID":"30b1e3ce-3b31-4b13-aeb2-3d6818e33206","Type":"ContainerDied","Data":"700eda4f194813cc0481fc23069189a72c176322274d3f2e16230f29103ae417"}
Feb 17 12:49:36.114780 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:36.114748 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-sxsff" event={"ID":"30b1e3ce-3b31-4b13-aeb2-3d6818e33206","Type":"ContainerStarted","Data":"674199129c7ebef4ad828440eec7ed1dab284089db7f80c8867a76ed3d0fc895"}
Feb 17 12:49:36.114780 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:36.114782 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-sxsff" event={"ID":"30b1e3ce-3b31-4b13-aeb2-3d6818e33206","Type":"ContainerStarted","Data":"7fd8357dab347c4aec16f2e567cc4d3b7576db76a354669f61de15d42db5c1a3"}
Feb 17 12:49:36.134723 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:36.134666 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-sxsff" podStartSLOduration=3.507817271 podStartE2EDuration="4.134653146s" podCreationTimestamp="2026-02-17 12:49:32 +0000 UTC" firstStartedPulling="2026-02-17 12:49:33.838456693 +0000 UTC m=+204.813220066" lastFinishedPulling="2026-02-17 12:49:34.465292561 +0000 UTC m=+205.440055941" observedRunningTime="2026-02-17 12:49:36.133484433 +0000 UTC m=+207.108247828" watchObservedRunningTime="2026-02-17 12:49:36.134653146 +0000 UTC m=+207.109416541"
Feb 17 12:49:39.589800 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:49:39.589773 2562 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-799fd54bdd-lm8dz"]
Feb 17 12:50:04.608097 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:04.608043 2562 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz" podUID="5e0da837-f776-4a61-a361-e3a2c5f4a750" containerName="registry" containerID="cri-o://b2add7d1eb2fc44034a7299ca88209450c18f71b9960367d235e38096e11a16a" gracePeriod=30
Feb 17 12:50:04.705926 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:04.705884 2562 prober.go:120] "Probe failed" probeType="Liveness" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9" podUID="c4816a20-7b75-4aa1-a0f3-b366e10dba38" containerName="service-proxy" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 12:50:04.855273 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:04.855250 2562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz"
Feb 17 12:50:04.912368 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:04.912312 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-bound-sa-token\") pod \"5e0da837-f776-4a61-a361-e3a2c5f4a750\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") "
Feb 17 12:50:04.912368 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:04.912348 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-tls\") pod \"5e0da837-f776-4a61-a361-e3a2c5f4a750\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") "
Feb 17 12:50:04.912368 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:04.912364 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66czm\" (UniqueName: \"kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-kube-api-access-66czm\") pod \"5e0da837-f776-4a61-a361-e3a2c5f4a750\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") "
Feb 17 12:50:04.912553 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:04.912399 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5e0da837-f776-4a61-a361-e3a2c5f4a750-trusted-ca\") pod \"5e0da837-f776-4a61-a361-e3a2c5f4a750\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") "
Feb 17 12:50:04.912553 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:04.912525 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5e0da837-f776-4a61-a361-e3a2c5f4a750-ca-trust-extracted\") pod \"5e0da837-f776-4a61-a361-e3a2c5f4a750\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") "
Feb 17 12:50:04.912634 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:04.912566 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-certificates\") pod \"5e0da837-f776-4a61-a361-e3a2c5f4a750\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") "
Feb 17 12:50:04.912634 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:04.912609 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5e0da837-f776-4a61-a361-e3a2c5f4a750-installation-pull-secrets\") pod \"5e0da837-f776-4a61-a361-e3a2c5f4a750\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") "
Feb 17 12:50:04.912735 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:04.912639 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/5e0da837-f776-4a61-a361-e3a2c5f4a750-image-registry-private-configuration\") pod \"5e0da837-f776-4a61-a361-e3a2c5f4a750\" (UID: \"5e0da837-f776-4a61-a361-e3a2c5f4a750\") "
Feb 17 12:50:04.912808 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:04.912787 2562 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e0da837-f776-4a61-a361-e3a2c5f4a750-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "5e0da837-f776-4a61-a361-e3a2c5f4a750" (UID: "5e0da837-f776-4a61-a361-e3a2c5f4a750"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 17 12:50:04.912935 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:04.912915 2562 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5e0da837-f776-4a61-a361-e3a2c5f4a750-trusted-ca\") on node \"ip-10-0-132-113.ec2.internal\" DevicePath \"\""
Feb 17 12:50:04.913076 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:04.913040 2562 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "5e0da837-f776-4a61-a361-e3a2c5f4a750" (UID: "5e0da837-f776-4a61-a361-e3a2c5f4a750"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 17 12:50:04.914914 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:04.914861 2562 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "5e0da837-f776-4a61-a361-e3a2c5f4a750" (UID: "5e0da837-f776-4a61-a361-e3a2c5f4a750"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 12:50:04.914914 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:04.914894 2562 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "5e0da837-f776-4a61-a361-e3a2c5f4a750" (UID: "5e0da837-f776-4a61-a361-e3a2c5f4a750"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 12:50:04.915178 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:04.915150 2562 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-kube-api-access-66czm" (OuterVolumeSpecName: "kube-api-access-66czm") pod "5e0da837-f776-4a61-a361-e3a2c5f4a750" (UID: "5e0da837-f776-4a61-a361-e3a2c5f4a750"). InnerVolumeSpecName "kube-api-access-66czm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 12:50:04.915178 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:04.915163 2562 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e0da837-f776-4a61-a361-e3a2c5f4a750-image-registry-private-configuration" (OuterVolumeSpecName: "image-registry-private-configuration") pod "5e0da837-f776-4a61-a361-e3a2c5f4a750" (UID: "5e0da837-f776-4a61-a361-e3a2c5f4a750"). InnerVolumeSpecName "image-registry-private-configuration".
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 12:50:04.915277 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:04.915207 2562 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e0da837-f776-4a61-a361-e3a2c5f4a750-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "5e0da837-f776-4a61-a361-e3a2c5f4a750" (UID: "5e0da837-f776-4a61-a361-e3a2c5f4a750"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 12:50:04.920905 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:04.920882 2562 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e0da837-f776-4a61-a361-e3a2c5f4a750-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "5e0da837-f776-4a61-a361-e3a2c5f4a750" (UID: "5e0da837-f776-4a61-a361-e3a2c5f4a750"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 12:50:05.013904 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:05.013882 2562 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-bound-sa-token\") on node \"ip-10-0-132-113.ec2.internal\" DevicePath \"\"" Feb 17 12:50:05.013904 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:05.013903 2562 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-tls\") on node \"ip-10-0-132-113.ec2.internal\" DevicePath \"\"" Feb 17 12:50:05.014031 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:05.013913 2562 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-66czm\" (UniqueName: \"kubernetes.io/projected/5e0da837-f776-4a61-a361-e3a2c5f4a750-kube-api-access-66czm\") on node \"ip-10-0-132-113.ec2.internal\" DevicePath \"\"" Feb 17 12:50:05.014031 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:05.013922 2562 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5e0da837-f776-4a61-a361-e3a2c5f4a750-ca-trust-extracted\") on node \"ip-10-0-132-113.ec2.internal\" DevicePath \"\"" Feb 17 12:50:05.014031 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:05.013931 2562 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5e0da837-f776-4a61-a361-e3a2c5f4a750-registry-certificates\") on node \"ip-10-0-132-113.ec2.internal\" DevicePath \"\"" Feb 17 12:50:05.014031 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:05.013940 2562 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5e0da837-f776-4a61-a361-e3a2c5f4a750-installation-pull-secrets\") on node \"ip-10-0-132-113.ec2.internal\" DevicePath \"\"" Feb 17 12:50:05.014031 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:05.013950 2562 reconciler_common.go:299] "Volume detached for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/5e0da837-f776-4a61-a361-e3a2c5f4a750-image-registry-private-configuration\") on node \"ip-10-0-132-113.ec2.internal\" DevicePath \"\"" Feb 17 12:50:05.185317 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:05.185260 2562 generic.go:358] "Generic (PLEG): container finished" podID="5e0da837-f776-4a61-a361-e3a2c5f4a750" containerID="b2add7d1eb2fc44034a7299ca88209450c18f71b9960367d235e38096e11a16a" 
exitCode=0 Feb 17 12:50:05.185317 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:05.185299 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz" event={"ID":"5e0da837-f776-4a61-a361-e3a2c5f4a750","Type":"ContainerDied","Data":"b2add7d1eb2fc44034a7299ca88209450c18f71b9960367d235e38096e11a16a"} Feb 17 12:50:05.185439 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:05.185328 2562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz" Feb 17 12:50:05.185439 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:05.185344 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-799fd54bdd-lm8dz" event={"ID":"5e0da837-f776-4a61-a361-e3a2c5f4a750","Type":"ContainerDied","Data":"6768f664e678d9c7bac3f3c312a29cec2e1411e100974103f657803a0ec191cb"} Feb 17 12:50:05.185439 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:05.185365 2562 scope.go:117] "RemoveContainer" containerID="b2add7d1eb2fc44034a7299ca88209450c18f71b9960367d235e38096e11a16a" Feb 17 12:50:05.193296 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:05.193278 2562 scope.go:117] "RemoveContainer" containerID="b2add7d1eb2fc44034a7299ca88209450c18f71b9960367d235e38096e11a16a" Feb 17 12:50:05.193547 ip-10-0-132-113 kubenswrapper[2562]: E0217 12:50:05.193528 2562 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2add7d1eb2fc44034a7299ca88209450c18f71b9960367d235e38096e11a16a\": container with ID starting with b2add7d1eb2fc44034a7299ca88209450c18f71b9960367d235e38096e11a16a not found: ID does not exist" containerID="b2add7d1eb2fc44034a7299ca88209450c18f71b9960367d235e38096e11a16a" Feb 17 12:50:05.193623 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:05.193557 2562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2add7d1eb2fc44034a7299ca88209450c18f71b9960367d235e38096e11a16a"} err="failed to get container status \"b2add7d1eb2fc44034a7299ca88209450c18f71b9960367d235e38096e11a16a\": rpc error: code = NotFound desc = could not find container \"b2add7d1eb2fc44034a7299ca88209450c18f71b9960367d235e38096e11a16a\": container with ID starting with b2add7d1eb2fc44034a7299ca88209450c18f71b9960367d235e38096e11a16a not found: ID does not exist" Feb 17 12:50:05.212867 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:05.212846 2562 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-799fd54bdd-lm8dz"] Feb 17 12:50:05.219622 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:05.219605 2562 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-799fd54bdd-lm8dz"] Feb 17 12:50:05.540796 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:05.540771 2562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e0da837-f776-4a61-a361-e3a2c5f4a750" path="/var/lib/kubelet/pods/5e0da837-f776-4a61-a361-e3a2c5f4a750/volumes" Feb 17 12:50:14.706378 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:14.706341 2562 prober.go:120] "Probe failed" probeType="Liveness" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9" podUID="c4816a20-7b75-4aa1-a0f3-b366e10dba38" containerName="service-proxy" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 12:50:20.516746 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:20.516708 2562 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7bb13ada-375a-497d-aced-02307525f449-metrics-certs\") pod \"network-metrics-daemon-j6r5z\" (UID: \"7bb13ada-375a-497d-aced-02307525f449\") " pod="openshift-multus/network-metrics-daemon-j6r5z" Feb 17 12:50:20.518935 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:20.518912 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7bb13ada-375a-497d-aced-02307525f449-metrics-certs\") pod \"network-metrics-daemon-j6r5z\" (UID: \"7bb13ada-375a-497d-aced-02307525f449\") " pod="openshift-multus/network-metrics-daemon-j6r5z" Feb 17 12:50:20.540369 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:20.540345 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-b7dtp\"" Feb 17 12:50:20.547713 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:20.547698 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j6r5z" Feb 17 12:50:20.658354 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:20.658326 2562 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-j6r5z"] Feb 17 12:50:20.661716 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:50:20.661690 2562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bb13ada_375a_497d_aced_02307525f449.slice/crio-be9dc5c2f99e703929b91774b33ce0ab039495b9d7a40ccf16fe8a0b17fdf02b WatchSource:0}: Error finding container be9dc5c2f99e703929b91774b33ce0ab039495b9d7a40ccf16fe8a0b17fdf02b: Status 404 returned error can't find the container with id be9dc5c2f99e703929b91774b33ce0ab039495b9d7a40ccf16fe8a0b17fdf02b Feb 17 12:50:21.226567 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:21.226527 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-j6r5z" event={"ID":"7bb13ada-375a-497d-aced-02307525f449","Type":"ContainerStarted","Data":"be9dc5c2f99e703929b91774b33ce0ab039495b9d7a40ccf16fe8a0b17fdf02b"} Feb 17 12:50:22.230163 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:22.230128 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-j6r5z" event={"ID":"7bb13ada-375a-497d-aced-02307525f449","Type":"ContainerStarted","Data":"6186b5880cc1ecccceb68af12f0a01d0e1b935cbb9725abca263b79aa0444db6"} Feb 17 12:50:22.230163 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:22.230165 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-j6r5z" event={"ID":"7bb13ada-375a-497d-aced-02307525f449","Type":"ContainerStarted","Data":"c1aa0a487dc4a773a24e03ca932f8977315a47f047da41b95968472a7aa2e60f"} Feb 17 12:50:22.245959 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:22.245910 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-j6r5z" podStartSLOduration=252.368708754 podStartE2EDuration="4m13.245894887s" podCreationTimestamp="2026-02-17 12:46:09 +0000 UTC" firstStartedPulling="2026-02-17 12:50:20.663447713 +0000 UTC m=+251.638211086" lastFinishedPulling="2026-02-17 12:50:21.540633832 +0000 UTC m=+252.515397219" observedRunningTime="2026-02-17 12:50:22.244102718 +0000 UTC m=+253.218866127" watchObservedRunningTime="2026-02-17 12:50:22.245894887 
+0000 UTC m=+253.220658284" Feb 17 12:50:24.706213 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:24.706179 2562 prober.go:120] "Probe failed" probeType="Liveness" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9" podUID="c4816a20-7b75-4aa1-a0f3-b366e10dba38" containerName="service-proxy" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 12:50:24.706649 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:24.706252 2562 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9" Feb 17 12:50:24.706705 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:24.706671 2562 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="service-proxy" containerStatusID={"Type":"cri-o","ID":"8479c3180aadb3e50114be0b10672eee9cd1a909be54f131f57227e54efaaee8"} pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9" containerMessage="Container service-proxy failed liveness probe, will be restarted" Feb 17 12:50:24.706747 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:24.706705 2562 kuberuntime_container.go:864] "Killing container with a grace period" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9" podUID="c4816a20-7b75-4aa1-a0f3-b366e10dba38" containerName="service-proxy" containerID="cri-o://8479c3180aadb3e50114be0b10672eee9cd1a909be54f131f57227e54efaaee8" gracePeriod=30 Feb 17 12:50:25.242742 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:25.242709 2562 generic.go:358] "Generic (PLEG): container finished" podID="c4816a20-7b75-4aa1-a0f3-b366e10dba38" containerID="8479c3180aadb3e50114be0b10672eee9cd1a909be54f131f57227e54efaaee8" exitCode=2 Feb 17 12:50:25.242873 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:25.242751 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9" event={"ID":"c4816a20-7b75-4aa1-a0f3-b366e10dba38","Type":"ContainerDied","Data":"8479c3180aadb3e50114be0b10672eee9cd1a909be54f131f57227e54efaaee8"} Feb 17 12:50:25.242873 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:50:25.242782 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-784bf8c68f-6mtl9" event={"ID":"c4816a20-7b75-4aa1-a0f3-b366e10dba38","Type":"ContainerStarted","Data":"2a273fe6ebfab4d1a59bcbd9234e366c862066e7281113dd0e071a13d3881813"} Feb 17 12:51:09.498118 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:51:09.498091 2562 kubelet.go:1628] "Image garbage collection succeeded" Feb 17 12:58:04.551318 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:58:04.551239 2562 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["test-ns-feast/feast-test-apply-9lk6h"] Feb 17 12:58:04.551718 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:58:04.551456 2562 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5e0da837-f776-4a61-a361-e3a2c5f4a750" containerName="registry" Feb 17 12:58:04.551718 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:58:04.551485 2562 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e0da837-f776-4a61-a361-e3a2c5f4a750" containerName="registry" Feb 17 12:58:04.551718 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:58:04.551539 2562 memory_manager.go:356] "RemoveStaleState removing state" podUID="5e0da837-f776-4a61-a361-e3a2c5f4a750" containerName="registry" Feb 17 12:58:04.554352 ip-10-0-132-113 
kubenswrapper[2562]: I0217 12:58:04.554329 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="test-ns-feast/feast-test-apply-9lk6h" Feb 17 12:58:04.556789 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:58:04.556746 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"test-ns-feast\"/\"kube-root-ca.crt\"" Feb 17 12:58:04.556881 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:58:04.556821 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"test-ns-feast\"/\"feast-credit-scoring-dockercfg-p26jk\"" Feb 17 12:58:04.557853 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:58:04.557839 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"test-ns-feast\"/\"openshift-service-ca.crt\"" Feb 17 12:58:04.559853 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:58:04.559831 2562 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["test-ns-feast/feast-test-apply-9lk6h"] Feb 17 12:58:04.660905 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:58:04.660871 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4fnh\" (UniqueName: \"kubernetes.io/projected/1cae3dd9-6496-4641-a4b5-217ea8a4fad1-kube-api-access-n4fnh\") pod \"feast-test-apply-9lk6h\" (UID: \"1cae3dd9-6496-4641-a4b5-217ea8a4fad1\") " pod="test-ns-feast/feast-test-apply-9lk6h" Feb 17 12:58:04.761364 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:58:04.761338 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n4fnh\" (UniqueName: \"kubernetes.io/projected/1cae3dd9-6496-4641-a4b5-217ea8a4fad1-kube-api-access-n4fnh\") pod \"feast-test-apply-9lk6h\" (UID: \"1cae3dd9-6496-4641-a4b5-217ea8a4fad1\") " pod="test-ns-feast/feast-test-apply-9lk6h" Feb 17 12:58:04.768574 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:58:04.768550 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4fnh\" (UniqueName: \"kubernetes.io/projected/1cae3dd9-6496-4641-a4b5-217ea8a4fad1-kube-api-access-n4fnh\") pod \"feast-test-apply-9lk6h\" (UID: \"1cae3dd9-6496-4641-a4b5-217ea8a4fad1\") " pod="test-ns-feast/feast-test-apply-9lk6h" Feb 17 12:58:04.863895 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:58:04.863827 2562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="test-ns-feast/feast-test-apply-9lk6h" Feb 17 12:58:04.976030 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:58:04.976002 2562 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["test-ns-feast/feast-test-apply-9lk6h"] Feb 17 12:58:04.979307 ip-10-0-132-113 kubenswrapper[2562]: W0217 12:58:04.979279 2562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1cae3dd9_6496_4641_a4b5_217ea8a4fad1.slice/crio-335385a5199a448bb6f57521143f77e145bf155205adcd1c623bdafcf298d917 WatchSource:0}: Error finding container 335385a5199a448bb6f57521143f77e145bf155205adcd1c623bdafcf298d917: Status 404 returned error can't find the container with id 335385a5199a448bb6f57521143f77e145bf155205adcd1c623bdafcf298d917 Feb 17 12:58:04.980968 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:58:04.980952 2562 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 12:58:05.377022 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:58:05.376990 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="test-ns-feast/feast-test-apply-9lk6h" event={"ID":"1cae3dd9-6496-4641-a4b5-217ea8a4fad1","Type":"ContainerStarted","Data":"335385a5199a448bb6f57521143f77e145bf155205adcd1c623bdafcf298d917"} Feb 17 12:58:09.388815 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:58:09.388776 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="test-ns-feast/feast-test-apply-9lk6h" event={"ID":"1cae3dd9-6496-4641-a4b5-217ea8a4fad1","Type":"ContainerStarted","Data":"227f391c575f31ad413bcb63f346a000e430df6a239ad4e3b45c571ee30228b2"} Feb 17 12:58:13.401155 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:58:13.401123 2562 generic.go:358] "Generic (PLEG): container finished" podID="1cae3dd9-6496-4641-a4b5-217ea8a4fad1" containerID="227f391c575f31ad413bcb63f346a000e430df6a239ad4e3b45c571ee30228b2" exitCode=0 Feb 17 12:58:13.401531 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:58:13.401191 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="test-ns-feast/feast-test-apply-9lk6h" event={"ID":"1cae3dd9-6496-4641-a4b5-217ea8a4fad1","Type":"ContainerDied","Data":"227f391c575f31ad413bcb63f346a000e430df6a239ad4e3b45c571ee30228b2"} Feb 17 12:58:14.405187 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:58:14.405156 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="test-ns-feast/feast-test-apply-9lk6h" event={"ID":"1cae3dd9-6496-4641-a4b5-217ea8a4fad1","Type":"ContainerStarted","Data":"6cd595f4e4be43b68642f28d45e19da76cfca2b2abc1f7c342ab365ccdb46ba3"} Feb 17 12:58:14.418902 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:58:14.418864 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="test-ns-feast/feast-test-apply-9lk6h" podStartSLOduration=6.161530615 podStartE2EDuration="10.418851548s" podCreationTimestamp="2026-02-17 12:58:04 +0000 UTC" firstStartedPulling="2026-02-17 12:58:04.981073479 +0000 UTC m=+715.955836852" lastFinishedPulling="2026-02-17 12:58:09.238394401 +0000 UTC m=+720.213157785" observedRunningTime="2026-02-17 12:58:14.418148901 +0000 UTC m=+725.392912274" watchObservedRunningTime="2026-02-17 12:58:14.418851548 +0000 UTC m=+725.393614942" Feb 17 12:58:20.428952 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:58:20.428916 2562 generic.go:358] "Generic (PLEG): container finished" podID="1cae3dd9-6496-4641-a4b5-217ea8a4fad1" containerID="6cd595f4e4be43b68642f28d45e19da76cfca2b2abc1f7c342ab365ccdb46ba3" exitCode=0 Feb 17 12:58:20.429352 ip-10-0-132-113 
kubenswrapper[2562]: I0217 12:58:20.428986 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="test-ns-feast/feast-test-apply-9lk6h" event={"ID":"1cae3dd9-6496-4641-a4b5-217ea8a4fad1","Type":"ContainerDied","Data":"6cd595f4e4be43b68642f28d45e19da76cfca2b2abc1f7c342ab365ccdb46ba3"} Feb 17 12:58:21.548261 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:58:21.548241 2562 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="test-ns-feast/feast-test-apply-9lk6h" Feb 17 12:58:21.575423 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:58:21.575404 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4fnh\" (UniqueName: \"kubernetes.io/projected/1cae3dd9-6496-4641-a4b5-217ea8a4fad1-kube-api-access-n4fnh\") pod \"1cae3dd9-6496-4641-a4b5-217ea8a4fad1\" (UID: \"1cae3dd9-6496-4641-a4b5-217ea8a4fad1\") " Feb 17 12:58:21.577401 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:58:21.577373 2562 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cae3dd9-6496-4641-a4b5-217ea8a4fad1-kube-api-access-n4fnh" (OuterVolumeSpecName: "kube-api-access-n4fnh") pod "1cae3dd9-6496-4641-a4b5-217ea8a4fad1" (UID: "1cae3dd9-6496-4641-a4b5-217ea8a4fad1"). InnerVolumeSpecName "kube-api-access-n4fnh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 12:58:21.675893 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:58:21.675868 2562 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n4fnh\" (UniqueName: \"kubernetes.io/projected/1cae3dd9-6496-4641-a4b5-217ea8a4fad1-kube-api-access-n4fnh\") on node \"ip-10-0-132-113.ec2.internal\" DevicePath \"\"" Feb 17 12:58:22.435377 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:58:22.435338 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="test-ns-feast/feast-test-apply-9lk6h" event={"ID":"1cae3dd9-6496-4641-a4b5-217ea8a4fad1","Type":"ContainerDied","Data":"335385a5199a448bb6f57521143f77e145bf155205adcd1c623bdafcf298d917"} Feb 17 12:58:22.435377 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:58:22.435377 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="335385a5199a448bb6f57521143f77e145bf155205adcd1c623bdafcf298d917" Feb 17 12:58:22.435377 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:58:22.435353 2562 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="test-ns-feast/feast-test-apply-9lk6h" Feb 17 12:58:22.598334 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:58:22.598309 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/test-ns-feast_feast-test-apply-9lk6h_1cae3dd9-6496-4641-a4b5-217ea8a4fad1/feast-0/0.log" Feb 17 12:58:22.603026 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:58:22.602997 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/test-ns-feast_feast-test-apply-9lk6h_1cae3dd9-6496-4641-a4b5-217ea8a4fad1/feast-1/0.log" Feb 17 12:58:46.685116 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:58:46.685085 2562 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["test-ns-feast/feast-test-apply-9lk6h"] Feb 17 12:58:46.690572 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:58:46.690545 2562 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["test-ns-feast/feast-test-apply-9lk6h"] Feb 17 12:58:47.540720 ip-10-0-132-113 kubenswrapper[2562]: I0217 12:58:47.540688 2562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cae3dd9-6496-4641-a4b5-217ea8a4fad1" path="/var/lib/kubelet/pods/1cae3dd9-6496-4641-a4b5-217ea8a4fad1/volumes" Feb 17 13:04:09.543617 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:04:09.543552 2562 scope.go:117] "RemoveContainer" containerID="227f391c575f31ad413bcb63f346a000e430df6a239ad4e3b45c571ee30228b2" Feb 17 13:05:09.554609 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:05:09.554579 2562 scope.go:117] "RemoveContainer" containerID="6cd595f4e4be43b68642f28d45e19da76cfca2b2abc1f7c342ab365ccdb46ba3" Feb 17 13:09:30.023611 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:30.023573 2562 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7x4hp/must-gather-svkhf"] Feb 17 13:09:30.024160 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:30.024036 2562 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1cae3dd9-6496-4641-a4b5-217ea8a4fad1" containerName="feast-1" Feb 17 13:09:30.024160 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:30.024053 2562 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cae3dd9-6496-4641-a4b5-217ea8a4fad1" containerName="feast-1" Feb 17 13:09:30.024160 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:30.024089 2562 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1cae3dd9-6496-4641-a4b5-217ea8a4fad1" containerName="feast-0" Feb 17 13:09:30.024160 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:30.024098 2562 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cae3dd9-6496-4641-a4b5-217ea8a4fad1" containerName="feast-0" Feb 17 13:09:30.024356 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:30.024203 2562 memory_manager.go:356] "RemoveStaleState removing state" podUID="1cae3dd9-6496-4641-a4b5-217ea8a4fad1" containerName="feast-1" Feb 17 13:09:30.027444 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:30.027418 2562 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7x4hp/must-gather-svkhf" Feb 17 13:09:30.030089 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:30.030064 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-7x4hp\"/\"kube-root-ca.crt\"" Feb 17 13:09:30.030205 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:30.030140 2562 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-7x4hp\"/\"openshift-service-ca.crt\"" Feb 17 13:09:30.030205 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:30.030150 2562 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-7x4hp\"/\"default-dockercfg-vnmgq\"" Feb 17 13:09:30.032482 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:30.032447 2562 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7x4hp/must-gather-svkhf"] Feb 17 13:09:30.098612 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:30.098587 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3a681322-16d9-4f49-85d1-01691f1646b8-must-gather-output\") pod \"must-gather-svkhf\" (UID: \"3a681322-16d9-4f49-85d1-01691f1646b8\") " pod="openshift-must-gather-7x4hp/must-gather-svkhf" Feb 17 13:09:30.098730 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:30.098623 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8s7d\" (UniqueName: \"kubernetes.io/projected/3a681322-16d9-4f49-85d1-01691f1646b8-kube-api-access-p8s7d\") pod \"must-gather-svkhf\" (UID: \"3a681322-16d9-4f49-85d1-01691f1646b8\") " pod="openshift-must-gather-7x4hp/must-gather-svkhf" Feb 17 13:09:30.199026 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:30.199000 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3a681322-16d9-4f49-85d1-01691f1646b8-must-gather-output\") pod \"must-gather-svkhf\" (UID: \"3a681322-16d9-4f49-85d1-01691f1646b8\") " pod="openshift-must-gather-7x4hp/must-gather-svkhf" Feb 17 13:09:30.199129 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:30.199036 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p8s7d\" (UniqueName: \"kubernetes.io/projected/3a681322-16d9-4f49-85d1-01691f1646b8-kube-api-access-p8s7d\") pod \"must-gather-svkhf\" (UID: \"3a681322-16d9-4f49-85d1-01691f1646b8\") " pod="openshift-must-gather-7x4hp/must-gather-svkhf" Feb 17 13:09:30.199291 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:30.199274 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3a681322-16d9-4f49-85d1-01691f1646b8-must-gather-output\") pod \"must-gather-svkhf\" (UID: \"3a681322-16d9-4f49-85d1-01691f1646b8\") " pod="openshift-must-gather-7x4hp/must-gather-svkhf" Feb 17 13:09:30.206153 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:30.206127 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8s7d\" (UniqueName: \"kubernetes.io/projected/3a681322-16d9-4f49-85d1-01691f1646b8-kube-api-access-p8s7d\") pod \"must-gather-svkhf\" (UID: \"3a681322-16d9-4f49-85d1-01691f1646b8\") " pod="openshift-must-gather-7x4hp/must-gather-svkhf" Feb 17 13:09:30.336429 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:30.336366 2562 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-must-gather-7x4hp/must-gather-svkhf" Feb 17 13:09:30.447967 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:30.447936 2562 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7x4hp/must-gather-svkhf"] Feb 17 13:09:30.450975 ip-10-0-132-113 kubenswrapper[2562]: W0217 13:09:30.450948 2562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a681322_16d9_4f49_85d1_01691f1646b8.slice/crio-cadae7f3506f0885e0230c63c271afbbe8e7ec64e299cf2e6599c4d60779fe96 WatchSource:0}: Error finding container cadae7f3506f0885e0230c63c271afbbe8e7ec64e299cf2e6599c4d60779fe96: Status 404 returned error can't find the container with id cadae7f3506f0885e0230c63c271afbbe8e7ec64e299cf2e6599c4d60779fe96 Feb 17 13:09:30.452786 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:30.452770 2562 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 13:09:31.077599 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:31.077563 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7x4hp/must-gather-svkhf" event={"ID":"3a681322-16d9-4f49-85d1-01691f1646b8","Type":"ContainerStarted","Data":"cadae7f3506f0885e0230c63c271afbbe8e7ec64e299cf2e6599c4d60779fe96"} Feb 17 13:09:32.082410 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:32.082366 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7x4hp/must-gather-svkhf" event={"ID":"3a681322-16d9-4f49-85d1-01691f1646b8","Type":"ContainerStarted","Data":"fc992f4ec284ec2e75eab1847ac89870b50c99f252b307fe1e22392d39e19df9"} Feb 17 13:09:32.082410 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:32.082416 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7x4hp/must-gather-svkhf" event={"ID":"3a681322-16d9-4f49-85d1-01691f1646b8","Type":"ContainerStarted","Data":"5847131ee279d49ff70a2d94135bd807b7e48f43ef61596b3dcdbc3341eff18d"} Feb 17 13:09:32.097411 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:32.097369 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7x4hp/must-gather-svkhf" podStartSLOduration=1.2332299230000001 podStartE2EDuration="2.097356284s" podCreationTimestamp="2026-02-17 13:09:30 +0000 UTC" firstStartedPulling="2026-02-17 13:09:30.452894402 +0000 UTC m=+1401.427657775" lastFinishedPulling="2026-02-17 13:09:31.31702076 +0000 UTC m=+1402.291784136" observedRunningTime="2026-02-17 13:09:32.095457782 +0000 UTC m=+1403.070221177" watchObservedRunningTime="2026-02-17 13:09:32.097356284 +0000 UTC m=+1403.072119679" Feb 17 13:09:32.784501 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:32.784452 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_global-pull-secret-syncer-xk97p_f84c60a8-1cec-4e96-8e96-b52be431e4ed/global-pull-secret-syncer/0.log" Feb 17 13:09:32.831804 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:32.831777 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_konnectivity-agent-4cf5r_26e5edd9-76b4-4050-a8bc-3e88e1993210/konnectivity-agent/0.log" Feb 17 13:09:32.953649 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:32.953621 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_kube-apiserver-proxy-ip-10-0-132-113.ec2.internal_789ba31dc9b07674daaa230ccc26ae9e/haproxy/0.log" Feb 17 13:09:36.111908 ip-10-0-132-113 kubenswrapper[2562]: I0217 
13:09:36.111881 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-sxsff_30b1e3ce-3b31-4b13-aeb2-3d6818e33206/node-exporter/0.log" Feb 17 13:09:36.132998 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:36.132969 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-sxsff_30b1e3ce-3b31-4b13-aeb2-3d6818e33206/kube-rbac-proxy/0.log" Feb 17 13:09:36.153913 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:36.153884 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-sxsff_30b1e3ce-3b31-4b13-aeb2-3d6818e33206/init-textfile/0.log" Feb 17 13:09:37.904392 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:37.904332 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-console_networking-console-plugin-5b897fd5fc-9ppd9_ebe280b5-2859-4a11-b980-08a9489a08c6/networking-console-plugin/0.log" Feb 17 13:09:39.482635 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:39.482600 2562 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7x4hp/perf-node-gather-daemonset-6b5fv"] Feb 17 13:09:39.487459 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:39.487440 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7x4hp/perf-node-gather-daemonset-6b5fv" Feb 17 13:09:39.494698 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:39.494672 2562 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7x4hp/perf-node-gather-daemonset-6b5fv"] Feb 17 13:09:39.577575 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:39.577542 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7a4d55d1-470c-4246-aa75-39b180c9304b-sys\") pod \"perf-node-gather-daemonset-6b5fv\" (UID: \"7a4d55d1-470c-4246-aa75-39b180c9304b\") " pod="openshift-must-gather-7x4hp/perf-node-gather-daemonset-6b5fv" Feb 17 13:09:39.577759 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:39.577588 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/7a4d55d1-470c-4246-aa75-39b180c9304b-proc\") pod \"perf-node-gather-daemonset-6b5fv\" (UID: \"7a4d55d1-470c-4246-aa75-39b180c9304b\") " pod="openshift-must-gather-7x4hp/perf-node-gather-daemonset-6b5fv" Feb 17 13:09:39.577759 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:39.577673 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a4d55d1-470c-4246-aa75-39b180c9304b-lib-modules\") pod \"perf-node-gather-daemonset-6b5fv\" (UID: \"7a4d55d1-470c-4246-aa75-39b180c9304b\") " pod="openshift-must-gather-7x4hp/perf-node-gather-daemonset-6b5fv" Feb 17 13:09:39.577759 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:39.577705 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrll2\" (UniqueName: \"kubernetes.io/projected/7a4d55d1-470c-4246-aa75-39b180c9304b-kube-api-access-vrll2\") pod \"perf-node-gather-daemonset-6b5fv\" (UID: \"7a4d55d1-470c-4246-aa75-39b180c9304b\") " pod="openshift-must-gather-7x4hp/perf-node-gather-daemonset-6b5fv" Feb 17 13:09:39.577759 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:39.577739 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"podres\" (UniqueName: \"kubernetes.io/host-path/7a4d55d1-470c-4246-aa75-39b180c9304b-podres\") pod \"perf-node-gather-daemonset-6b5fv\" (UID: \"7a4d55d1-470c-4246-aa75-39b180c9304b\") " pod="openshift-must-gather-7x4hp/perf-node-gather-daemonset-6b5fv" Feb 17 13:09:39.678340 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:39.678303 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a4d55d1-470c-4246-aa75-39b180c9304b-lib-modules\") pod \"perf-node-gather-daemonset-6b5fv\" (UID: \"7a4d55d1-470c-4246-aa75-39b180c9304b\") " pod="openshift-must-gather-7x4hp/perf-node-gather-daemonset-6b5fv" Feb 17 13:09:39.678340 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:39.678339 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vrll2\" (UniqueName: \"kubernetes.io/projected/7a4d55d1-470c-4246-aa75-39b180c9304b-kube-api-access-vrll2\") pod \"perf-node-gather-daemonset-6b5fv\" (UID: \"7a4d55d1-470c-4246-aa75-39b180c9304b\") " pod="openshift-must-gather-7x4hp/perf-node-gather-daemonset-6b5fv" Feb 17 13:09:39.678579 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:39.678369 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/7a4d55d1-470c-4246-aa75-39b180c9304b-podres\") pod \"perf-node-gather-daemonset-6b5fv\" (UID: \"7a4d55d1-470c-4246-aa75-39b180c9304b\") " pod="openshift-must-gather-7x4hp/perf-node-gather-daemonset-6b5fv" Feb 17 13:09:39.678579 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:39.678405 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7a4d55d1-470c-4246-aa75-39b180c9304b-sys\") pod \"perf-node-gather-daemonset-6b5fv\" (UID: \"7a4d55d1-470c-4246-aa75-39b180c9304b\") " pod="openshift-must-gather-7x4hp/perf-node-gather-daemonset-6b5fv" Feb 17 13:09:39.678579 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:39.678439 2562 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/7a4d55d1-470c-4246-aa75-39b180c9304b-proc\") pod \"perf-node-gather-daemonset-6b5fv\" (UID: \"7a4d55d1-470c-4246-aa75-39b180c9304b\") " pod="openshift-must-gather-7x4hp/perf-node-gather-daemonset-6b5fv" Feb 17 13:09:39.678579 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:39.678517 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7a4d55d1-470c-4246-aa75-39b180c9304b-sys\") pod \"perf-node-gather-daemonset-6b5fv\" (UID: \"7a4d55d1-470c-4246-aa75-39b180c9304b\") " pod="openshift-must-gather-7x4hp/perf-node-gather-daemonset-6b5fv" Feb 17 13:09:39.678579 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:39.678517 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a4d55d1-470c-4246-aa75-39b180c9304b-lib-modules\") pod \"perf-node-gather-daemonset-6b5fv\" (UID: \"7a4d55d1-470c-4246-aa75-39b180c9304b\") " pod="openshift-must-gather-7x4hp/perf-node-gather-daemonset-6b5fv" Feb 17 13:09:39.678579 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:39.678559 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/7a4d55d1-470c-4246-aa75-39b180c9304b-podres\") pod \"perf-node-gather-daemonset-6b5fv\" (UID: \"7a4d55d1-470c-4246-aa75-39b180c9304b\") " 
pod="openshift-must-gather-7x4hp/perf-node-gather-daemonset-6b5fv" Feb 17 13:09:39.678579 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:39.678562 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/7a4d55d1-470c-4246-aa75-39b180c9304b-proc\") pod \"perf-node-gather-daemonset-6b5fv\" (UID: \"7a4d55d1-470c-4246-aa75-39b180c9304b\") " pod="openshift-must-gather-7x4hp/perf-node-gather-daemonset-6b5fv" Feb 17 13:09:39.685356 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:39.685332 2562 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrll2\" (UniqueName: \"kubernetes.io/projected/7a4d55d1-470c-4246-aa75-39b180c9304b-kube-api-access-vrll2\") pod \"perf-node-gather-daemonset-6b5fv\" (UID: \"7a4d55d1-470c-4246-aa75-39b180c9304b\") " pod="openshift-must-gather-7x4hp/perf-node-gather-daemonset-6b5fv" Feb 17 13:09:39.700722 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:39.700695 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-p2fjr_773c39a0-9562-40a2-a21c-4ae429c7b782/dns/0.log" Feb 17 13:09:39.720980 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:39.720960 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-p2fjr_773c39a0-9562-40a2-a21c-4ae429c7b782/kube-rbac-proxy/0.log" Feb 17 13:09:39.764573 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:39.764523 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-h4hcc_d9f14b23-72e9-4631-b0c5-d568aca52c29/dns-node-resolver/0.log" Feb 17 13:09:39.799967 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:39.799944 2562 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7x4hp/perf-node-gather-daemonset-6b5fv" Feb 17 13:09:39.928957 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:39.928929 2562 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7x4hp/perf-node-gather-daemonset-6b5fv"] Feb 17 13:09:39.932109 ip-10-0-132-113 kubenswrapper[2562]: W0217 13:09:39.932079 2562 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7a4d55d1_470c_4246_aa75_39b180c9304b.slice/crio-c6a7103f781a6b3313af179f0c194c138fe0a59c79bacffc8228c0a19514565c WatchSource:0}: Error finding container c6a7103f781a6b3313af179f0c194c138fe0a59c79bacffc8228c0a19514565c: Status 404 returned error can't find the container with id c6a7103f781a6b3313af179f0c194c138fe0a59c79bacffc8228c0a19514565c Feb 17 13:09:40.109396 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:40.109324 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7x4hp/perf-node-gather-daemonset-6b5fv" event={"ID":"7a4d55d1-470c-4246-aa75-39b180c9304b","Type":"ContainerStarted","Data":"3cca193a170fdd9ef31385dae12a42faf8d6313846a0ed8670fc90b255668f7d"} Feb 17 13:09:40.109396 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:40.109360 2562 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7x4hp/perf-node-gather-daemonset-6b5fv" event={"ID":"7a4d55d1-470c-4246-aa75-39b180c9304b","Type":"ContainerStarted","Data":"c6a7103f781a6b3313af179f0c194c138fe0a59c79bacffc8228c0a19514565c"} Feb 17 13:09:40.109602 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:40.109454 2562 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-must-gather-7x4hp/perf-node-gather-daemonset-6b5fv" Feb 17 13:09:40.125576 ip-10-0-132-113 
kubenswrapper[2562]: I0217 13:09:40.125537 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7x4hp/perf-node-gather-daemonset-6b5fv" podStartSLOduration=1.125523632 podStartE2EDuration="1.125523632s" podCreationTimestamp="2026-02-17 13:09:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:09:40.124794241 +0000 UTC m=+1411.099557638" watchObservedRunningTime="2026-02-17 13:09:40.125523632 +0000 UTC m=+1411.100287026" Feb 17 13:09:40.215336 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:40.215306 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-7rjvn_9f08688a-aac4-4adb-b2bd-90ffe60387e3/node-ca/0.log" Feb 17 13:09:41.163155 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:41.163127 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-6hddw_ed10247d-2496-4512-b5e3-24cf0aa60754/serve-healthcheck-canary/0.log" Feb 17 13:09:41.538003 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:41.537979 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-5nsdw_404bc608-9e88-47b5-a591-0fea2d2e6db0/kube-rbac-proxy/0.log" Feb 17 13:09:41.559619 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:41.559589 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-5nsdw_404bc608-9e88-47b5-a591-0fea2d2e6db0/exporter/0.log" Feb 17 13:09:41.581682 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:41.581664 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-5nsdw_404bc608-9e88-47b5-a591-0fea2d2e6db0/extractor/0.log" Feb 17 13:09:46.121398 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:46.121373 2562 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-7x4hp/perf-node-gather-daemonset-6b5fv" Feb 17 13:09:46.645147 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:46.645121 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-tsksj_ea2f2f4d-70c4-4bc8-9640-5b18d2e41173/kube-multus-additional-cni-plugins/0.log" Feb 17 13:09:46.665998 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:46.665975 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-tsksj_ea2f2f4d-70c4-4bc8-9640-5b18d2e41173/egress-router-binary-copy/0.log" Feb 17 13:09:46.688270 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:46.688245 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-tsksj_ea2f2f4d-70c4-4bc8-9640-5b18d2e41173/cni-plugins/0.log" Feb 17 13:09:46.708475 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:46.708449 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-tsksj_ea2f2f4d-70c4-4bc8-9640-5b18d2e41173/bond-cni-plugin/0.log" Feb 17 13:09:46.728770 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:46.728754 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-tsksj_ea2f2f4d-70c4-4bc8-9640-5b18d2e41173/routeoverride-cni/0.log" Feb 17 13:09:46.749029 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:46.749010 2562 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-tsksj_ea2f2f4d-70c4-4bc8-9640-5b18d2e41173/whereabouts-cni-bincopy/0.log" Feb 17 13:09:46.769383 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:46.769363 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-tsksj_ea2f2f4d-70c4-4bc8-9640-5b18d2e41173/whereabouts-cni/0.log" Feb 17 13:09:46.947910 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:46.947860 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g5h5b_b6d826a0-9201-49f4-8abb-ecf10f525a7e/kube-multus/0.log" Feb 17 13:09:47.069169 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:47.069146 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-j6r5z_7bb13ada-375a-497d-aced-02307525f449/network-metrics-daemon/0.log" Feb 17 13:09:47.088908 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:47.088889 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-j6r5z_7bb13ada-375a-497d-aced-02307525f449/kube-rbac-proxy/0.log" Feb 17 13:09:48.519427 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:48.519354 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mhj7q_7282992d-70a3-40ce-96b1-03529732a700/ovn-controller/0.log" Feb 17 13:09:48.547450 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:48.547416 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mhj7q_7282992d-70a3-40ce-96b1-03529732a700/ovn-acl-logging/0.log" Feb 17 13:09:48.569430 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:48.569400 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mhj7q_7282992d-70a3-40ce-96b1-03529732a700/kube-rbac-proxy-node/0.log" Feb 17 13:09:48.593026 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:48.593001 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mhj7q_7282992d-70a3-40ce-96b1-03529732a700/kube-rbac-proxy-ovn-metrics/0.log" Feb 17 13:09:48.615893 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:48.615868 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mhj7q_7282992d-70a3-40ce-96b1-03529732a700/northd/0.log" Feb 17 13:09:48.641621 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:48.641592 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mhj7q_7282992d-70a3-40ce-96b1-03529732a700/nbdb/0.log" Feb 17 13:09:48.663441 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:48.663417 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mhj7q_7282992d-70a3-40ce-96b1-03529732a700/sbdb/0.log" Feb 17 13:09:48.863867 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:48.863836 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mhj7q_7282992d-70a3-40ce-96b1-03529732a700/ovnkube-controller/0.log" Feb 17 13:09:49.790657 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:49.790622 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-target-v7bfv_89c079c8-3da1-4b99-a30a-3d749cf8f842/network-check-target-container/0.log" Feb 17 13:09:50.758212 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:50.758189 2562 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-network-operator_iptables-alerter-wkhkn_5c5191ba-7e5f-4e80-8ce5-eac97ea608dc/iptables-alerter/0.log" Feb 17 13:09:51.327192 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:51.327160 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-l2dwp_4d5674be-49b1-49d0-b07d-656b923994f0/tuned/0.log" Feb 17 13:09:54.413274 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:54.413242 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-csi-drivers_aws-ebs-csi-driver-node-6mljf_47c476ed-1111-4f04-920f-dc8a70a378a0/csi-driver/0.log" Feb 17 13:09:54.433680 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:54.433655 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-csi-drivers_aws-ebs-csi-driver-node-6mljf_47c476ed-1111-4f04-920f-dc8a70a378a0/csi-node-driver-registrar/0.log" Feb 17 13:09:54.453913 ip-10-0-132-113 kubenswrapper[2562]: I0217 13:09:54.453886 2562 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-csi-drivers_aws-ebs-csi-driver-node-6mljf_47c476ed-1111-4f04-920f-dc8a70a378a0/csi-liveness-probe/0.log"